
Wayne State University

Aim Higher

Nov 2 / Peter Hoffmann

The physics of election predictions

As we are coming down the final stretch (and it feels like a stretch) of the presidential election campaign, it may be of interest to see how various websites (NYTimes, 538, Princeton Election Consortium, etc.) predict the outcome of the election. Election predictions are part art and part science. The art is to interpret the polls and come up with an averaging mechanism that provides an average margin and uncertainty for each state. Once that is done, coming up with an overall prediction is relatively straightforward.
One common way is to use a Monte Carlo simulation, a technique that originated in nuclear physics as part of the Manhattan Project. A Monte Carlo simulation, as the name implies, rolls the roulette wheel (metaphorically speaking) to simulate many possible outcomes and then builds up a distribution of how often each outcome occurs. In our case, we use this distribution to calculate the win probabilities for the candidates.
Let’s take an example. Let’s say that the current (as of 11/02/2016) average margin for Hillary Clinton to win Michigan is 7 ± 8. This means that the polls show Clinton on average 7 points ahead of Trump, with an uncertainty (standard deviation) of 8 points. We therefore simulate the election outcome in Michigan by picking a random number from a normal distribution centered at 7 with a standard deviation of 8. If the random number is positive, we give the state to Clinton; if negative, to Trump. Whoever wins gets Michigan’s 16 electoral votes.
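In code, a single simulated Michigan outcome might look like this minimal sketch (using numpy; the numbers are just the example above):

import numpy.random as rnd

margin, error, ev = 7.0, 8.0, 16                 # Clinton's average polling margin, its standard deviation, Michigan's electoral votes
simulated_margin = rnd.normal(margin, error)     # one random draw from the poll distribution
clinton_ev = ev if simulated_margin > 0 else 0   # winner takes all of Michigan's 16 electoral votes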
Now, if we do the same for all states, and add up the electoral votes, we have one randomly simulated election based on the current poll numbers. Just one simulation will not make a very good prediction: Being based on random numbers, single runs of the prediction machine fluctuate a lot. The trick is to simulate the election many times (say 100,000 times) and build up a histogram. Many randomly simulated elections will then converge to a statistically very stable distribution. This distribution is then the basis to provide win probabilities for the candidates.
In summary, election predictions are based on averaged polls and (most of the time) simulated elections. For US presidential elections, the peculiar calculus of electoral votes comes into play, which shapes the prediction in an interesting way. For example, even though the probability to win Michigan or Maine may be about the same, Michigan will add 16 votes to the winner’s column, while Maine will add only 4 (ignoring here the split electoral votes, which apply to only two states: Maine and Nebraska).

This little exercise was a project in my PHY 6750 “Applied Computational Methods” course. A timely, fun project for students.

Here then is today’s (as of 11/02/16) prediction, based on averaged poll numbers from the New York Times and 100,000 simulated elections:

[Figure 1: Histogram of Clinton’s electoral vote totals over 100,000 simulated elections; outcomes at or above 270 EV (Clinton wins) in blue, below 270 (Trump wins) in red.]

In case you cannot read it: the current prediction is 92.4% Clinton, 7.6% Trump. Average electoral votes for Clinton: 321 ± 36; most probable: 327.

And here is the program (in Python), if you want to try it yourself (you’ll need a CSV file with state names, margins, uncertainties and electoral votes):

Note: Python is sensitive to indentation, so if you copy the code, make sure the indents inside the for loops are preserved.

"""
Created on Mon Oct 10 09:35:11 2016

@author: Peter Hoffmann
"""

import numpy as np
import numpy.random as rnd
import matplotlib.pyplot as plt
from time import time

t1 = time()

# CSV file needed, with columns: state name, polling margin (positive for
# Clinton, negative for Trump), standard deviation, electoral votes
fn = 'StatePoll11-02-16.csv'
dt = np.dtype([('State', 'S15'), ('Margin', 'f4'), ('Error', 'f4'), ('EV', 'f4')])
result = np.loadtxt(open(fn, 'rb'), dt, delimiter=',', skiprows=1)
State = result['State'][:]
Margin = result['Margin'][:]
Error = result['Error'][:]
EV = result['EV'][:]
nD = len(EV)

N = 100000  # number of simulated elections
EVTot = np.zeros(N)

DaysLeft = 5 + 15/24  # as of 9:30 am, 11/02/2016
Error = Error*1.02**DaysLeft  # inflate uncertainty: 2% random walk per day until the election
WinState = np.zeros(nD)

for i in range(N):  # this is the core of the program
    # array of normally distributed random margins for all states
    p = np.array([rnd.normal(m, e, 1) for (m, e) in zip(Margin, Error)]).ravel()
    EVTot[i] = sum((p > 0)*EV)  # total EVs, assuming winner takes all
    WinState = WinState + (p > 0)*1  # tally wins to get win probabilities for each state

wi = 1
em = np.min(EVTot)
# build histogram (upper edge extended by one bin so the maximum outcome is included)
(hist, edges) = np.histogram(EVTot, bins=np.arange(em, np.max(EVTot) + wi, wi))
edges = edges[:-1]
hil = edges >= 270  # boolean masks (not lists), so they index the arrays correctly
don = edges < 270

plt.figure()
plt.bar(edges[hil] - wi/2, hist[hil], width=wi, color='b')
plt.bar(edges[don] - wi/2, hist[don], width=wi, color='r')  # plot histogram

Chance_hil = np.sum(hist[hil])/N  # calculate overall win probabilities
Chance_don = np.sum(hist[don])/N

sc = 'Chance for Clinton=' + str(np.round(Chance_hil*100, 2)) + '%'
st = 'Chance for Trump=' + str(np.round(Chance_don*100, 2)) + '%'
mev = 'Mean EV=' + str(np.mean(EVTot)) + '+-' + str(np.std(EVTot))
mp = 'Most probable EV=' + str(edges[hist == np.max(hist)][0])  # print info into histogram plot

m = max(hist)

plt.text(em, m - m/10, sc)
plt.text(em, m - m/5, st)
plt.text(em, m - 3*m/10, mev)
plt.text(em, m - 4*m/10, mp)

for i in range(nD):  # state-by-state data
    print(State[i], 'EV=', EV[i], 'cmlt EVs D=', np.sum(EV[:i+1]),
          'cmlt EVs R=', 538 - np.sum(EV[:i+1]),
          '- D:', WinState[i]/N*100, ' - R:', 100 - WinState[i]/N*100)

print('Run time=', time() - t1)
plt.show()  # display the histogram

May 13 / Peter Hoffmann

Physics, Aging and Immortality

When I published “Life’s Ratchet” two years ago, I was focused on how life can create and sustain highly ordered systems in the presence of the surrounding molecular chaos – in particular, how molecular ratchets “extract order from chaos”. To my surprise, the book generated great interest in the area of aging research. Aging, as Ed Lakatta, chief of the Cardiovascular Science Laboratory at the NIH Institute on Aging, puts it, extracts “chaos from order”.

[Figure: “Chaos from Order”]

As part of this surprising interest, I was recently invited to write an article for the literary science magazine Nautilus (which I highly recommend). My article on aging appeared online yesterday, with the provocative (not chosen by me) title “Physics makes aging inevitable, not biology”. In case you want to know, my title was “Aging: where physics meets biology”, which is probably more boring, but less provocative.

As it was not possible to cram every thought on the subject into the 2000 words of the article, I will write a few additional thoughts on it here on my blog.

To start with, I looked at the comments on the article. Some recurrent themes in the comments are that (1) we are open thermodynamic systems and therefore not subject to an increase in entropy (as we can always get more low-entropy energy from the environment), (2) our cells have repair systems that can take care of any damage that may happen, and (3) there are “immortal” cells and organisms, contradicting my claim that aging is inevitable.

(1) and (2) have essentially the same answer:

It is absolutely correct that we are open systems. This is what I describe in detail in “Life’s Ratchet”. The intake of low entropy energy is the reason our molecular machinery can extract order from molecular chaos. However, molecular chaos is always present, and microscopically, molecules in our cells are continuously damaged. Unlike other thermodynamically open, self-organized systems, such as a hurricane, living systems are incredibly tightly controlled systems consisting of complex interlocking feedback and regulation loops. These feedback loops are all reliant on superbly adapted and constructed molecular machines, undamaged DNA to provide the blueprint, and timely and accurate regulation and signaling. These systems interact across a hierarchy of molecules, organelles, cells, cell-cell interaction, tissues, organs and organisms. They have a lot of reserve, redundancy and repair systems built-in.

However, eventually some of these systems become subtly damaged. Energy supply slows down, signaling chains become disrupted, feedback loop timing is a bit off, damaged molecules are not cleared from cells and accumulate, molecular machines lose function or fail to be activated. This loss of function causes additional loss of function in other systems, because of the mutual interdependence of all systems in the organism. This leads to an increasing cascade of failure. The start of this process is simply a question of probability across a vast number of cells and functions. You can try to prevent one system from failing, but there are plenty of others that will fail instead. And the system that one has tried to prevent from failing is dependent on the others, so it will not remain unaffected in the long run anyway.

The repair systems in our cells are superb – they allow us to live to 80+ years. We live longer than any mammal of comparable size and heart rate. Could we live even longer? In principle, the repair systems could be improved, but the sheer complexity makes this a prospect that will take many, many years. But we will always be subject to the game of probabilities, which we will lose in the end.

(3) Some readers have pointed out that there are “immortal” organisms. One thing one notices about these immortal organisms is that they are all very simple, typically single-celled or at least highly undifferentiated. Examples are bacteria, but also creatures like the so-called “immortal jellyfish”. The immortal jellyfish goes through a stage where it reverses its developmental process, i.e. it reverts from an adult back to a larval stage, which can then develop into new adults. This can, it seems, go on indefinitely, rendering the jellyfish “immortal”. This, at first sight, seems amazing. However, in some sense, humans do the same thing! Our germ lines are also “immortal”. But this is different from the aging of an adult, complex individual. Compared to maintaining molecular and systemic order in a complex organism over many years, keeping the DNA in a human egg reasonably stable is a relatively easy task. Nevertheless, even there, there is degradation over time. This is the main reason that we generally reproduce at a young age and that birth defects become more prevalent for aging mothers and fathers. As for the jellyfish, individual adult jellyfish are clearly not immortal, as they have to “die” to revert back to a larval stage. Also, most likely not all jellyfish make this conversion successfully, so “immortality” holds at the population level, not at the individual level. But if that is the definition of “immortality”, then humans are immortal as well. But we would not usually use that definition!

As an important aside, regular (somatic) human cells can also become immortal. This is called cancer. Cancer and aging are two sides of the same molecular-disorder coin. If our cells did not die at some point, the accumulating molecular disorder and DNA damage would continuously increase the probability of cells going rogue and turning cancerous. The cost of keeping our cells in line is tight regulation of cell division, growth and differentiation. The cost of this tight regulation, faced with the onslaught of thermal and chemical damage, is aging.

Jun 15 / Peter Hoffmann

Bad teaching advice from “Russian Dictator”

Today’s Chronicle article on Carl Wieman’s ideas to replace student evaluations of teaching with “teaching inventories” of actually used teaching methods has certainly struck a chord. Reading the comments, I was struck by the response of one reader, calling himself “Russian dictator”, which expresses a view that I think is still quite prevalent among some of our faculty. I have to admit I used to occasionally harbor such ideas myself.  However, I don’t anymore – and so I resorted to the unusual step of actually responding. Here, first, is the “Russian dictator”:

“”The research literature, Mr. Wieman said, suggests the most effective teaching method for science, technology, engineering, and mathematics courses — the STEM fields — is “active” learning, in which students engage in problem-solving activities during class time.”

In Russia, where STEM education has traditionally been at a high level, this active learning has been achieved by assigning students an extensive and meaningful homework. Class time wasn’t used for tutoring. My understanding of the US educational system 50 years ago is that students were likewise treated seriously back then.

15 years ago Mr. Wieman’s research in cold atoms was listened to by the whole community of physicists. Too bad he decided to become irrelevant by devoting his efforts to charlatanry.”

This is my response. I welcome any comments you may have.

It is not “charlatanry”. The efficacy of many of these methods has been shown over and over. There are reams of validated studies. (Note added: see for example Hake 1998 or Wieman 2011.)

There are huge differences between the students that are OK with the teaching model you describe and the students we have in 21st century America.
1) The students you talk about were highly selected, prepared and able to dedicate their full time to their studies. Even so, I wager that less than 50% finished. At the European university I attended, completion rates for engineering, physics, chemistry, etc. were in the 30% range. Intro courses were weeder courses.
2) The numbers of students going into STEM were higher. From my graduating high school class, we had about 10 going into physics (only 2 finished). In the US today, you are hard-pressed to find one student out of 10 high schools going into physics.
3) University was free or much cheaper. Our students have to work. Many are non-traditional and have heavy family obligations. Many commute and therefore have little opportunity to discuss homework with fellow students.
4) Most of us have been OK with the model you describe. We are the survivors of “natural selection”. Most of us could learn from just about any medium or instructor. However, I wager that most of us also had the leisure and time to devote to our studies. I did. It is a big mistake to assume that the reality of our students is the same.

You can assign all the extensive homework you want, but if students don’t have the time and opportunities to exchange ideas with other students or instructors, they will not be able to do the homework, or at least not truly understand it, and will therefore learn nothing. Worse, under pressure to pass the course and under pressure in their daily lives, they will just find solutions on the internet.

Sep 18 / Peter Hoffmann

A visit by a Nobel prize winner (before I forget…)

Carl Wieman visited us this week to talk to us about better ways to teach science. It was a great visit and before I forget, here are a few points that stick in my mind:

  • State the problem first: In science, we typically proceed by first stating all the principles and concepts and then giving problems. We go from the totally alien and abstract to the concrete. By doing this, we make sure students cannot absorb what we are talking about and don’t really care either. Then we try to interest them after the fact. Turn the whole thing around. State an interesting problem, observation, research question, something from the news first. Ask students how they would solve the problem or explain the observation. Collect some answers. Now you have their attention. Then proceed to work out the concepts.
  • Lecturing is overrated: Students can read – what they really need is guided practice to learn the kind of thinking skills we take for granted. This takes careful thought and course design.
  • Just write those dang learning objectives! I know many faculty think of these as some administration ploy to make them do more useless work. Or to satisfy some accreditation agency. But, frankly, how are students supposed to know what you want them to learn (skills and content) without making it explicit? The typical student complaints we hear (and which are well justified) include: “The test was nothing like the homework”, “I didn’t get a study guide”, “It’s not clear what the professor wants” etc. Why not fix this problem? Learning outcomes or objectives are very similar to study guides (what do you want students to be able to do?) and they should guide the entire design of a course, from teaching activities to ongoing assessment of how students learn to exams, homework etc.
  • There are two levels of learning objectives in a course: “Big picture” outcomes go on the first page of the syllabus. They are the overall outcomes for the entire course. For example, “students will be able to combine various concepts to solve complex problems in mechanics”. Then there are lesson-level outcomes, such as “students will be able to use Newton’s third law to analyze collisions”. These can be posted on Blackboard for each week or lesson.
  • The right level for changes in teaching is the department. The Dean’s office can beg and cajole, but real change will happen at the department level, supported by the chair. Are we ready for this?
  • Many of Wieman’s ideas and examples can be found here: http://www.cwsei.ubc.ca/
Sep 2 / Peter Hoffmann

Welcome back, faculty, to teaching and (hopefully) learning

The fall semester has begun and we are looking down a 15-week stretch of hope and fear. We are armed with the best intentions and an excitement for our topic. We have reviewed our notes and made them even better. We are going to be brilliant. But as in previous semesters, we harbor these nagging doubts: Are our students going to learn what we want them to learn?

At the beginning of every semester, it is worth reviewing what we really want our students to learn. The following wish list seems quite universal (Fink 2013):
• “Retain information after the course is over”
• “Develop an ability to transfer knowledge to novel situations”
• “Develop skill in thinking or problem solving”
• Gain motivation for further learning and a change in attitude towards learning

But many times these things do not happen – or at least not to the extent we hoped for. How often do we hear the lament: “But they should have learned X in course Y!” We are concerned about low motivation, bad attendance, low time investment and limited intellectual engagement among students. But, at the same time, while willing to tackle these issues, we often do not know how to overcome the obstacles. One thing is clear: a straight-up lecture-and-homework course is not going to solve these problems. And another thing is clear too: while we may wish that our students were different, they are not. And it’s not all their fault if they can’t retain the material they heard in a lecture or read in a textbook. Yes, some come underprepared, distracted by their busy and complicated lives and often less than motivated. Yet, we know that there are good students coming to us, and we feel that we should do better by them. So, let’s be honest: how much would we retain from a 45-minute colloquium on the “magnetic properties of manganese-doped lanthanides”? Especially if we have no clue what that is to begin with. That’s how our students feel.

Speaking for myself, I used to just lecture and give homework. Then I started to think about the outcomes and spent some time (but not nearly enough) reading the educational literature. And it is just starting to sink in. Many of my colleagues (the current reader excepted, of course) do not read any educational literature. Which is curious: we spend 30–100% of our time teaching, but spend practically no time learning how to do it. If we did the same with our research, we would be out of funding or publications very quickly.

A good start to tackle our problems with teaching is to consider how students experience their educational journey. According to Fink:
• Students feel they are not “self-directed learners”. “They [are] not confident in their ability to approach a problem and figuring it out on their own.”
• The students feel “they [are] not learning as much as they could or should be”
• Many students feel their college teachers “do not really care much about them or about promoting their learning or interacting with them.”
• As a result, students do not “fully or energetically” engage in learning.

What to do? A good start is to take the writing of learning objectives seriously. If we want students to learn how to apply what they learn to “novel situations”, then we should state that explicitly! Why? Because it defines a clear goal that should be reflected in the design of the class, the assignments we give, the feedback we provide, the feedback we solicit from the students and the assessments by which we measure learning.

As an example, can “applying knowledge to novel situations” really be taught by lecturing and giving homework? Think about it! Typically, students hear about a topic for the first time in a lecture, we give them homework or a paper to write, rush through discussing it, and it’s off to the next thing. No coaching or structured exercise, no case study or context-rich problem-solving, no working in teams etc. By contrast, we work completely differently with our research students. We coach them, we have group meetings, we adjust our teaching to meet their needs. And these students are already pre-selected. They already “know”. In our courses, which are full of novice students who do not know, we expect that higher skills, like thinking, transferring knowledge, or developing a thirst for more learning will “just happen”, as long as we assign enough homework.

It is true that the traditional approach seems to have worked for us, but, then again, we were rather atypical students. Otherwise we wouldn’t be here as faculty. Most of us came into college motivated and with a thirst for learning. We had few distractions and studied at a residential college, surrounded by an intellectual culture many of our students don’t experience. We formed our own study groups and had long discussions with fellow students. We “survived” bad instruction because of these things. We are the result of natural selection, a survival of the fittest. And, yes, we are stronger for it. But should we expect all our students to be the same? I don’t think we can afford to. Our students don’t come with the same backgrounds. They don’t always come for the intellectual adventure. Not initially. Hopefully that will change because of our efforts, not despite them. Mostly, our students come because they see college as a path to a better life. Not everybody is a scholar; most of them are here to get a degree. And they pay a lot for it – in money, time and effort. They are not lazy, just overwhelmed and underprepared. Yet, we need them to value writing, science, philosophy and history. Once they are here, we owe them our best shot at teaching them the thinking skills and love of learning they need. It’s not going to be easy, but it’s worth thinking about.

Let me know what you think… Let’s talk about it.

To get started doing that I invite all faculty to this fall’s workshops in the Office of Teaching and Learning, where we will tackle these issues, starting with a talk by Nobel prize winner Carl Wieman on September 16 (http://www.otl.wayne.edu/carl_wieman.php) and a number of workshops and discussions that address these problems head-on, such as “Implementing Learning Objectives Through Classroom Work” on September 19 (where we will have a discussion on how we can change what we do in the classroom to teach students those higher level skills) and “Evidence-Based Teaching Methods” on September 23. See http://www.otl.wayne.edu/pdf/calendars/september_workshops_2014.pdf for the full list.

Maybe some informal groups will form that continue the discussions beyond these workshops, just like we formed our own groups in college tackling fun things like “conformal mapping” or Kant’s “Prolegomena”.

Reference: L. Dee Fink, “Creating Significant Learning Experiences”, Jossey-Bass/John Wiley, 2013.

Jun 29 / Peter Hoffmann

“The two cultures” –Part 2

In my last entry on a re-evaluation of C. P. Snow’s “The two cultures”, I ended in mid-sentence (as pointed out by a colleague), with the question of how scientists and humanists can understand each other, when even scientists from different backgrounds speak quite different languages. To start with, I would agree with Snow that the “scientific culture really is a culture, not only in the intellectual but also in the anthropological sense”. As I stated in my last blog entry on this topic, Snow agrees that “[scientists] need not, and of course do not, always completely understand each other; biologists more often than not will have a pretty hazy idea of contemporary physics” – and, I may add, most physicists have no better idea of biology either – but “there are common attitudes, common standards and patterns of behavior, common approaches and assumptions.” Most interestingly, “this [commonality] goes surprisingly wide and deep. It goes across other mental patterns, such as those of religion or politics or class.”

This is of course what makes science such a great endeavor to be involved in. It doesn’t matter if the person you talk to is rich, poor, old, young, conservative, liberal and, most importantly, black or white, male or female – as long as they speak “science”, they are a scientist. The other stuff is irrelevant. At least that’s the ideal.

The humanities, being closely concerned with human life and culture, don’t quite have the luxury of being so “blind” to these differences. There is by definition a difference between conservative political philosophy or literature and liberal philosophy and literature, or feminist literature versus “chauvinist” literature, etc. There is no “conservative”, “liberal” or “chauvinist” physics. The last time people tried to tie somebody’s physics to his or her religion or cultural background, the premise was utterly stupid, outright embarrassing and unfortunately incredibly destructive (I am talking about the charge that Einstein’s relativity, as well as quantum mechanics, were “Jewish physics”).

What are the attributes of the “culture of science”? Snow talks about an “exacting” culture. An interesting word with connotations of “exact” as well as rigorous, and no-nonsense. Indeed, in their role as scientists (not as private individuals), scientists value clear definitions, statistical over anecdotal evidence, quantitative arguments, consistent theories, and the primacy of empirical evidence and reason over emotions.

The humanities, of course, also form a culture, but a culture that is quite different from the culture of the natural sciences. And that is not a bad thing – the culture of science serves scientists well, while other intellectuals are better served by their own culture. That is not to say that non-scientists should not understand science’s culture or the findings of science, which are universal. Far from it. It just means that there are other ways to think about and approach situations, particularly situations that do not lend themselves to quantitative analysis. As somebody who has not been acculturated as a literary or humanist intellectual, it is somewhat preposterous of me to try to discern the literary/humanist intellectual culture from the outside. There is a danger of preconceived notions. But it is fair to assume that humanists and literary intellectuals are first and foremost concerned with humans: what humans think and feel, how they interact, what societies they form, and how those societies interact with each other. They are observers of the non-tangible, complex, emotional interactions of human beings, using anecdotal evidence as a stand-in for the universal condition. Of course, there is nothing wrong with studying love or hate by scientific means, but ultimately such an approach doesn’t quite reach the core of the thing. It is not really possible to fully describe the emotional states of human beings without evoking these same emotions. One can capture interesting new aspects using science, but not really get to the core of what it “feels like” to be angry or happy. This is where literature has a clear edge over science.

Where then do these two worlds intersect, and is there any reason why they should? They intersect because science, too, is a human endeavor, and because emotions, societies and human actions are connected to the physical reality of the humans themselves and their surroundings. And because the society we live in is, for better or worse, shaped by science and technology. Without understanding the shaper, how can we understand the shaped? Literary/humanist intellectuals should know about science, and scientists should know about the humanities. Snow puts it like this: “Once or twice I have been provoked and have asked the company [of literary intellectuals] how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking about the scientific equivalent of: Have you read a work of Shakespeare?”

It would indeed be interesting to see who would win this particular contest: Would there be more scientists or engineers who never read any Shakespeare (or at least saw a play), or more literary intellectuals who had never heard of the Second Law of Thermodynamics? I don’t know, but I know it would be a shame either way.

Can one be poetic about the Second Law of Thermodynamics, the literary person may ask. Yes, one can:

“The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.”
—OMAR KHAYYAM, 1048–1131

Jun 29 / Peter Hoffmann

The thermodynamics of life

Superficially, there seems to be a contradiction between the emergence of life and the second law of thermodynamics – the law that is colloquially translated as “everything tends towards disorder”. If this were literally so, however, everything around us would be some kind of “mush” with no structure whatsoever. Since we do not observe this (and couldn’t anyway, as we wouldn’t exist), something seems to be wrong with the picture that everything tends towards disorder everywhere and all the time. As I point out in my book, “Life’s Ratchet”, there are many, many processes, even excluding life for the moment, in which structure emerges spontaneously and which do not violate the second law whatsoever. A first step to understanding this seeming paradox is a proper formulation of the second law. It is not simply that “everything tends towards disorder”, but rather that the global distribution of energy tends towards more uniformity over time. How uniformly energy is distributed is measured by the concept of entropy. High entropy means that energy is more evenly distributed, while low entropy is associated with energy that is concentrated. Concentrated energy can do work; uniformly distributed energy cannot. Thus, if all energy were uniformly distributed (and entropy at a maximum), no changes could happen anymore. In entropy terms, the second law is sometimes stated as “all processes tend to increase the entropy of the universe”.

This sounds bleak, but the second law does not say that everywhere things become less structured, or that structure cannot emerge spontaneously, or even that an increase in entropy necessarily means a decrease in order or structure. There are counterexamples for all of these scenarios. For example, hard spheres (or marbles) have higher entropy when they are more ordered than when they are randomly stacked. The crystallization of snowflakes increases global entropy (by releasing heat), but creates beautiful order. And so on.

As I argue in my book, new structures, complexity and life exist not despite the second law of thermodynamics, but because of it. Why would that be so? For one thing, the second law determines a “direction” of energy flow – from concentrated to uniformly distributed. Why would that be important? Let’s imagine that the second law were wrong. Then your cold cup of coffee could, for example, spontaneously extract heat from the uniformly heated heat reservoir of the kitchen around it, and suddenly become hot. Why would that be bad? It would be bad, because chemical reactions could not acquire any directionality. Even if an end product had lower energy and higher entropy, it could change back into its higher-energy, lower-entropy predecessor at any time, spontaneously. You couldn’t build any metabolism on something like that! The drive towards higher entropy imposes direction on dynamical processes.

Another, more subtle reason has recently been pointed out by a professor of biophysics at MIT, Jeremy England, who wrote a very interesting paper, titled “Statistical Physics of Self-Replication”, published in the Journal of Chemical Physics in August 2013. The paper was featured in Scientific American, because it is an important step forward in applying thermodynamics and the second law to complex, non-equilibrium situations, such as living objects. Using previous results by other researchers, he first shows that the usual statement of the second law must be amended for non-equilibrium systems, such as living objects. The usual statement of the second law says that the change in entropy is equal to or larger than the transferred heat divided by the temperature. For example, if I heat up some water, I increase its entropy, because I supply heat to it, which makes its molecules move faster, scrambling the energy of the system.
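In symbols, with Q the heat transferred to the system and T its temperature, this familiar (near-equilibrium) form of the second law reads:

\[
\Delta S \;\ge\; \frac{Q}{T}
\]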

In England’s paper, he shows that if a system goes through a transformation (such as a bacterium dividing or DNA replicating), then the change in entropy is larger than just the transferred heat divided by the temperature. As a matter of fact, the more irreversible the transformation is, the larger the minimum change in entropy. This does not seem too surprising, but it has apparently never been formulated in a rigorous way and applied to completely non-equilibrium, irreversible transformations as in this paper. What does this finding mean for life? First of all, it means that living beings, which do irreversible sorts of things all the time (growing, reproducing, writing blogs), increase the entropy of the universe faster than objects that stick close to equilibrium and don’t do anything particularly interesting. In other words, living beings are excellent entropy producers.
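Written out schematically (in notation close to England’s paper, as I read it: Δq is the heat released to the surroundings during the transformation from state I to state II, ΔS_int is the change in the system’s internal entropy, π(·) are the forward and reverse transition probabilities, and β = 1/k_B T), the generalized bound takes roughly the form:

\[
\beta\,\Delta q_{\mathrm{I}\to\mathrm{II}} \;+\; \ln\frac{\pi(\mathrm{II}\to\mathrm{I})}{\pi(\mathrm{I}\to\mathrm{II})} \;+\; \Delta S_{\mathrm{int}} \;\ge\; 0 .
\]

The more irreversible the transformation (the smaller the reverse probability π(II→I)), the more negative the logarithm, and hence the larger the minimum entropy production required from the released heat and the internal entropy change – which is exactly the statement above.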

But, more concretely, applying this finding to a replicating molecule or organism, England shows, as he states, that “basic thermodynamic constraints derived from exact considerations in statistical physics tell us that a self-replicator’s maximum potential fitness is set by how effectively it exploits sources of energy in its environment to catalyze its own reproduction. Thus, the empirical, biological fact that reproductive fitness is intimately linked to efficient metabolism now has a clear and simple basis in physics.” In other words, the most successful organisms are the ones that use energy most effectively (i.e., dissipate energy quickly), and, coupled with the first finding, these are, according to England, either complex organisms that can initiate highly irreversible changes, or, what would also work, simpler systems that are in addition very “prone to spontaneous degradation”. An example of the latter is the comparison of RNA versus DNA. RNA is more fragile, but also more nimble in being changed and replicated. As a result, RNA is more likely to have been part of the origin of life than DNA is.

At one point, however, I must disagree a little. He states that “self-replicators can increase their maximum potential growth rates not only by more effectively fueling their growth via Delta-q [the expelled heat], but also by lowering the cost of their growth via δ [rate of degradation] and Delta-sint [increase in internal entropy]. The less durable or the less organized a self-replicator is, all thing being equal, the less metabolic energy it must harvest at minimum in order to achieve a certain growth rate.” However, does a smaller change in internal entropy (the entropy of the organism) really imply a “less organized” replicator? After all, as I point out in my book, the molecular processes in our cells are not so far from equilibrium. As a matter of fact, they are as close to equilibrium as possible, so as to increase efficiency. This means that living beings may actually minimize internal entropy changes to what is necessary. Even though they are complex, they seek to maximize Delta-q (harvest energy efficiently), but keep Delta-sint small (don’t change internal entropy by more than is necessary).

In a next step, England applies his ideas to the replication of a bacterium. Taking a single bacterium in a nutrient broth as a system that only exchanges heat with its surroundings, he considers a transformation from a state where there is only one bacterium (state I) to a state where there are two (state II). It immediately becomes clear that – for all practical purposes – only progress from I to II is likely. It is highly unlikely that two bacteria merge and become one again without any resulting mess (as when one bacterium eats the other). England: “Of course, cells are never observed to run their myriad biochemical reactions backwards, any more so than ice cubes are seen forming spontaneously in pots of boiling water.” An interesting statement. I don’t think the two are completely equivalent, at least not on the macroscopic level. In both cases, we have a local reduction in entropy and a reversal of reactions that tend to proceed preferably in one direction only, but one of them is observed to happen (replication of bacteria), while the other isn’t (an ice cube forming in hot water). What’s the difference? The difference is that the ice cube has no source of low-entropy energy to allow its formation. It would need to take uniformly distributed energy and separate it again. The bacterium, on the other hand, takes in highly organized chemical energy and harvests this energy to create order while expelling heat. What this shows is that ultimately the second law directs the creation of a second bacterium, which represents a huge increase in local order, complexity and organization – but at the expense of global “order”, by using up highly concentrated chemical energy and releasing most of it as heat. And this creation of order has the same origin, thermodynamically speaking, as the melting of an ice cube in hot water, which is a destruction of order.

Finally, considering the various factors that go into making a new bacterium, England is able to estimate its heat production. From this, he concludes that “it is remarkable that in a single environment, the organism can convert chemical energy into a new copy of itself so efficiently that if it were to produce even a quarter as much heat it would be pushing the limits of what is thermodynamically possible!” Smart bugs indeed…

Jun 18 / Peter Hoffmann

“The two cultures” –Part 1

Last weekend, on a trip to Maryland, we spent an hour in one of our favorite used book stores, “Courtyard Redux” (don’t ask about the name), in Havre de Grace, MD. If you’re in the market for books on history (especially Civil War), this is the place to go. Of course, I was headed instead for the smaller, but well stocked science section. And I made a few finds, among them a small, blue, clothbound 1st edition (I think) of C.P. Snow’s “The two cultures and the scientific  revolution”. I had heard so much about this book over the years, read little excerpts here and there, but never read the whole thing. I was intrigued. Is what he wrote in the late 1950s still relevant?

But first, who was C.P. Snow? And what are the “two cultures”?

Charles Percy Snow (yes, with these first names, he was clearly English) was a British writer and scientist. He studied chemistry and received a PhD in physics. He was one of a rare breed of “hard-nosed” scientists who also wrote novels and essays, and, as such, definitely a role model. On May 7, 1959, Snow gave his now famous lecture on the “Two cultures”, by which he meant the culture of scientists (and, especially, physical and applied scientists) and the culture of literary “intellectuals”. Why did he see these two cultures as distinct? How have his observations held up over time? Are we still divided into two (or more) cultures?

In C.P. Snow’s observation there was an “ocean” between the scientists and their literary colleagues:

“I felt I was moving among two groups – comparable in intelligence, identical in race, not grossly different in social origin, earning about the same incomes, who had almost ceased to communicate at all, who in intellectual, moral and psychological climate had so little in common that instead of going from Burlington House or South Kensington to Chelsea, one might have crossed an ocean.” (There are a lot of modifiers in this small excerpt that I will skip over for now, but come back to later…)

Snow felt this to be a serious threat to the West in those early days of the Cold War. “I believe the intellectual life of the whole of western society is increasingly being split into two polar groups. … at one pole we have the literary intellectuals, who incidentally while no one was looking took to referring to themselves as ‘intellectuals’ as though there were no others. … at the other [pole], scientists, and as the most representative, physical scientists. Between the two a gulf of mutual incomprehension – sometimes (particularly among the young) hostility and dislike, but most of all lack of understanding.”

Is it still like this? First off, I believe that on the whole, the answer is no. Yes, there is still incomprehension and (some) hostility, but overall there is more goodwill. The reasons are many: More scientists want to be writers, and to be a good writer, one has to read and engage with literature and language. There is no way around it. At the same time, there is much good science writing and information about science, so “literary intellectuals” are more likely to be exposed to it and, hopefully, intrigued by what they see and read. These days it is difficult to avoid the Higgs boson or the Mars rover, even if one tries very hard. They are part of our culture.

Another major reason that “hostilities” have eased is that we all face a common “enemy”: the public disinvestment in education, learning and research, and the anti-intellectualism of our ruling elites. It would be foolish of scientists to gloat at the disinvestment in the humanities, when they can clearly see that we will be next, or, rather, already are. It is tough not to paraphrase Niemöller: “First they came for the humanities…”

Having said that, there is still much work to be done to improve mutual understanding and trust. This has not gotten any easier, especially with the explosion of knowledge. Even among scientists, there are gulfs of incomprehension. Here is a personal anecdote: I collaborate with a cancer biologist at Wayne State Medical School. We work well together, but it took a while to bridge the vast gap between our fields, to understand our approaches, to come up with a plan of research we can both understand. I was proud of myself for beginning to understand “medical science”, as I was proud of my colleague for having so much patience with me.  I saw my collaborator’s work as very applied in the area of cancer medicine, while I thought of myself as a basic scientist (although my particle physics colleagues would see me as an applied physicist). Then, about one month ago, I was contacted by an MD from the Karmanos Cancer Institute about possible collaborations. In our conversation, she mentioned our mutual friend, the cancer biologist, and casually mentioned how “basic” and “fundamental” his work is, and how hard they have to work on understanding each other and moving towards applications! Apparently, from cancer biology to clinical cancer care is as big a gulf as from physics to cancer biology. Science is a vast ocean indeed. And now add the humanities… (in the next installment of this series).

Jun 8 / Peter Hoffmann

The very small and the very big.

In my lab we measure what water confined in a nanometer-sized gap does when you slowly squeeze it more. Why would you want to do that, you may ask – well, let’s talk about that later. Suffice to say, we have our reasons. What I want to talk about here, instead, is how something that’s almost commonplace in our lab (it’s just a number on the axis of a graph) is truly astonishing once you think about it. What I’m talking about is the tiny scale at which scientists (including my students) are able to perform precise measurements. What’s on the x-axis of our measurements are nanometers and, sometimes, Ångströms (an Ångström is a tenth of a nanometer). Within these few nanometers we are able to measure variations of forces exerted by layers of water just a single water molecule thick.

At 0.25 nanometers, a water molecule seems to be pretty darn small. How small? In other words, exactly what is a nanometer? You probably know it’s a billionth of a meter, or 10⁻⁹ m = 0.000000001 m, but that is of little help. How can we visualize such a small scale? Here is one way: Not to split hairs, but if you actually were to split a hair, you would need to split it to a 100,000th of its width to make strands a nanometer thick. Here is another way to try to visualize how tiny a nanometer is: A healthy, young human can walk about 5 km in an hour (at a brisk pace). This is 2,500 times a rather tall human’s height. Proportionally, if we were to shrink this human to 1 nanometer in height, the shrunken human should be able to walk 2,500 times her height, or 2,500 nanometers, in one hour. How long would it take our nanometer-sized human to walk a distance equivalent to the height of a normal-sized human? Or, in other words, if we were only 1 nm tall, what kind of distance could we explore just walking around? Take a guess before reading on… Here it is: Our rather tall, full-sized human has a height of 2 m (or about 6’ 7” in feet & inches) = 2 × 10⁹ nm = 2,000,000,000 nanometers. It would take 2 × 10⁹ / 2,500 = 8 × 10⁵ hours ≈ 91 years (!!) for our nanometer-sized human to traverse the length of the full-sized human. And that’s walking 24/7 at a brisk pace for 91 years – a (long) lifetime. This means that if we were only a nanometer tall and were walking day and night for 90 years, all we could explore would be a few feet of territory. We are unimaginably huge compared to a nanometer. If the molecules in our body moved like we do, a molecule would need 90 years to get from our toes to our ears. Fortunately, molecules move much, much faster than that, otherwise we wouldn’t notice that somebody stepped on our toes until decades later!
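Here is the arithmetic, as a quick back-of-the-envelope check in Python:

height_m = 2.0                    # our rather tall model human, in meters
height_nm = height_m * 1e9        # the same height in nanometers: 2,000,000,000 nm
nano_speed = 2500.0               # a 1-nm walker covering 2,500 times its height per hour, in nm/hour
hours = height_nm / nano_speed    # 800,000 hours
years = hours / (24 * 365.25)     # about 91 years
print(round(years, 1))            # prints 91.3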

If we are a billion times larger than a nanometer, what is a billion times larger than we are? The distance to the nearest star? The distance to the Andromeda galaxy? Make a guess… OK, let’s calculate it. At a height of 2 m for our model human, the distance we are looking for is 2 × 10⁹ m = 2 × 10⁶ km = 1.25 million miles. That’s five times the distance from the Earth to the Moon. That’s pretty far – although not as far as we may have thought. The universe is a huge place. Five times the distance to the Moon is certainly something we can think about traversing – as a matter of fact, some Apollo astronauts must have come pretty close to this distance, if we include orbiting the Moon. But they were going much faster than a brisk walk.
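And the same kind of check for the distance a billion times our height:

height_m = 2.0                           # model human height in meters
big_m = height_m * 1e9                   # a billion times that: 2,000,000,000 m
big_km = big_m / 1000                    # 2,000,000 km
big_miles = big_km / 1.609               # roughly 1.25 million miles
earth_moon_km = 384400                   # average Earth-Moon distance in km
print(round(big_km / earth_moon_km, 1))  # prints 5.2, about five times the distance to the Moon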

So it seems that these are distances our human lives have reached: One billion times smaller and one billion times larger than we are. From the molecules we are made of to the largest distance we have travelled.

Jan 3 / Peter Hoffmann

What’s this all about?

My blog will be concerned with science and with education.

In science, humans are in the middle of our world, because this is our starting point and we reach from there in all directions: past and future, large and small, short and long time scales. In my research, I explore things at the nanoscale, but other scientists look at things much smaller, or much, much bigger things.  As scientists we are in the middle of a world that is getting smaller and bigger, faster and slower all the time. This is also a place where science and the humanities meet. We look at an alien world of the incredibly tiny, the stupendously large, the super-fast and the unimaginably slow from the vantage point of a human being, barely able to define his or her place in this giant cosmic theater. We are in the middle of exploring our incomprehensible world, and we are just scratching the surface.

I also see myself in the middle when it comes to education:

Currently, I find myself in the middle between being a faculty member and an administrator. I am still teaching and doing research, but I am also part of the “dark side” of administration (which I really did not find as dark as people may think).

Higher education is undergoing changes, which are either too rapid or too slow, exciting or old hat, beneficial or harmful – depending on one’s point of view. Whatever one thinks about these changes, from online learning to more government scrutiny, they are already happening and, for the foreseeable future, are here to stay and most likely to intensify. I want to share and hear ideas about how we can make education the best it can possibly be. How we can not only “manage” the changes we face, but possibly turn them into opportunities…