
Wayne State University

Aim Higher

Sep 18 / Peter Hoffmann

A visit by a Nobel prize winner (before I forget…)

Carl Wieman visited us this week to talk to us about better ways to teach science. It was a great visit and before I forget, here are a few points that stick in my mind:

  • State the problem first: In science, we typically proceed by first stating all the principles and concepts and only then giving problems. We go from the totally alien and abstract to the concrete. By doing this, we make sure students cannot absorb what we are talking about and don’t really care either; only then do we try to interest them, after the fact. Turn the whole thing around: state an interesting problem, observation, research question, or something from the news first. Ask students how they would solve the problem or explain the observation, and collect some answers. Now you have their attention. Then proceed to work out the concepts.
  • Lecturing is overrated: Students can read – what they really need is guided practice to learn the kind of thinking skills we take for granted. This takes careful thought and course design.
  • Just write those dang learning objectives! I know many faculty think of these as some administration ploy to make them do more useless work. Or to satisfy some accreditation agency. But, frankly, how are students supposed to know what you want them to learn (skills and content) without making it explicit? The typical student complaints we hear (and which are well justified) include: “The test was nothing like the homework”, “I didn’t get a study guide”, “It’s not clear what the professor wants” etc. Why not fix this problem? Learning outcomes or objectives are very similar to study guides (what do you want students to be able to do?) and they should guide the entire design of a course, from teaching activities to ongoing assessment of how students learn to exams, homework etc.
  • There are two levels of learning objectives in a course: “Big picture” outcomes go on the first page of the syllabus. They are the overall outcomes for the entire course. For example, “students will be able to combine various concepts to solve complex problems in mechanics”. Then there are lesson-level outcomes, such as “students will be able to use Newton’s third law to analyze collisions”. These can be posted on Blackboard for each week or lesson.
  • The right level for change in teaching is the department. The Dean’s office can beg and cajole, but real change will happen at the department level, supported by the chair. Are we ready for this?
  • Many of Wieman’s ideas and examples can be found here: http://www.cwsei.ubc.ca/
Sep 2 / Peter Hoffmann

Welcome back, faculty, to teaching and (hopefully) learning

The fall semester has begun and we are looking down a 15-week stretch of hope and fear. We are armed with the best intentions and an excitement for our topic. We have reviewed our notes and made them even better. We are going to be brilliant. But as in previous semesters, we harbor these nagging doubts: Are our students going to learn what we want them to learn?

At the beginning of every semester, it is worth reviewing what we really want our students to learn. The following wish list seems quite universal (Fink 2013):
• “Retain information after the course is over”
• “Develop an ability to transfer knowledge to novel situations”
• “Develop skill in thinking or problem solving”
• Gain motivation for further learning and a change in attitude towards learning

But many times these things do not happen – or at least not to the extent we hoped for. How often do we hear the lament: “But they should have learned X in course Y!” We are concerned about low motivation, bad attendance, low time investment and limited intellectual engagement among students. But, at the same time, while willing to tackle these issues, we often do not know how to overcome the obstacles. One thing is clear: A straight-up lecture-and-homework course is not going to solve these problems. And another thing is clear too: While we may wish that our students were different, they are not. And it’s not all their fault if they can’t retain the material they heard in a lecture or read in a textbook. Yes, some come underprepared, distracted by their busy and complicated lives and often less than motivated. Yet, we know that there are good students coming to us and we feel that we should do better by them. So, let’s be honest: How much would we retain from a 45-minute colloquium on the “magnetic properties of manganese-doped lanthanides”? Especially if we have no clue what that is to begin with. That’s how our students feel.

Speaking for myself, I used to just lecture and give homework. Then I started to think about the outcomes and spent some time (but not nearly enough) reading the educational literature. And it is just starting to sink in. Many of my colleagues (the current reader excepted, of course) do not read any educational literature. Which is curious: We spend 30-100% of our time teaching, but spend practically no time learning how to do it. If we did the same with our research, we would be out of funding or publications very quickly.

A good start to tackle our problems with teaching is to consider how students experience their educational journey. According to Fink:
• Students feel they are not “self-directed learners”. “They [are] not confident in their ability to approach a problem and figuring it out on their own.”
• The students feel “they [are] not learning as much as they could or should be”
• Many students feel their college teachers “do not really care much about them or about promoting their learning or interacting with them.”
• As a result, students do not “fully or energetically” engage in learning.

What to do? A good start is to take the writing of learning objectives seriously. If we want students to learn how to apply what they learn to “novel situations”, then we should state that explicitly! Why? Because it defines a clear goal that should be reflected in the design of the class, the assignments we give, the feedback we provide, the feedback we solicit from the students and the assessments by which we measure learning.

As an example, can “applying knowledge to novel situations” really be taught by lecturing and giving homework? Think about it! Typically, students hear about a topic for the first time in a lecture, we give them homework or a paper to write, rush through discussing it, and it’s off to the next thing. No coaching or structured exercise, no case study or context-rich problem-solving, no working in teams etc. By contrast, we work completely differently with our research students. We coach them, we have group meetings, we adjust our teaching to meet their needs. And these students are already pre-selected. They already “know”. In our courses, which are full of novice students who do not know, we expect that higher skills, like thinking, transferring knowledge, or developing a thirst for more learning will “just happen”, as long as we assign enough homework.

It is true that the traditional approach seems to have worked for us, but, then again, we were rather atypical students. Otherwise we wouldn’t be here as faculty. Most of us came into college motivated and with a thirst for learning. We had few distractions and studied at a residential college, surrounded by an intellectual culture many of our students don’t experience. We formed our own study groups and had long discussions with fellow students. We “survived” bad instruction because of these things. We are the result of natural selection, a survival of the fittest. And, yes, we are stronger for it. But should we expect all our students to be the same? I don’t think we can afford to. Our students don’t come with the same backgrounds. They don’t always come for the intellectual adventure. Not initially. Hopefully that will change because of our efforts, not despite them. Mostly, our students come because they see college as a path to a better life. Not everybody is a scholar; most of them are here to get a degree. And they pay a lot for it – in money, time and effort. They are not lazy, just overwhelmed and underprepared. Yet, we need them to value writing, science, philosophy and history. Once they are here, we owe them our best shot at teaching them the thinking skills and love of learning they need. It’s not going to be easy, but it’s worth thinking about.

Let me know what you think… Let’s talk about it.

To get started doing that I invite all faculty to this fall’s workshops in the Office of Teaching and Learning, where we will tackle these issues, starting with a talk by Nobel prize winner Carl Wieman on September 16 (http://www.otl.wayne.edu/carl_wieman.php) and a number of workshops and discussions that address these problems head-on, such as “Implementing Learning Objectives Through Classroom Work” on September 19 (where we will have a discussion on how we can change what we do in the classroom to teach students those higher level skills) and “Evidence-Based Teaching Methods” on September 23. See http://www.otl.wayne.edu/pdf/calendars/september_workshops_2014.pdf for the full list.

Maybe some informal groups will form that continue the discussions beyond these workshops, just like we formed our own groups in college tackling fun things like “conformal mapping” or Kant’s “Prolegomena”.

Reference: L. Dee Fink, “Creating significant learning experiences”, John Wiley/ Jossey-Bass. 2013.

Jun 29 / Peter Hoffmann

“The two cultures” –Part 2

In my last entry on a re-evaluation of C. P. Snow’s “The two cultures”, I finished in mid-sentence (as pointed out by a colleague), with the question of how scientists and humanists can understand each other when even scientists from different backgrounds speak quite different languages. To start with, I would agree with Snow that the “scientific culture really is a culture, not only in the intellectual but also in the anthropological sense”. As I stated in my last blog entry on this topic, Snow concedes that “[scientists] need not, and of course do not, always completely understand each other; biologists more often than not will have a pretty hazy idea of contemporary physics” and, I may add, most physicists have no better idea of biology either, but “there are common attitudes, common standards and patterns of behavior, common approaches and assumptions.” Most interestingly, “this [commonality] goes surprisingly wide and deep. It goes across other mental patterns, such as those of religion or politics or class.”

This is of course what makes science such a great endeavor to be involved in. It doesn’t matter if the person you talk to is rich, poor, old, young, conservative, liberal and, most importantly, black or white, male or female – as long as they speak “science”, they are a scientist. The other stuff is irrelevant. At least that’s the ideal.

The humanities, being closely concerned with human life and culture, don’t quite have the luxury of being so “blind” to these differences. There is by definition a difference between conservative political philosophy or literature and liberal philosophy and literature, or feminist literature versus “chauvinist” literature, etc. There is no “conservative”, “liberal” or “chauvinist” physics. The last time people tried to tie somebody’s physics to his or her religion or cultural background, the premise was utterly stupid, outright embarrassing and, unfortunately, incredibly destructive (I am talking about the charge that Einstein’s relativity, as well as quantum mechanics, was “Jewish physics”).

What are the attributes of the “culture of science”? Snow talks about an “exacting” culture. An interesting word with connotations of “exact” as well as rigorous, and no-nonsense. Indeed, in their role as scientists (not as private individuals), scientists value clear definitions, statistical over anecdotal evidence, quantitative arguments, consistent theories, and the primacy of empirical evidence and reason over emotions.

The humanities, of course, also form a culture, but a culture that is quite different from the culture of the natural sciences. And that is not a bad thing – the culture of science serves scientists well, while other intellectuals are better served by their own culture. That is not to say that non-scientists should not understand science’s culture or the findings of science, which are universal. Far from it. It just means that there are other ways to think about and approach situations, particularly situations that do not lend themselves to quantitative analysis. Since I have not been acculturated as a literary or humanist intellectual, it is somewhat preposterous of me to try to discern the literary/humanist intellectual culture from the outside. There is a danger of preconceived notions. But it is fair to assume that humanists and literary intellectuals are first and foremost concerned about humans: what humans think and feel, how they interact, what societies they form, and how those societies interact with each other. They are observers of the non-tangible, complex, emotional interactions of human beings, using anecdotal evidence as stand-ins for the universal condition. Of course, there is nothing wrong with studying love or hate with scientific means, but ultimately such an approach doesn’t quite reach the core of the thing. It is not really possible to fully describe emotional states of human beings without resorting to evoking these same emotions. One can capture interesting new aspects using science, but not really get to the core of what it “feels like” to be angry or happy. This is where literature has a clear edge over science.

Where then do these two worlds intersect, and is there any reason why they should? They intersect because science, too, is a human endeavor, and because emotions, societies and human actions are connected to the physical reality of the humans themselves and their surroundings. And because the society we live in is, for better or worse, shaped by science and technology. Without understanding the shaper, how can we understand the shaped? Literary/humanist intellectuals should know about science and scientists should know about the humanities. Snow puts it like this: “Once or twice I have been provoked and have asked the company [of literary intellectuals] how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking about the scientific equivalent of: Have you read a work of Shakespeare?”

It would indeed be interesting to see who would win this particular contest: Would there be more scientists or engineers who never read any Shakespeare (or at least saw a play), or more literary intellectuals who had never heard of the Second Law of Thermodynamics? I don’t know, but I know it would be a shame either way.

Can one be poetic about the Second Law of Thermodynamics, the literary person may ask? Yes, one can:

“The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.”
—OMAR KHAYYAM, 1048–1131

Jun 29 / Peter Hoffmann

The thermodynamics of life

Superficially, there seems to be a contradiction between the emergence of life and the second law of thermodynamics – the law that is colloquially translated as “everything tends towards disorder”. If that were literally true, however, everything around us would be some kind of “mush” with no structure whatsoever. Since we do not observe this (and couldn’t anyway, as we wouldn’t exist), something must be wrong with the picture that everything tends towards disorder everywhere and all the time. As I point out in my book, “Life’s Ratchet”, there are many, many processes, even excluding life for the moment, in which structure emerges spontaneously without violating the second law at all. A first step to understanding this seeming paradox is a proper formulation of the second law. It is not simply that “everything tends towards disorder”, but rather that the global distribution of energy tends towards more uniformity over time. How uniformly energy is distributed is measured by the concept of entropy: high entropy means that energy is evenly distributed, while low entropy is associated with energy that is concentrated. Concentrated energy can do work; uniformly distributed energy cannot. Thus, if all energy were uniformly distributed (and entropy at a maximum), no changes could happen anymore. In entropy terms, the second law is sometimes stated as “all processes tend to increase the entropy of the universe”.

This sounds bleak, but the second law does not say that things everywhere become less structured, that structure cannot emerge spontaneously, or even that an increase in entropy necessarily means a decrease in order or structure. There are counterexamples for all of these scenarios. For example, hard spheres (think marbles) have higher entropy when they are stacked in an ordered arrangement than when they are jammed randomly. The crystallization of snowflakes increases global entropy (by releasing heat), yet creates beautiful order. And so on.

As I argue in my book, new structures, complexity and life exist not despite the second law of thermodynamics, but because of it. Why would that be so? For one thing, the second law determines a “direction” of energy flow – from concentrated to uniformly distributed. Why would that be important? Let’s imagine that the second law were wrong. Then your cold cup of coffee could, for example, spontaneously extract heat from the uniformly heated heat reservoir of the kitchen around it and suddenly become hot. Why would that be bad? It would be bad because chemical reactions could not acquire any directionality. Even if an end product had lower energy and higher entropy, it could change back into its higher-energy, lower-entropy predecessor at any time, spontaneously. You couldn’t build any metabolism on something like that! The drive towards higher entropy imposes direction on dynamical processes.
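
To put a number on that directionality, here is a minimal bit of entropy bookkeeping (my own illustration, not from the book), using only the rule that heat Q exchanged at a fixed temperature T changes entropy by Q/T:

```latex
% Heat Q flows between two bodies at temperatures T_hot > T_cold.
% Allowed direction (hot body -> cold surroundings): total entropy increases.
\Delta S_{\mathrm{total}} = -\frac{Q}{T_{\mathrm{hot}}} + \frac{Q}{T_{\mathrm{cold}}} > 0
% Forbidden direction (cooler kitchen -> hotter coffee): total entropy would decrease.
\Delta S_{\mathrm{total}} = -\frac{Q}{T_{\mathrm{cold}}} + \frac{Q}{T_{\mathrm{hot}}} < 0
```

The second case is exactly the spontaneously self-heating coffee cup: nothing in energy conservation forbids it, but the entropy balance does, and that is what gives heat flow, and by extension chemical reactions, a direction.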

Another, more subtle reason has recently been pointed out by Jeremy England, a biophysicist at MIT, who wrote a very interesting paper titled “Statistical Physics of Self-Replication”, published in the Journal of Chemical Physics in August 2013. The paper was featured in Scientific American because it is an important step toward applying thermodynamics and the second law to complex, non-equilibrium situations, such as living objects. Using previous results by other researchers, England first shows that the usual statement of the second law must be amended for non-equilibrium systems such as these. The usual statement says that the change in entropy is equal to or larger than the transferred heat divided by the temperature. For example, if I heat up some water, I increase its entropy, because I supply heat to it, which makes its molecules move faster, scrambling the energy of the system.
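
For reference, the “usual statement” he starts from is the textbook Clausius form of the second law; the water example then reads as follows (a standard calculation, included here only to make the words above concrete):

```latex
% Clausius form: for a system exchanging heat Q with a bath at temperature T,
\Delta S \;\ge\; \frac{Q}{T}
% Heating water of mass m and specific heat c slowly (reversibly) from T_1 to T_2:
\Delta S_{\mathrm{water}} = \int_{T_1}^{T_2} \frac{m\,c\,\mathrm{d}T}{T}
                          = m\,c\,\ln\frac{T_2}{T_1} \;>\; 0
```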

In his paper, England shows that if a system goes through a transformation (such as a bacterium dividing or DNA replicating), then the change in entropy is larger than just the transferred heat divided by the temperature. As a matter of fact, the more irreversible the transformation is, the larger the minimum change in entropy. This does not seem too surprising, but it has apparently never been formulated in a rigorous way and applied to completely non-equilibrium, irreversible transformations as in this paper. What does this finding mean for life? First of all, it means that living beings, doing irreversible sorts of things all the time (growing, reproducing, writing blogs), increase the entropy of the universe faster than objects that stick close to equilibrium and don’t do anything particularly interesting. In other words, living beings are excellent entropy producers.
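
As I read the paper, the amended statement is roughly of the following form (my schematic paraphrase, so take the details with a grain of salt):

```latex
% Schematic form of England's generalized second law, as I understand it:
%   beta = 1/(k_B T),  <Q> = average heat released to the surroundings,
%   Delta S_int = change in the system's internal entropy,
%   pi(...) = probability of making that transition during the observation time.
\beta\,\langle Q \rangle_{\mathrm{I}\to\mathrm{II}} + \Delta S_{\mathrm{int}}
  \;\ge\; \ln\!\left[\frac{\pi(\mathrm{I}\to\mathrm{II})}{\pi(\mathrm{II}\to\mathrm{I})}\right]
```

The right-hand side grows as the forward transformation becomes ever more likely than its reverse, which is just the statement in words above: the more irreversible the change, the larger the minimum entropy production.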

More concretely, applying this finding to a replicating molecule or organism, England finds, as he states, that “basic thermodynamic constraints derived from exact considerations in statistical physics tell us that a self-replicator’s maximum potential fitness is set by how effectively it exploits sources of energy in its environment to catalyze its own reproduction. Thus, the empirical, biological fact that reproductive fitness is intimately linked to efficient metabolism now has a clear and simple basis in physics.” In other words, the most successful organisms are the ones that use energy most effectively (i.e. dissipate energy quickly), and, coupled with the first finding, these are, according to England, either complex organisms that can initiate highly irreversible changes or, what would also work, simpler systems that are in addition very “prone to spontaneous degradation”. An example of the latter is the comparison of RNA versus DNA: RNA is more fragile, but also more nimble in being changed and replicated. As a result, RNA is more likely than DNA to have been part of the origin of life.
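
Applied to a self-replicator with growth rate g and spontaneous decay rate δ, the bound above reduces, roughly, to the following (again my schematic reading, not a quotation from the paper):

```latex
% Hedged sketch of the growth-rate bound for a self-replicator:
%   g = growth rate, delta = decay rate, <q> = heat released per new copy,
%   Delta s_int = internal entropy gained per new copy.
\ln\frac{g}{\delta} \;\le\; \beta\,\langle q \rangle + \Delta s_{\mathrm{int}}
\qquad\Longrightarrow\qquad
g \;\le\; \delta\, e^{\,\beta \langle q \rangle + \Delta s_{\mathrm{int}}}
```

A copy that dissipates more heat, is easier to degrade (larger δ), or ends up internally less ordered (larger Δs_int) can, in principle, replicate faster; this is what the RNA-versus-DNA comparison illustrates.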

At one point, however, I must disagree a little. England states that “self-replicators can increase their maximum potential growth rates not only by more effectively fueling their growth via Delta-q [the expelled heat], but also by lowering the cost of their growth via δ [rate of degradation] and Delta-sint [increase in internal entropy]. The less durable or the less organized a self-replicator is, all things being equal, the less metabolic energy it must harvest at minimum in order to achieve a certain growth rate.” However, does a smaller change in internal entropy (the entropy of the organism) really imply a “less organized” replicator? After all, as I point out in my book, the molecular processes in our cells are not so far from equilibrium. As a matter of fact, they stay as close to equilibrium as possible in order to increase efficiency. This means that living beings may actually minimize internal entropy changes to what is necessary. Even though they are complex, they seek to maximize Delta-q (harvest energy efficiently) but keep Delta-sint small (don’t change internal entropy by more than is necessary).

In a next step, England applies his ideas to the replication of a bacterium. Taking a single bacterium in a nutrient broth as a system that only exchanges heat with its surroundings, he considers a transformation from a state with only one bacterium (state I) to a state with two (state II). It immediately becomes clear that – for all practical purposes – only progress from I to II is likely. It is highly unlikely that two bacteria merge and become one again without any resulting mess (as when one bacterium eats the other). England: “Of course, cells are never observed to run their myriad biochemical reactions backwards, any more so than ice cubes are seen forming spontaneously in pots of boiling water.” An interesting statement. I don’t think the two are completely equivalent, at least not on the macroscopic level. In both cases, we have a local reduction in entropy and a reversal of reactions that tend to proceed preferentially in one direction only, but one of them is observed to happen (replication of bacteria), while the other isn’t (an ice cube forming in hot water). What’s the difference? The difference is that the ice cube has no source of low-entropy energy to allow its formation; it would need to take uniformly distributed energy and separate it again. The bacterium, on the other hand, takes in highly organized chemical energy and harvests this energy to create order while expelling heat. What this shows is that ultimately the second law directs the creation of a second bacterium, which represents a huge increase in local order, complexity and organization – but at the expense of global “order”, by using up highly concentrated chemical energy and releasing most of it as heat. And this creation of order has the same origin, thermodynamically speaking, as the melting of an ice cube in hot water, which is a destruction of order.

Finally, considering the various factors going into making a new bacterium, England is able to estimate its heat production. From this, he concludes that “it is remarkable that in a single environment, the organism can convert chemical energy into a new copy of itself so efficiently that if it were to produce even a quarter as much heat it would be pushing the limits of what is thermodynamically possible!” Smart bugs indeed…

Jun 18 / Peter Hoffmann

“The two cultures” –Part 1

Last weekend, on a trip to Maryland, we spent an hour in one of our favorite used book stores, “Courtyard Redux” (don’t ask about the name), in Havre de Grace, MD. If you’re in the market for books on history (especially the Civil War), this is the place to go. Of course, I was headed instead for the smaller, but well-stocked, science section. And I made a few finds, among them a small, blue, clothbound first edition (I think) of C.P. Snow’s “The two cultures and the scientific revolution”. I had heard so much about this book over the years, read little excerpts here and there, but never read the whole thing. I was intrigued. Is what he wrote in the late 1950s still relevant?

But first, who was C.P. Snow? And what are the “two cultures”?

Charles Percy Snow (yes, with these first names, he was clearly English) was a British writer and scientist. He studied chemistry and received a PhD in physics. He was one of a rare breed of “hard-nosed” scientists who also wrote novels and essays, and, as such, definitely a role model. On May 7, 1959, Snow gave his now famous lecture on the “Two cultures”, by which he meant the culture of scientists (and, especially, physical and applied scientists) and the culture of literary “intellectuals”. Why did he see these two cultures as distinct? How have his observations held up over time? Are we still divided into two (or more) cultures?

In C.P. Snow’s observation there was an “ocean” between the scientists and their literary colleagues:

“I felt I was moving among two groups – comparable in intelligence, identical in race, not grossly different in social origin, earning about the same incomes, who had almost ceased to communicate at all, who in intellectual, moral and psychological climate had so little in common that instead of going from Burlington House or South Kensington to Chelsea, one might have crossed an ocean.” (There are a lot of modifiers in this small excerpt that I will skip over for now, but come back to later…)

Snow felt this to be a serious threat to the West in his days of a beginning Cold War. “I believe the intellectual life of the whole of western society is increasingly being split into two polar groups. … at one pole we have the literary intellectuals, who incidentally while no one was looking took to referring to themselves as ‘intellectuals’ as though there were no others. … at the other [pole], scientists, and as the most representative, physical scientists. Between the two a gulf of mutual incomprehension – sometimes (particularly among the young) hostility and dislike, but most of all lack of understanding.”

Is it still like this? First off, I believe that, on the whole, the answer is no. Yes, there is still incomprehension and (some) hostility, but overall there is more goodwill. The reasons are many: More scientists want to be writers, and to be a good writer, one has to read and engage with literature and language. There is no way around it. At the same time, there is much good science writing and information about science, so “literary intellectuals” are more likely to be exposed to it and, hopefully, intrigued by what they see and read. These days it is difficult to avoid the Higgs Boson or the Mars Rover, even if one tries very hard. They are part of our culture.

Another major reason that “hostilities” have eased is that we all face a common “enemy”: the public disinvestment in education, learning and research, and the anti-intellectualism of our ruling elites. It would be foolish of scientists to gloat at the disinvestment in the humanities, when they can clearly see that we will be next, or, rather, already are. It is tough not to paraphrase Niemöller: “First they came for the humanities…”

Having said that, there is still much work to be done to improve mutual understanding and trust. This has not gotten any easier, especially with the explosion of knowledge. Even among scientists, there are gulfs of incomprehension. Here is a personal anecdote: I collaborate with a cancer biologist at Wayne State Medical School. We work well together, but it took a while to bridge the vast gap between our fields, to understand our approaches, to come up with a plan of research we can both understand. I was proud of myself for beginning to understand “medical science”, as I was proud of my colleague for having so much patience with me.  I saw my collaborator’s work as very applied in the area of cancer medicine, while I thought of myself as a basic scientist (although my particle physics colleagues would see me as an applied physicist). Then, about one month ago, I was contacted by an MD from the Karmanos Cancer Institute about possible collaborations. In our conversation, she mentioned our mutual friend, the cancer biologist, and casually mentioned how “basic” and “fundamental” his work is, and how hard they have to work on understanding each other and moving towards applications! Apparently, from cancer biology to clinical cancer care is as big a gulf as from physics to cancer biology. Science is a vast ocean indeed. And now add the humanities… (in the next installment of this series).

Jun 8 / Peter Hoffmann

The very small and the very big.

In my lab we measure what water confined in a nanometer-sized gap does when you slowly squeeze it more. Why would you want to do that, you may ask – well, let’s talk about that later. Suffice it to say, we have our reasons. What I want to talk about here, instead, is how something that’s almost commonplace in our lab (it’s just a number on the axis of a graph) is truly astonishing once you think about it. What I’m talking about is the tiny scale at which scientists (including my students) are able to perform precise measurements. What’s on the x-axis of our measurements are nanometers and, sometimes, Ångströms (an Ångström is a tenth of a nanometer). Within these few nanometers we are able to measure variations in the forces exerted by layers of water just a single water molecule thick.

At 0.25 nanometer, a water molecule seems to be pretty darn small. How small? In other words, exactly what is a nanometer? You probably know it’s a billionth of a meter, or 10⁻⁹ m = 0.000000001 m, but that is of little help. How can we visualize such a small scale? Here is one way: Not to split hairs, but if you actually were to split a hair, you would need to split it down to a 100,000th of its width to make strands a nanometer thick. Here is another way to try to visualize how tiny a nanometer is: A healthy, young human can walk about 5 km in an hour (at a brisk pace). This is 2,500 times a rather tall human’s height. Proportionally, if we were to shrink this human to 1 nanometer in height, the shrunken human should be able to walk 2,500 times her height, or 2,500 nanometers, in one hour. How long would it take our nanometer-sized human to walk a distance equivalent to the height of a normal-sized human? Or, in other words, if we were only 1 nm tall, what kind of distance could we explore just walking around? Take a guess before reading on…. Here it is: Our rather tall, full-sized human has a height of 2 m (or about 6’ 7” in feet & inches) = 2 × 10⁹ nm = 2,000,000,000 nanometers. It would take 2 × 10⁹ / 2,500 = 8 × 10⁵ hours ≈ 91 years (!!) for our nanometer-sized human to traverse the length of the full-sized human. And that’s walking 24/7 at a brisk pace for 91 years – a (long) lifetime. This means that if we were only a nanometer tall and walked day and night for 90 years, all we could explore would be a few feet of territory. We are unimaginably huge compared to a nanometer. If the molecules in our body moved like we do, a molecule would need 90 years to get from our toes to our ears. Fortunately, molecules move much, much faster than that; otherwise we wouldn’t notice that somebody had stepped on our toes until decades later!
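
Here is the same arithmetic as a quick back-of-the-envelope script, in case you want to play with the numbers yourself (these are exactly the figures used above):

```python
# Back-of-the-envelope: how long would a 1-nm-tall "walker" need to cross
# a 2-m-tall human, if it covers 2,500 of its own heights per hour
# (the scaled-down equivalent of a brisk 5 km/h walk)?

height_human_nm = 2.0 * 1e9          # a rather tall, 2 m human, in nanometers
speed_nm_per_hour = 2_500            # 2,500 body heights (= nanometers) per hour

hours = height_human_nm / speed_nm_per_hour   # 8e5 hours
years = hours / (24 * 365)                    # roughly 91 years

print(f"{hours:.0f} hours, or about {years:.0f} years")
```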

If we are a billion times larger than a nanometer, what is a billion times larger than we are? The distance to the nearest star? The distance to the Andromeda galaxy? Make a guess… OK, let’s calculate it. At a height of 2 m for our model human, the distance we are looking for is 2 × 10⁹ m = 2 × 10⁶ km ≈ 1.25 million miles. That’s five times the distance from the Earth to the Moon. That’s pretty far – although not as far as we may have thought. The universe is a huge place. Five times the distance to the Moon is certainly something we can think about traversing – as a matter of fact, some Apollo astronauts must have come pretty close to this distance if we include orbiting the Moon. But they were going much faster than a brisk walk.
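
And the same sort of check in the other direction:

```python
# Back-of-the-envelope: what distance is a billion times a 2 m human?
big_m = 2.0 * 1e9                    # 2e9 m
big_km = big_m / 1000                # 2e6 km
big_miles = big_km / 1.609           # about 1.25 million miles

earth_moon_km = 384_400              # average Earth-Moon distance
print(f"{big_km:.0f} km, about {big_miles / 1e6:.2f} million miles, "
      f"or {big_km / earth_moon_km:.1f} times the Earth-Moon distance")
```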

So it seems that these are distances our human lives have reached: One billion times smaller and one billion times larger than we are. From the molecules we are made of to the largest distance we have travelled.

Jan 3 / Peter Hoffmann

What’s this all about?

My blog will be concerned with science and with education.

In science, humans are in the middle of our world, because this is our starting point and we reach from there in all directions: past and future, large and small, short and long time scales. In my research, I explore things at the nanoscale, but other scientists look at things much smaller, or much, much bigger things.  As scientists we are in the middle of a world that is getting smaller and bigger, faster and slower all the time. This is also a place where science and the humanities meet. We look at an alien world of the incredibly tiny, the stupendously large, the super-fast and the unimaginably slow from the vantage point of a human being, barely able to define his or her place in this giant cosmic theater. We are in the middle of exploring our incomprehensible world, and we are just scratching the surface.

I also see myself in the middle when it comes to education:

Currently, I find myself in the middle between being a faculty member and an administrator. I am still teaching and doing research, but I am also part of the “dark side” of administration (which I really have not found to be as dark as people may think).

Higher education is undergoing changes which are either too rapid or too slow, exciting or old hat, beneficial or harmful - depending on one’s point of view. Whatever one thinks about these changes, from online learning to more government scrutiny, they are already happening and, for the foreseeable future, are here to stay and most likely to intensify. I want to share and hear ideas about how we can make education the best it can possibly be: how we can not only “manage” the changes we face, but possibly turn them into opportunities…

Jan 3 / Peter Hoffmann

Thriving in the classroom

Carl Freeman, a treasured colleague in the biological sciences and intrepid “warrior” for better teaching at Wayne State, shared an article he really likes with us here in the Dean’s office: “Thriving in the Classroom” by Laurie Schreiner. I finally got around to reading it today, and was happy to find that it expresses many of the ideas I have had trouble putting into words.

The main idea of the article is not too surprising to those of us who have spent some time in the education “business”: students who are engaged and proactive in their own learning are happier, more successful students. As Schreiner puts it, the difference is between the student who is an “active, self-regulated learner” and the student who “learns passively”, seeing learning “as an activity whose outcome is under someone else’s control”. It is this latter phrase that caught my attention, because it really captures what is going on. The student whose main concern is “what is going to be on the test” has abdicated control over his own learning to the instructor.

But blaming the student does not get us far. It is true that some students take to higher education as fish take to water, while others never “get it” – but as educators we need to find a way to engage both types of students. Does this mean we have to teach the class in two ways – one way for the engaged and one for the disengaged? I don’t think so. Both students will benefit from an engaged, active learning environment. The disengaged student just does not (yet) know how to be engaged in learning. We can and should think about how we can explicitly teach students to do just that.

Schreiner makes the following suggestions: (1) “Look beyond behavior” – the argument is that the student’s in-class engagement is only the tip of the iceberg. What is more important is their mental engagement with the material – much of which may happen outside the classroom. Just because an overeager student raises his hand every two minutes does not necessarily mean the student is really mentally engaged. How often have we been disappointed by the eventual demonstration of understanding from some of these overactive students and, by contrast, been impressed by a student who seemed very quiet during class? This is not to say that we don’t want active student engagement in the classroom (quite the opposite), but that there is more beneath the surface.

This is a good argument for frequent “formative assessment”, i.e. quick ways, not linked to a grade, to see whether students get the material and have thought about it. More about this in a future post.

(2) “Engage students intentionally” – Here, Schreiner points to three components that promote “intrinsic motivation”: Competence, Autonomy and Relatedness. According to Schreiner, “This … translates to classroom practices that communicate to students that they are capable of mastering the course material, … have choices in how they might demonstrate that [mastery], and that the instructor cares about them and is supportive of them.”

For me, the take-home message here is that we should not “dumb down” teaching, but that we should “step up” – i.e. challenge students to learn in an environment that “simulates” real situations as much as possible, have them stretch a little, and be there to help them master material they may never have thought they could. In other words, let’s try to promote excellence, not by cutting down “unworthy students” through impossible tests, but by giving them challenging tasks and actively engaging them, while being there for them, like a coach, to help them when they stumble.

I tried something like this in my most recent (undergraduate) computational physics class, where I coached students to learn programming, culminating in advanced projects in areas of active, cutting-edge research. With a judicious choice of topics, a little bit of simplification of the problem (keeping the essentials), and, finally, hands-on coaching, students surprised themselves (and the instructor) with how much they could learn and grow during a semester.

(3) “Teach students how to engage” – This is tricky, but essential. To begin with, it would be good to have a conversation about how we could do this. One part could be to explicitly tell students at the beginning of class what “engaged learning” in the class really means. This would have to go beyond admonishing them to spend so many hours on homework, etc. Rather, we need to find ways to demonstrate what an active, engaged learner actually does. Then have them practice it. This can be done in the classroom, through active learning exercises, but needs to go beyond the classroom as well.

Too often, we do not really engage the students, don’t tell them what “deep thinking” about the material really means, and then are surprised when understanding is superficial at best.

(4) “Create seamless learning environments on campus” - This is something that goes beyond the individual instructor. However, as a faculty member, I often felt ignorant about student life, i.e. what other resources students have to help them academically, psychologically, and socially, and how I could leverage these resources to help students do better in my class. I am still not sure how we can do this better, but I know we need to, so send me your ideas…

If you want to read Schreiner’s paper, here is the link: http://onlinelibrary.wiley.com/doi/10.1002/abc.20022/abstract