When is an ethics course not an ethics course?

There seems to be a lot more discussion of ethics in scientific news and articles these days compared to the distant past (e.g. when I was a graduate student). This may be due to an increased complexity in the practice of science — issues like data sharing, for example, are more difficult than they used to be — or an increase in incidents of irreproducible results or actual fraud, or perhaps simple fashions about what’s worth discussing. Various funding agencies, notably the NIH and NSF, now require training in the “responsible conduct of research” (RCR) for graduate students funded by their grants. Though my research group and some of my colleagues’ have implemented ethics discussions in our group meetings, my department as a whole doesn’t have anything of this sort that all graduate students experience. (Other science departments here at Oregon do.) Thinking that this isn’t good, I (perhaps foolishly) volunteered to teach a graduate ethics workshop, which I’ll do next term together with another faculty member, in addition to our usual teaching tasks.

It’s interesting to think about what should go into such a workshop. One key thing I’ve realized is that it’s a mistake to think of the course as an “ethics workshop,” rather than a “workshop on topics in the responsible conduct of research.” Sadly, the latter is unwieldy. The former, though, causes problems, especially in communicating with colleagues. What’s the distinction, and what’s wrong with an “ethics workshop?”

First, I would argue that training in ethics per se is rather pointless. Nearly all of us know that lying, cheating, and stealing are bad, and the tiny fraction of people who don’t grasp this aren’t going to be convinced of the error of their ways by sitting in a classroom. I am reminded, in writing this, of the surreal form about grant activity and related things that the university asks faculty to fill out each year, which essentially asks, “are you lying?” I showed this to my then-four-year-old a few years ago; he recognized that the only possible answer, whether one is honest or dishonest, is “no.” (The kids and I used to discuss Knights and Knaves puzzles a lot…)

Second, the more generally applicable and interesting issues are those that aren’t as straightforward to map onto right and wrong. These are also issues that relate to the social, economic, and structural framework in which science is done. How do we handle data? How does publishing work? I’ll flesh out some examples below. In addition to being relevant to the practice of science, some knowledge about these issues at the start of one’s graduate training can help prevent conflict, frustration, or even the temptations of unethical behavior later on. Also, I’d argue, learning about the “landscape” of science is an important part of being a graduate student.

Referring to a course on RCR as an ethics course is a convenient shorthand, but I’ve learned that it causes confusion. It also, quite rightly, makes some faculty reluctant to support it, for the reasons noted two paragraphs above.


I’ve sketched several topics that would be worth discussing in this proposed RCR workshop. Here they are, with a little bit of commentary:

  • Data handling and management — What are our responsibilities with respect to preserving data, and also making it available to others? What do funding agencies and others say about this? What do we do, in practice, in an age of giant datasets? What distinguishes “raw data” from reduced data? This last question, by the way, is one that has provoked spirited discussion at microscopy conferences I’ve been to.
  • Data integrity — Can one justify throwing out “bad” data points? If so, how, or why? This is a difficult, and very common, question. It connects also to contemporary thoughts on fitting and data analysis; see e.g. this. This topic also spans the handling of images, and image manipulation.
  • Publishing and Authorship — How does the publication process work, and how is it changing? What are authorship criteria and roles, and what do various professional societies say about them?
  • Research Misconduct and Scientific Fraud — I.e. actual ethics! We should definitely look at case studies, of which there are lots of interesting ones! Arguably the most famous in physics is the story of Jan Hendrik Schön.
  • Statistics and ethics — A lack of understanding (or mis-understanding) of statistics, coupled with poor experimental design, underlies the present proliferation of mediocre and irreproducible studies — see e.g. this, this, or this for some snippets of the relevant discussions. This phenomenon is fascinating. But what, one might ask, does it have to do with physics, which is relatively free of the dispiriting methodology that seems to plague, for example, sociology or epidemiology? So far, not much, thankfully. But (i) similar issues come up in physics, for example in the dodgy or delusional ways physicists tend to fit power-laws to everything; and (ii) I would expect issues of statistics and perilous data-mining to become more common in physics, as datasets grow in size and complexity. OK, one replies, but what does this have to do with ethics or RCR? It occurs to me, reading a lot of examples of bad science, that the practices employed are ethical (in the sense of being applied with a sincere belief in their validity) only if one is ignorant of how to handle noise, uncertainty, and other quantitative aspects of data. But ignorance shouldn’t, of course, be a justification for bad science. Do we then have an ethical obligation to understand how to treat data? I haven’t seen this generally discussed, and it would be interesting to explore further. I’ll note that these are half-formed thoughts that may not make it into the course!
  • Ethical issues relating to environment, science policy, and law — (This one is from my co-teaching colleague.) What is the relationship between politically neutral science and areas of public policy that are closely connected to science (e.g. climate change)?
  • More things about how science is done — It’s useful to understand the landscape of science — the flows of money, people, etc. This affects graduate students quite directly, in topics like jobs, funding, etc., and it wouldn’t hurt to have some exposure to it. As I often do, I’ll note Paula Stephan’s excellent “How Economics Shapes Science” as a resource on this.
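As an aside on the power-law point above, it's remarkably easy to "discover" a power law in data that has none. The sketch below is my own illustration (not anything from the planned workshop): it draws samples from a lognormal distribution, which has no power-law tail, and applies the common recipe of fitting a straight line to a log-log histogram of the tail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from a lognormal distribution -- there is no power-law tail here.
x = rng.lognormal(mean=0.0, sigma=2.0, size=5000)

# The dodgy-but-common recipe: histogram on logarithmic bins, then fit a
# straight line to log(count) vs log(x) over the upper tail.
counts, edges = np.histogram(x, bins=np.logspace(-3, 3, 40))
centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centers
mask = (counts > 0) & (centers > 1.0)         # "fit the tail"
slope, intercept = np.polyfit(np.log(centers[mask]),
                              np.log(counts[mask]), 1)

print(f"apparent power-law exponent: {slope:.2f}")
```

The fit dutifully returns a negative exponent that can look convincing on a log-log plot, even though the underlying distribution is not a power law at all; proper model comparison (e.g. against a lognormal) is needed to tell the difference.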


The structure of this workshop is still to be determined. The challenges are (i) to satisfy the dictates of the funding agencies, which are very vague, (ii) to make it worthwhile for students, (iii) to avoid taking up too much of research-active students’ time, and (iv) to avoid taking up too much of my time. My own preference is to have weekly one-hour meetings, not occurring in the middle of the day, for some number of weeks between 5 and 10. Various faculty have spoken in favor of more or less time. I view the Spring launch of this workshop as an experiment — we’ll see what happens!

The workshop itself should be mostly discussion based. There are good readings on most of these topics, e.g. this one, available free from the National Academies.

Today’s illustration…

…is a kestrel I painted a few weeks ago, shortly after spotting both a kestrel and a bald eagle (not together) on my bike ride to work one morning. The eagle was surveying the Willamette River. The kestrel was standing in the middle of a road, devouring some smaller creature.

A random walk through bookshelves — books and movies 2015

A few years ago, after too many instances of starting a book and then realizing that I’d read it before, I began to keep a list of the books I’ve read, making a brief note in it each time I finish something. The list makes it easy to look back on what I’ve read in the past year. Today, on New Year’s Eve, I’ll write a quick post on my favorites of 2015. It doesn’t really fit in with the general themes of the blog, though there is a bit of science in it, and some thoughts on randomness.



Out of 21 books, it’s surprisingly easy to pick my favorite for this year: Your Republic Is Calling You by Young-Ha Kim (2010). It’s a novel about a North Korean spy, living a normal life for many years in South Korea, who is suddenly called back to the North. It gets a surprisingly low average rating on Goodreads (3.5/5.0), perhaps because most people want their spy novels to be action-packed and thrilling. This one is not. Rather, what’s striking about it is its depiction of a possibly sudden end to an ordinary life. Plus, its scenes of North Korea are fascinating and chilling, like seemingly everything about North Korea.

Runners up:

City of Tiny Lights by Patrick Neate (2006). A modern noir with a Ugandan-Indian-British private eye, investigating a political murder. It’s funny, clever, and fast, though it becomes annoyingly implausible in its last quarter.

Serious Men by Manu Joseph (2010). I don’t often read fiction about science because (i) there isn’t much of it, (ii) it’s often bad, and (iii) I spend enough time thinking about science. I picked this one, though, because it’s Indian and because the cover is neat (see below). It’s a cynical and funny novel about scientists, social dynamics, and more. Its characters are too caricature-ish to take the top spot, but it was nonetheless enjoyable. Its depiction of the culture of science, especially “big” science, is remarkably good, and free of the stilted and artificial characterizations of how science works that one usually finds. I noted these lines, which I particularly like: “… he stared at the ancient black sofa. Its leather was tired and creased. There was a gentle depression in the seat as though a small invisible man had been waiting there forever to meet Acharya and show him the physics of invisibility.”


These three books have something in common: I picked them all by randomly browsing the bookshelves at the University library! (There’s an excellent “popular reading” section that I like to look at.) I hadn’t heard of any of them before, or searched for them, or had an algorithm from Amazon recommend them to me. There’s a lot to be said, I think, for random discovery, especially if one wants to find things one didn’t know existed, rather than refinements of things one already knows.


My favorite out of 13 non-fiction books is a very new one: The Planet Remade: How Geoengineering Could Change the World, by Oliver Morton (2015). I read this in the past few weeks, mainly because I’m teaching Physics of Energy and the Environment this term — a course for non-science majors that I’ve taught before — and felt that its topic is one I should explore further. It’s a brilliant book about geoengineering: scenarios, methods, concerns, and more. It’s thoughtful, thorough, and beautifully written. I could write more, but I might turn this into its own blog post.

A very close second is Sahara Unveiled: A Journey Across the Desert by William Langewiesche (1997), about the author’s travels starting from Algeria, south through the Sahara, and west to Mali. It has a wonderful and thoughtful mix of descriptions of the natural landscape and of the remarkable, sometimes inspiring, and sometimes dispiriting people and societies he encounters along the way. Science comes up in a few spots, both directly — there’s a charming section on Ralph Bagnold, a giant in the study of sand dunes — and indirectly, when the author is stranded amid ancient rock art that depicts the rich wildlife the Sahara used to contain, before it became a desert, a topic discussed by Morton as well.

Kids’ books

If I were to travel back and visit my 2005 self, I would suggest that he note down books read with his kids, of which there are a lot of great ones, and which he has trouble remembering. (They aren’t on the present list.) Certainly a highlight of the past year was finishing the 42-book comic book version of the Indian epic, The Mahabharata, with my six-year-old. It’s not surprising that it’s such an enduring story — it’s fascinating, and full of ethical quandaries. There’s apparently a new prose retelling that gets good reviews.

We’ve also read a lot of Asterix comics (e.g.), which I never knew when I was a kid. They’re great. Perhaps as a result, my six-year-old has become very fond of ancient civilizations, Rome in particular. There are a lot of very good kids’ books on the topic, such as Rome: In Spectacular Cross-Section, which have been fun to read.


Almost all of my wife’s and my movie watching is via Netflix, whose selection (on physical DVDs) is thankfully vast. The best movie seen this year, out of 16, is the appropriately titled “We Are the Best!” (2013), about a trio of 13-year-old girls in Sweden who form a punk band. It’s charming, funny, clever, and uplifting without being at all sappy.

Runners up: All is Lost (2013), An Education (2008), Nobody Else But You (2011). The last of these is perhaps the strangest of the three, a French mystery about a dead small-town starlet whose life mirrored that of Marilyn Monroe.

I can’t think of any deep insights to convey about these movies, or anything that touches on biophysics or science or anything else I usually write about. I should, I suppose, note that none of these movies were found by random browsing, but rather made use of Netflix’s recommendation algorithm. Make of that what you will…

Overall, it was a great year for both books and movies, revealing many new worlds that I wouldn’t have otherwise imagined. We’ll see what 2016 brings.

Happy New Year!

Recap of a graduate biophysics course — Part II

Great grey owl watercolorI’ll continue describing a graduate biophysics course I taught in Spring 2015. In Part I, I wrote about the topics we covered. Here, I’ll focus on the structure of the course — books, assignments, in-class activities, and the students’ final project — and note what worked and didn’t work. (What didn’t work: popsicle sticks.) Click for the syllabus.

My overall learning goals for the course were that students would be able to

  • …understand the physical principles that underlie important biological phenomena such as DNA packing, bacterial motion, membrane deformations, and signaling circuits.
  • …apply statistical and statistical-mechanical ideas to a wide variety of complex systems.
  • …read contemporary papers in biophysics and follow the aims and general approach.

How does one get there?


As I’ve written before, we are fortunate to live in an age in which there are good biophysics textbooks. Most notably:

  • Biological Physics: Energy, Information, Life, by Philip Nelson
  • Physical Biology of the Cell, by Rob Phillips and co-authors
  • Biophysics: Searching for Principles, by William Bialek
  • Physical Models of Living Systems, by Philip Nelson

The first two are “standard” biophysics texts in that they explore the statistical mechanics, electrostatics, and mechanics of DNA, proteins, membranes, and other cellular components, as well as the interplay of forces that control micro-scale biological interactions. Both are excellent books! I felt it would be useful to have an assigned textbook for the course, both for students to refer to, and to make it easier to have reading assignments that freed class time for more specialized discussions and activities. I chose Nelson’s Biological Physics book, mainly because it is more concise and “linear” in its progression of topics. (I did, however, distribute some excerpts from Physical Biology of the Cell, especially on DNA mechanics.) I was a bit worried that Nelson’s book would be too simple, since it’s geared towards undergraduates as well as graduate students, but this wasn’t a problem. The exercises aimed at graduate students are very good, and the straightforward nature of the book helped us move quickly, and gave us material to build on in class.

Bialek’s book is quite different, focusing on noise and signal processing, and the principles underlying things like gene expression, vision, chemotaxis, etc. I took bits from it, on photon detection in vision and on chemotaxis, which are excellent. Overall, it would be interesting to structure a course around this book, but one would miss out on the “mechanical” aspects of biophysics (DNA rigidity, membrane dynamics, etc.), and also on much of the variety that exists in the cellular world; I think these things are crucial for an introductory biophysics course. I must also point out that Bialek’s book is quite difficult — it takes a lot of thought to follow it, which is certainly fine, but that would place severe constraints on a ten week introductory course.

Physical Models of Living Systems is a fascinating book, and I extracted several pieces of it for the section of the course on “Cellular Circuits” (See Part I.) It would be great to make a course focused solely on the topics of this book, embellishing it with more discussion of experimental methods, but this also would take us away from “central” themes in biophysics. Still, it’s an excellent book. (I was fortunate to read and evaluate it before it came out! It’s nice to see my name in print in the acknowledgements!)

I also made use of various parts of Howard Berg’s classic Random Walks in Biology, on diffusion as well as bacterial strategies for motion (runs and tumbles, etc.).


I assigned weekly problem sets, which were usually a mixture of exercises from Nelson’s book and questions I either wrote myself or took from other sources. For an example, see Homework #4. Several of the problems required writing computer simulations, which is an extremely useful skill to practice. In general, the homework assignments went very well. Students noted, however, that the difficulty and time required were very inconsistent between problem sets. I am not surprised by this.
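To give a flavor of the simulation component (this is my own minimal sketch, not an actual assignment problem): many of the course's themes start from the random walk, and a few lines of code suffice to check the hallmark of diffusion, that the mean squared displacement grows linearly in time, ⟨x²⟩ = 2Dt in one dimension.

```python
import numpy as np

rng = np.random.default_rng(1)

# 5,000 one-dimensional random walkers, each taking 1,000 steps of +/-1.
n_walkers, n_steps = 5_000, 1_000
steps = rng.choice([-1.0, 1.0], size=(n_walkers, n_steps))
x = steps.cumsum(axis=1)

# Mean squared displacement after n_steps. With step size dx and time step dt,
# D = dx^2/(2*dt), so <x^2> = 2*D*t = n_steps in these units.
msd = np.mean(x[:, -1] ** 2)
print(msd)
```

Running this gives a mean squared displacement close to 1000, the number of steps, as the diffusion relation predicts.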

In-class activities

I wanted to integrate active learning into the class: lecturing as little as possible, since it’s a poor way to convey understanding, and having lots of occasions for students to think and do things in class. I do this a lot in my general-education undergraduate classes, but it’s less common in higher level classes.

Quite often, I asked students, either on their own or in a group of two or three, to figure out something that would either lead into a more detailed analysis, or that would in itself illustrate the implications of some physical concept. As an example of the former, we examined how fluorescence correlation spectroscopy (FCS), which measures the intensity fluctuations that result as fluorescent molecules diffuse in and out of a microscope’s focal volume, can yield the diffusion coefficient of the molecule. Lazily pasting a sketch:



Ignoring uninteresting numerical factors, we can figure out how the characteristic features of the autocorrelation curve (g(𝜏)), namely the value of g(0) and the location of the inflection point, depend on things like the concentration of molecules, the focal volume, and the molecules’ diffusion coefficient. So, I asked students to do this, which went quite well. Then, I went through the derivation of the exact expression for g(𝜏) — a long slog, which strengthened my resolve not to fill our class time with tedious and unenlightening things like this.
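For the curious, the scaling results are easy to check numerically against the standard model curve for two-dimensional diffusion, g(τ) = (1/N)/(1 + τ/τ_D) with τ_D = w²/(4D); the parameter values below are my own illustrative choices, not numbers from the class.

```python
def g_fcs(tau, N, w, D):
    """2D FCS model curve: g(tau) = (1/N) / (1 + tau/tau_D), tau_D = w^2/(4D).

    N: mean number of molecules in the focal volume
    w: focal spot radius (m); D: diffusion coefficient (m^2/s)
    """
    tau_D = w**2 / (4.0 * D)
    return (1.0 / N) / (1.0 + tau / tau_D)

N, w, D = 5.0, 0.25e-6, 1e-11   # ~5 molecules in focus, w = 250 nm, D = 10 um^2/s
tau_D = w**2 / (4.0 * D)

print(g_fcs(0.0, N, w, D))      # amplitude g(0) = 1/N, here 0.2
print(tau_D)                    # decay time ~ w^2/D, here ~1.6 ms
```

The amplitude g(0) = 1/N reports the (inverse) concentration, and the decay time w²/(4D) reports the diffusion coefficient, exactly the dependencies the students worked out.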

More interesting, and somewhat related, examples came from our investigations of the physical constraints on bacterial chemotaxis. If we consider a bacterium capturing nutrients that surround it at some mean concentration c_0, and “measuring” the number it has acquired over time t_m, how accurately can it measure c_0? This is an important question, since the bacterium will “want” to migrate to regions of higher nutrient concentration, and so will need to know whether a perceived increase in food abundance is real, or just a statistical fluctuation due to the randomness of diffusion. I asked students to figure this out, which not only helped really cement ideas of noise and fluctuations — more so than just hearing me state them — but also led to more interesting questions like: since the measurement accuracy increases with increasing t_m, why shouldn’t the bacterium just increase t_m arbitrarily? What physical limits constrain the measurement time?
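The flavor of the counting argument, with numbers of my own choosing rather than the course's, fits in a few lines: a perfectly absorbing sphere of radius a in concentration c_0 captures molecules at the diffusion-limited rate 4πDac_0, and Poisson statistics then give a fractional uncertainty of 1/√N on a count of N molecules.

```python
import math

# Order-of-magnitude estimate; prefactors of order one are dropped, and the
# parameter values are illustrative.
D = 1e-9    # m^2/s, small-molecule diffusion coefficient
a = 1e-6    # m, roughly a bacterium's size
c0 = 6e17   # molecules/m^3, about 1 nanomolar
t_m = 1.0   # s, measurement time

N = 4 * math.pi * D * a * c0 * t_m   # molecules counted in t_m
rel_error = 1 / math.sqrt(N)         # Poisson counting noise
print(N, rel_error)
```

Even at nanomolar concentrations a micron-sized cell counts thousands of molecules per second, so a one-second measurement pins down c_0 to about a percent; shorter t_m means fewer counts and proportionally (as 1/√N) worse accuracy, which is the tension the students then explored.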

Contemporary papers

We discussed a lot of contemporary papers in class. Just to give a few examples: after covering FCS above, we looked at a remarkable paper from Carlos Bustamante’s group in which the authors measured the enhanced diffusion of an enzyme due to the heat released from its chemical activity. We spent quite a while discussing experiments from Kazuhiko Kinosita’s group on the workings of ATP synthase, a fascinating rotary molecular motor (e.g. this and this), which involved questions like “How can we relate the observed motion of objects attached to the protein complex to the work done by the complex?” and “How close is ATP synthase’s performance to fundamental thermodynamic limits on machines?” We looked at several examples of cellular circuits, including one that relates to my lab’s interests in gut microbial activities. In general, using class time to discuss contemporary research went very well, both in terms of being interesting in itself, and for illustrating the connection between topics of the course and actual research activity.

Discussing readings

I also had students read and briefly comment on sections of the textbook, or other readings, in class. This not only freed class time — i.e. not having to examine in detail things that are well-explained elsewhere — but also gave me a good sense of how well students understood things. In retrospect, I should have done more of this. I didn’t in part because it takes a good amount of advance planning to map out exactly what students should present on, and in part because I was perhaps too used to lower-level undergraduate courses in which students don’t do very well with exercises that require independent reading and thinking. It was very liberating to teach a graduate course — all the students actually read and think!

What didn’t work: popsicle sticks

Overall, designing the course to be very active was great — I think students learned a lot, and it made the course lively and fun to teach. I would definitely run the class similarly the next time I teach it.

The one major failure of my active learning approach was my tactic for getting a variety of voices in class: writing each student’s name on a popsicle stick, and picking a stick at random to call on someone to answer a question. I took this idea from an activity I did last year with my older son’s fourth grade class, where it worked wonderfully. Here, however, it was a disaster. Despite everyone in principle accepting the idea that it’s fine to say wrong things, or respond to questions with further questions about what’s unclear, students really didn’t like being put on the spot by random forces. I distributed a mid-term evaluation to get feedback on how the course was going; the feedback was very positive with the near-universal exception of the popsicle sticks! I acquiesced, therefore, and got rid of them. I’m not completely happy with this — without the random selection, some of the quieter students very rarely spoke up, and it would have probably helped them, unpleasant though it may be, to practice being more outspoken.

Final Project

The students each did a final project, for which my goals were that they

  • Learn more about biophysical topics.
  • Practice constructing a research question.
  • Think about experimental design, and how it relates to the questions we ask.

In other words: I wanted students to build on their understanding of the topic of biophysics, but also enhance their understanding of how biophysics progresses. Each student gave a 15 minute (+ questions) presentation in class that covered the background of their chosen topic, a statement of something unknown plus reasons to care that it should be understood, and the experimental design of a study to investigate it.

This was asking a lot. It went fairly well, but there is considerable room for improvement. Not surprisingly, students’ coverage of the background science was generally good. The topics chosen included embolisms in the xylem of plants, bacteriophages and human diseases, ways of modeling actin networks, and more. In retrospect, we should have spent much more time iteratively working on plans for hypothetical future experiments, critiquing methods and their potential outcomes. Planning experiments is very difficult! It’s hard to really do this well in a ten-week course, however; we only started dealing with the final projects with a few weeks to go in the term. Still, overall, the projects were enjoyable to listen to, and I think people learned quite a bit from them.

Concluding thoughts

As mentioned in Part I, I consider the course overall a great success. It took a lot of work to put together, but it was very enjoyable and stimulating to teach, and students liked it a lot. I’m not teaching it in 2015-16 — it’s rare here to offer graduate electives in consecutive years — but I expect that I’ll teach it again in the near future. For anyone else thinking of teaching a similar course: I’m happy to share any of my materials, including about 90 pages of notes on the day-to-day content of the class.

When designing the course, one thing I had worried about was whether, given the breadth of biophysics and the variety of topics we’d be exploring, the subject would be seen as having any overall coherence. It’s not obvious, even to biophysicists, that it does. It was satisfying to see that the course did indeed hold together — that the themes of biological materials interacting via physical forces, quantitative analyses of dynamical systems, and the overarching roles of statistical mechanics and random processes really did tie the class together.

Today’s illustration is again a painting based on a photo from Owls by Marianne Taylor.

Recap of a graduate biophysics course — Part I

owl watercolor

In Spring 2015 I taught a graduate biophysics course for the first time. It was a first in several ways: the course didn’t exist before, so I developed it from scratch, and it was also the first graduate course I’ve taught in my nine years as a professor! I’ve been thinking for months that I should write a summary of how it went, especially because such classes are uncommon enough that describing what worked and what didn’t work might be useful for others.

Overall, the class was a great success. It was fun and rewarding to teach — though it took a lot of work — and the students seemed to get a lot out of it. There were ten graduate students and one undergraduate enrolled, which is large for a graduate elective at Oregon. The student evaluation scores were the highest I’ve ever received, averaging 4.7 out of 5.0 in seven categories.

Here, I’ll describe some of the topics we explored. In the next post, I’ll describe the structure of the course: in-class activities, books and other sources, student projects, and more.


There were several themes I wanted to cover:

  • the major roles that statistical mechanics and, relatedly, randomness and probabilistic processes, play in biophysics
  • the mechanics of cellular structures
  • cellular circuits: how cells construct switches, logic gates, and memory elements
  • special bonus theme: amazing things everyone should be aware of

A random walk through biophysics

Much of the course explored the roles of statistical mechanics and, more generally, randomness and probabilistic processes, in biophysics. This included the physics of random walks and Brownian motion, and experimental methods for measuring diffusive properties of proteins and other molecules. We spent quite a while exploring how Brownian motion and other physical constraints impact the strategies that microorganisms use to perform various tasks. For example:

  • Why aren’t there microscopic “baleen whales” that scoop up nutrients as they swim through water?
  • Why is it a good idea for a bacterium to cover just a tiny fraction of its surface with receptor molecules?
  • Why are bacteria small? How can some bacteria be huge?
  • How can bacteria migrate towards regions of higher nutrient density? What are the physical limits on the sensitivity of chemotaxis, and how close do bacteria come to these limits?

I’ve commented on some of these topics in past blog posts, for example this one on the non-intuitive nature of diffusion-to-capture.

More generally, we studied several examples of how understanding probabilistic processes enables insights into all sorts of systems. These ranged from recent examples like using brightness fluctuations to quantify the number of RNA molecules in single cells, and also discover “bursts” of transcription (Golding et al. 2005), to classic examples like the famous Luria-Delbrück experiment. In all these cases, a deep message is that probability distributions encapsulate a great deal of information. The variance of some quantity, for example, may be as informative as its mean.
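The Luria-Delbrück point is a nice one to simulate (the sketch below is my own toy version, with illustrative parameter values): if mutations arise during growth, a mutation early in a culture's history founds a large "jackpot" clone, so the number of mutants across parallel cultures has a variance far exceeding its mean, unlike the Poisson statistics expected if mutations happened only at the moment of selection.

```python
import numpy as np

rng = np.random.default_rng(2)

mu, g, n_cultures = 5e-7, 20, 5000  # mutation rate/division, generations, cultures

def mutants_in_culture():
    """Grow one culture from a single cell; mutations arise during divisions."""
    m = 0
    for t in range(g):                        # ~2^t cells divide at generation t
        new_mutations = rng.binomial(2**t, mu)
        m += new_mutations * 2**(g - t - 1)   # each mutant clone keeps doubling
    return m

ld = np.array([mutants_in_culture() for _ in range(n_cultures)])
poisson = rng.poisson(ld.mean(), size=n_cultures)  # null: mutation upon selection

# Fano factor (variance/mean): ~1 for Poisson, huge for Luria-Delbruck jackpots.
print(ld.var() / ld.mean(), poisson.var() / poisson.mean())
```

The two models have the same mean number of mutants, so only the distribution, in particular its variance, distinguishes them, which is exactly the "variance may be as informative as the mean" message.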

The mechanics of cellular structures

Understanding the physical properties of biological materials and how they matter for the functioning of living things is central to biophysics, and so we of course discussed the rigidity of DNA, the electrostatics of viral assembly, phase separation in lipid membranes, and other such topics. The connections to randomness and statistical mechanics are clear, since entropic forces and thermal fluctuations are huge contributors to the mechanical properties of these microscopic objects.

As one of many examples of the interplay between energy and entropy, I’ll note here DNA melting — the separation of the two strands of a DNA double helix at a particular, well-defined temperature. Before examining it, we learned about PCR (polymerase chain reaction), the method by which fragments of DNA are duplicated over and over, enabling bits of crime scene debris or tainted food to be analyzed for their genetic fingerprints. Repeated cycles of melting and copying are the essence of PCR, so understanding DNA melting is of practical concern, as well as being very interesting in itself. Why does DNA have a melting temperature? This is a question whose answer seems obvious, then less obvious, and then interesting as the amount of thought one puts into it increases. At first, one might find it unsurprising that DNA separates at some well-defined temperature. After all, water melts at some particular temperature, and countless other pure materials have well-defined phase transitions. Looking further, however, one can think of DNA as a “zipper” whose links form a 1-dimensional chain, each with a lower energy when closed (base-paired) than open. With a bit of statistical mechanics, it’s easy to show that this chain won’t have a sharp melting transition, but rather will gradually open with temperature — a common property of one-dimensional systems [1]. The puzzle is resolved, however, by properly considering entropy: the double-stranded DNA might open at points in the middle, forming “bubbles” of open links (see below). These open links cost energy but, crucially, increase the entropy of the molecule, since the bubble halves can wobble and fluctuate. Above a critical temperature, the entropic free energy wins over the energetic benefit of staying linked — bubbles grow, and DNA melts!

From M. Peyrard, “Biophysics: Melting the double helix.” Nature Physics 2: 13–14 (2006).
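The "no sharp transition without bubble entropy" point is easy to see numerically. In an even simpler caricature than the zipper (my own sketch, with illustrative parameter values): treat each base pair as an independent two-state link, closed with pairing energy ε or open with s extra configurations. The open fraction is then a smooth sigmoid in temperature whose width never shrinks, no matter how long the chain.

```python
import numpy as np

eps = 0.05     # eV, energetic cost to open one link (illustrative)
s = 10.0       # extra configurations per open link (illustrative)
kB = 8.617e-5  # eV/K, Boltzmann constant

T = np.linspace(150.0, 600.0, 451)   # temperature range, K
x = s * np.exp(-eps / (kB * T))      # Boltzmann weight of "open" vs "closed"
f_open = x / (1.0 + x)               # fraction of open links

# The 25% -> 75% "melting" range spans hundreds of kelvin: a smooth
# crossover, not a sharp transition, independent of chain length.
T25 = T[np.searchsorted(f_open, 0.25)]
T75 = T[np.searchsorted(f_open, 0.75)]
print(T25, T75)
```

Cooperativity, here supplied by the entropy of bubble formation, is what turns this broad crossover into the sharp melting observed for real DNA.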

One of the things I especially like about the course is that we can consider “universal” materials like DNA and membranes, but also very specific materials, manifestations of the variety of life. For example, we looked at studies of Vorticella, a one-celled organism that can propel itself several body lengths (hundreds of microns) in milliseconds by harnessing the power of electrostatic forces to collapse bundles of protein fibers.

Cellular circuits: how cells construct switches, logic gates, and memory elements.

Cells do more than build with their components; they also compute — making decisions, constructing memories, telling time, etc. Our understanding of this has blossomed in recent years, driven especially by tools that allow us to create and manipulate cellular circuits. My own thinking about this, especially with respect to teaching it, was influenced heavily by Philip Nelson’s excellent recent textbook Physical Models of Living Systems, which I’ll comment on more in Part II.

We began by learning the basics of gene expression and genetic networks and then moved on to feedback in these networks and schemes for analyzing bistable switches. The physical modeling of these circuits leads to two interesting observations: (i) that particular circuit behaviors are possible in particular regions of the parameter space, which correspond to particular values of biophysical or biochemical attributes, and (ii) that the analysis of these sorts of networks is exactly the same as that of other dynamical systems that physics students are used to seeing. Neither of these is surprising, but both are worth discussing, and they tie back to the question I asked myself before the course of whether to include the topic of cellular circuits at all. In retrospect, I’m very glad I did, not only because it’s important, but because it highlights the power of quantitative analysis in biological systems separate from concepts of mechanics or motion. Since this sort of analysis is deeply ingrained in physics education, it provides yet another route for physicists to impact the study of living systems. Of course, it doesn’t have to be so. One could imagine a world in which mathematical analysis was as ingrained into biological education as it is in physics, but despite occasional pleas to make this happen, such a world is far removed from ours.
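
To make point (ii) concrete, here is a minimal sketch of that kind of fixed-point analysis in Python. The model (a self-activating gene with a cooperative Hill-function feedback, plus basal production and linear degradation) and all parameter values are illustrative, not taken from any particular paper; the point is that counting fixed points of dx/dt is exactly the dynamical-systems exercise physics students know.

```python
def fixed_points(beta, gamma=1.0, K=1.0, n=2, basal=0.05, x_max=10.0, steps=100000):
    # dx/dt = basal + beta * x^n / (K^n + x^n) - gamma * x
    # Locate fixed points by scanning for sign changes of dx/dt on a fine grid.
    def f(x):
        return basal + beta * x**n / (K**n + x**n) - gamma * x
    roots = []
    dx = x_max / steps
    for i in range(steps):
        a, b = i * dx, (i + 1) * dx
        if f(a) * f(b) < 0:  # sign change: a fixed point lies in (a, b)
            roots.append(0.5 * (a + b))
    return roots

# Strong positive feedback: three fixed points (low stable, unstable, high stable),
# i.e. a bistable switch. Weak feedback: a single fixed point, no switch.
print(len(fixed_points(beta=4.0)))
print(len(fixed_points(beta=1.0)))
```

Sliding beta between these values is exactly the "particular behaviors live in particular regions of parameter space" observation: bistability appears only once the feedback is strong enough.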

Amazing things

I decided to end the course with some very amazing, very recent developments in how we examine or understand the living world, regardless of whether or not one would classify them as biophysical. I picked three. I’ll pause for a moment while you guess what they are… (While waiting, you can look at two more owl illustrations. The one at the top is mine; these are from the kids. All are based on photos from the excellent Owls by Marianne Taylor.)



One was CRISPR / Cas9, the new and revolutionary approach to genome editing. As readers likely know, CRISPR has generated a frenzy of excitement, more than any scientific advance I can think of from the past decade. While tools for manipulating genomes have existed for a while, CRISPR / Cas9 provides a method to target essentially any sequence simply by providing a corresponding sequence of RNA that guides a DNA-cleaving enzyme. This would be worth covering just for its scientific impact, but it more broadly brings up issues of ethics and social impact. How could one go about, for example, engineering human embryos, or destroying pathogenic species? Would one want to? The story behind CRISPR provides a great illustration of the power of basic science. Its discovery in bacteria, from studies of their battles with viruses, was quite a surprise. It’s likely that surprises of similar magnitude still await us in unexplored corners of the living world. Connecting CRISPR to biophysics isn’t hard, by the way, since its mechanisms of operation are closely tied to the mechanics of bending and cutting DNA.

The second “amazing” topic is DNA sequencing. The cost of sequencing has fallen by orders of magnitude in recent decades. We’re close, for example, to being able to sequence an entire 3 billion base pair human genome for $1000! All this is driven by physically fascinating technologies — for example, detecting the ions released from a single nucleotide being added to a growing DNA strand, or the electrical current fluctuations as a single DNA molecule snakes through a nanopore.

The final amazing topic was optogenetics, the optical control of cells via genetically encoded, light-sensitive proteins. Using light-activated ion channels, for example, researchers can selectively turn neurons on and off in live organisms, a real-life version of science fiction-y mind control. Here again, the connections between technology and basic research are clear. Channelrhodopsin, one of the first and most useful proteins to be used and modified for optogenetic ends, was discovered in studies of unicellular algae.

Overall, this excursion was great. It tied into the main substance of the course better than I expected, and the students clearly shared my excitement about these topics. It was also noted that this sort of connection to cutting edge developments is sadly lacking in most physics courses.

Next time…

In Part II, I’ll describe some of the “active learning” approaches I implemented, which went well with one exception, and I’ll also discuss books, readings, and assignments. (For a glimpse of all this, you can see the syllabus.) I’ll note both then and now that all of my materials for the course are available to anyone thinking of teaching something similar — feel free to email me.



[1] For a simple treatment of the “zipper” problem, see C. Kittel, “Phase Transition of a Molecular Zipper.” Am. J. Phys. 37, 917–920 (1969). The paper considers the general case of a zipper in which each link has one closed state and “g” open states. The g=1 case is quick to consider, and is a nice end-of-chapter exercise in Kittel and Kroemer’s Thermal Physics (an undergraduate statistical mechanics textbook), which is where I first encountered it. For g=1, there is no sharp phase transition. The g>1 case gives a sharper transition, but one shouldn’t spend much time thinking about it, since it’s much more realistic to think about bubble formation than DNA unzipping from its ends.

In memoriam: Steven Vogel


I was sad to learn that Steven Vogel passed away yesterday. He was a giant in the field of biomechanics, and his books on the subject are brilliant, fascinating, and fun. I’ve lost count of how many people I’ve run into who, like me, have found these books deeply inspirational. The first one I read was Life’s Devices: The Physical World of Animals and Plants, which remains a favorite, full of well-explained examples of how life is “engineered” — how the mechanics of fluid flows, forces on beams, velocities, and viscosities dictate and illuminate how living things work. Why can’t bacteria swim like dolphins? How do prairie dogs keep from suffocating in their burrows? Why do big animals need such thick bones? Vogel’s writings spanned a remarkably diverse set of subjects, from elephants to ants to fungal spores to plants, and convey to the reader a deep sense of how physics and biology are intimately related. The books occupy a curious middle ground between books for specialists and books for the non-scientist general reader; they are warm, conversational, and don’t require advanced knowledge of physics or biology, but they do contain “real” science, with equations when necessary.

Prairie dog burrow

From “Life’s Devices.” A prairie dog, and airflows generated in its burrow by the geometry of the tunnel entrances.

It’s very rare to find books that really change the way one looks at the world, but Vogel’s did just that, showing that woven amid the remarkable diversity exhibited by the living world run unifying threads of physical function. And just as we develop a deeper appreciation of the planets by understanding the simple laws that govern their motions, we gain a deeper appreciation of our fellow organisms by understanding the forces that guide them.

In my own work, I don’t study anything macroscopic. My lab looks a lot at larval zebrafish, a few millimeters long, but even here we focus on the microscopic bacteria within them. My group’s work on membranes is also very small-scale. Nonetheless, the perspective that we can gain insights into these systems by considering their material properties and spatial structure is central to our work, and to a large swathe of modern biophysics. It is, however, not a universal belief, and there’s a constant tension with the view, often implicit, that cataloging the pieces of living systems, especially the genes that “cause” various processes or the networks that link genes together, is equivalent to understanding life.

I’m happy that a few years ago I met Steven Vogel, at a conference on education at the interface of physics and biology. He was energetic and very friendly. We corresponded a bit by email afterwards; I was keen to get his thoughts on an article I was writing on the biophysics-for-non-science-majors course I had developed. (I’ve assigned several excerpts from his books when teaching the class.) His comments were warm and insightful. I’ve thought often of elaborating on materials I’ve written for the class to write a popular book on biophysics. I’ve also thought that if I were to do so, it would be great to get Professor Vogel’s comments — sadly, it is now too late for that. I do hope that someday I’ll write something substantial, and that it will have at least some of the spirit and charm of Vogel’s books.

Today’s illustration: a sea turtle I painted a few weeks ago. The entry for “sea turtle” in the index of Life’s Devices:

sea turtle. See turtle, sea, y’see.

The text explores the hull shapes of boats and buoyant animals, including baby sea turtles.

Learning about (machine) learning — part II

Mega Man X — colored pencil, RP

In Part I, I wrote about how I started exploring the topic of machine learning, and briefly described one of its main aims: automating the task of classifying objects based on their properties. Here, I’ll give an example of this in action, and also describe some general lessons I’ve drawn from the experience. The first part is probably not particularly interesting to most people, but it might help make the ideas of Part I more concrete. The second part gets at the reasons I’ve found it rewarding to learn about machine learning, and why I think it’s a worthwhile activity for anyone in the sciences: the subject provides a neat framework for thinking about data, models, and what we can learn from both of them.

1 I’d recognize that clump anywhere

I thought I’d create a somewhat realistic but simple example of applying machine learning to images, to be less abstract than the last post’s schematic of pears and bananas. My lab works with bacteria a lot, and a very common task in microbiology is to grow bacteria on agar plates and count the colonies to quantify bacterial abundance. (In case you want to make plates with your kids, by the way, check out [1].) Here’s what a plate with colonies looks like:

The colonies are the little white circles. Identifying them by eye is very easy. It’s also quite easy to write a non-machine-learning program to select the colonies, defining a priori thresholds for intensity and shape that distinguish colonies quite accurately. (In fact, I’ve assigned this as an exercise in an informal image analysis course I’ve taught.) But, for kicks, let’s imagine we aren’t clever enough to think of a classification scheme ourselves. How could we use machine learning?

1.1 Manual training

We first need to identify objects — colonies and things that aren’t colonies, like streaks of glare. Let’s do this by simple intensity thresholding, considering every connected set of pixels that are above some intensity threshold as an object. (Actually, I first identify the circle of the petri dish, and apply high- and low-pass filters, but this isn’t very interesting. I’ll comment more on this later.)
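
The thresholding-and-grouping step can be sketched in a few lines. This is a Python illustration of the connected-components idea (my actual analysis was done differently, in MATLAB): flood-fill outward from each bright pixel to collect each object.

```python
from collections import deque

def label_objects(image, threshold):
    """Group connected pixels above `threshold` into objects (4-connectivity).

    `image` is a list of lists of intensities; returns a list of objects,
    each a list of (row, col) pixel coordinates.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    objects = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                # Flood-fill from this seed pixel to collect the whole object.
                pixels, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    pr, pc = queue.popleft()
                    pixels.append((pr, pc))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = pr + dr, pc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and image[nr][nc] > threshold and not seen[nr][nc]):
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                objects.append(pixels)
    return objects

# A toy image with two bright blobs on a dark background:
img = [[0, 0, 0, 0, 0],
       [0, 9, 9, 0, 0],
       [0, 9, 0, 0, 7],
       [0, 0, 0, 0, 7]]
print(len(label_objects(img, threshold=5)))
```

Each object found this way is then a candidate colony or non-colony for the classification step below.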

Filtered (left) and filtered and thresholded (right) plate images.


Next we create our “training set” — manually identifying some objects as being colonies, and some as not being colonies. I picked about 30 of each. The non-colonies tend to be large and elongated, or very small specks:

Manually identified colonies (green) and not-colonies (red). Black objects have not been classified.


For each object, whether or not it’s in the training set, we can measure various properties: size, aspect ratio, intensity, orientation, etc. Like the pear and banana example in Part I, we want to create a boundary in the space of these parameters that separates colonies and not-colonies. What parameters should we consider? Let’s look at the area of each object relative to the median area of colonies, and the aspect ratio of each object, since these seem reasonable for distinguishing our targets. You might be aghast here — we’re having to be at least a little clever to think of parameters that are likely to be useful. What happened to letting the machine do things? I’ll return to this point later, also.

For colonies and not-colonies, what do these parameters look like? Let’s plot them — since I’ve chosen only two parameters, we can conveniently make a two-dimensional plot.

It does seem like we should be able to draw a boundary between these two classes. We’d like the optimal boundary, the one that maximizes the gap between the two classes, since this should give us the greatest accuracy in classifying future objects. Put differently, we not only want a boundary that separates colonies from non-colonies, but we want the thickest boundary such that colonies are on one side and non-colonies on the other. Technically, we want to maximize the “margin” between the two groups. A straight-line boundary is fairly straightforward to calculate, but it’s obvious that such a boundary won’t work here. Instead, we can try to transform the parameter space such that in the new space we can aim for a linear separation. One might imagine, perhaps, that instead of area and aspect ratio, the coordinates in the new space are area^3 and (aspect ratio)^2*area^4, for example. Remarkably, one doesn’t actually need to know the transformation from the normal parameter space; all we need is the inner product of vectors in this space. This approach, of determining optimal boundaries in some sort of parameter space, is that of a support vector machine, one of the key methods of machine learning. The actual calculation of the “support vectors,” the data points that lie on the optimal margin between the two groups, is a neat exercise in linear algebra and numerical optimization. The support vectors for our “training set” of manually-curated groups in the bacteria plate image are indicated by the yellow circles above.
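
The "all we need is the inner product" claim can be verified directly. Here's a quick Python sketch (the particular kernel and points are just for illustration) comparing a quadratic kernel against an explicit feature map into a six-dimensional space: the kernel value equals the inner product of the mapped vectors, term for term, which is why the algorithm never has to construct the transformed space at all.

```python
import math

def phi(p):
    # Explicit feature map for the quadratic kernel K(u, v) = (1 + u.v)^2
    # in two dimensions: six coordinates in the transformed space.
    x, y = p
    r2 = math.sqrt(2.0)
    return (1.0, r2 * x, r2 * y, x * x, y * y, r2 * x * y)

def quad_kernel(u, v):
    # Evaluated directly in the original two-dimensional space.
    return (1.0 + u[0] * v[0] + u[1] * v[1]) ** 2

u, v = (0.5, 2.0), (3.0, -1.0)
lhs = quad_kernel(u, v)
rhs = sum(a * b for a, b in zip(phi(u), phi(v)))
print(lhs, rhs)  # identical: the kernel IS the inner product in the mapped space
```

The expansion works because (1 + u·v)^2 = 1 + 2 u·v + (u·v)^2, and each term matches a pair of coordinates in phi.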

There is, as one might guess, a great deal of flexibility in the choice of transformations. There is also freedom in setting the cost one assigns to objects of one class that lie in the territory of the other class. (In general it may be impossible to perfectly separate classes — imagine forcing a linear boundary on the training data above — so setting this cost is important.)

1.2 Does it work?

Now we’ve got a classifier — the “machine” has learned, from the data, a criterion for identifying colonies! I had to specify what parameters were relevant, and a few other things, but I never had to set anything about what values of these parameters differentiate colonies from non-colonies. We can now apply this classifier to new data, such as a completely new plate image, using the same support vectors we just learned. If all has gone well, the algorithm will do a decent job of identifying what is and isn’t a colony. Let’s see:

Left: a new plate image. Right: classification of objects. Blue = colonies; yellow = not-colonies.


Not perfect, but pretty good! We could improve this with a larger set of training data, and by considering more or different parameters (though this can be dangerous, as I’ll get to shortly). Here’s what the object features look like:

Again, it seems pretty good. There are probably a handful of mis-classified points. (I’m not going to bother actually figuring out the accuracy.)

So, there it is! Machine learning applied to bacterial colonies. If you look at the plates, you can see regions in which colonies have grown together, making two conjoined circles. We could go further and “learn” how to identify these pairs, again starting with a training set of manually identified objects. We could also iterate this process, finding errors in the machine classification and adding this to our training set. The possibilities are endless…

1.3 How sausage is made

Now let’s return to the several issues I glossed over. We first notice that we needed human input at several places besides the creation of the training data set: identification of the plate, image filtering, choices of parameter transformations, etc. This seems rather non-machine-like. In principle, we could have learned all of these from the data as well: classifying pixels as belonging to plate or non-plate, examining a space of possible filtering and other image manipulations, etc. However, each of these would have a substantial “cost” — a vast amount of training data on which to learn the appropriate classification. If we’re Google and have billions of annotated images on hand, this works; if not, it’s neither feasible nor appealing. Recall that we started using machine learning to avoid having to be “clever” about analysis algorithms. In practice, there’s a continuum of tradeoffs between how clever we need to be and how much data we’ve got.

We should be very careful, however. In general, we’ve got a lot of degrees of freedom at our disposal, and it would be easy to dangerously delude ourselves about the accuracy of our machine learning if we did not account for this flexibility. We could try, for example, lots of different “kernels” for transforming our parameter spaces; we may find that one works well — is this just by chance, or is it a robust feature of the sort of data we are considering? It’s especially troublesome that in general, the task of learning takes place in high-dimensional parameter spaces, not just the two-parameter space I used for this example, making it more difficult to visually determine whether things “make sense.”

2 Learning about data

Was learning about machine learning worthwhile?

From a directly practical point of view: yes. As mentioned at the start of the last post, my lab is already using approaches like those sketched above to extract information from complex images, and there’s lots of room for improvement. Especially if I view machine learning as an enhancement of human-driven analysis, rather than expecting one’s algorithms to act autonomously to make inferences from data, I can imagine many applications in my work. It has been rewarding to recognize the continuum, noted above, between human insight with little data and automation with lots of data, and it’s been good to learn some computational tools to use for this automation [2].

But this adventure has also been worthwhile from a broader point of view. The subject of machine learning provides a useful framework for thinking about data and models. Those of us who have been schooled in quantitative data analysis learn a lot of good heuristics — that having lots of parameters in a model is bad, for example. As John von Neumann reportedly said, “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” More formally, we can think of model fitting as having the ultimate goal of minimizing the error between our model and future measurements (e.g. the second, “test” plate above), under the constraint that all we have access to are our present measurements (the “training” plate). It is, of course, easy to overfit the training data, giving a model that fits it perfectly but that fails badly on future tests. This is both because we may simply be fitting to noise in the training data, and because overly complex models expand the ways in which we can miss the unknowable, “true” process that describes the data, even in the absence of noise.
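
A toy version of overfitting, sketched in Python with invented noise values: the "true" process is a straight line, and we compare a simple least-squares line against a polynomial that interpolates the five training points exactly. The interpolant has zero training error but does worse on a held-out point.

```python
def lagrange(points, x):
    # Interpolating polynomial through all points: zero error on training data.
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def fit_line(points):
    # Closed-form least-squares line: the "simple" model.
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda x: slope * x + intercept

# "True" process y = 2x, observed with fixed, made-up noise:
noise = [0.3, -0.4, 0.5, -0.2, 0.4]
train = [(x, 2.0 * x + noise[x]) for x in range(5)]

line = fit_line(train)
x_test, y_true = 2.5, 5.0
print(abs(line(x_test) - y_true))             # small: the simple model generalizes
print(abs(lagrange(train, x_test) - y_true))  # larger: the interpolant chased the noise
```

With this particular noise, the line misses the held-out point by about 0.14 while the "perfect" interpolant misses by about 0.31: fitting the training data exactly meant fitting its noise.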

Much of the machine learning course dealt with the statistical tools to understand and deal with these sorts of issues — regularization to “dampen” parameters, cross-validation to break up training data into pieces to test on, etc. None of this is shocking, but I had never explored it systematically.
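
The cross-validation idea is simple enough to sketch in a few lines of Python (illustrative only): partition the sample indices into k folds, and hold each fold out in turn as the "test" piece while the rest serve as training data.

```python
def k_fold_splits(n_samples, k):
    # Partition sample indices into k folds; yield (train, test) index lists,
    # with each fold held out exactly once.
    indices = list(range(n_samples))
    folds = [indices[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

# Ten samples, five folds: each split trains on 8 samples and tests on 2.
for train_idx, test_idx in k_fold_splits(10, 5):
    print(len(train_idx), len(test_idx))
```

Averaging a model's error over the k held-out folds gives a far more honest estimate of future performance than its error on the data it was fit to.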

What was shocking, though, was to learn a bit of the more abstract concepts underlying machine learning, such as how to assess whether it is possible for algorithms to classify data, and how this feeds into bounds on classification error (e.g. this). It’s fascinating. It’s also fairly recent, dating largely from just the past few decades. I generally think I’m pretty well read in a variety of areas, but I really was unaware that much of this existed! It’s a great feeling to uncover something new and unexpected. That in itself would have made the course worthwhile!

3 Learning about Mega Man

Continuing the illustration theme of Part I, I drew Mega Man (at the top of the post), which took far more time than I should admit to. K. did a quick sketch:

[1] The present issue of Cultures, from the American Society for Microbiology, is the “Kid’s Issue.” Click here for the PDF and here for the “Flipbook.” Pages 96-97 describe how to make your own gel in which to grow cultures.

[2] I’ve written everything I’ve done, both for the course and these examples, in MATLAB, using the LIBSVM library [https://www.csie.ntu.edu.tw/~cjlin/libsvm/]. For one homework assignment, I wrote my own support vector machine algorithm, which made me realize how wonderfully fast LIBSVM is.

Learning about (machine) learning — Part I

Machine learning is everywhere these days, as we train computers to drive cars, play video games, and fold laundry. This intersects my lab’s research as well, which involves lots of computational image analysis (e.g.). Nearly everything my students and I do involves writing or applying particular algorithms to extract information from data. In the past two years or so, however, we’ve dipped our toes into some problems that may be better served by machine learning approaches. (I explain the distinction below.) “We” was really Matt Jemielita, a recently-graduated Ph.D. student (now off to Princeton), who applied basic machine learning methods to the classification of “bacteria” and “not-bacteria” in complex images.

Given all this — its relevance to the contemporary world and to my research — I thought I should dive more systematically into understanding what machine learning is and how to apply it. I’m certainly not an expert on the subject, but here’s an account of how I went about exploring it. This will be continued in Part II, which will go into why I’ve found the topic fascinating and (probably) useful as a framework for thinking about data. Also in Part II, I’ll give an example of machine learning applied to analyzing some “real” images. In Part I, I’ll mostly describe how I learned about learning, and all you’ll get as an example is a silly schematic illustration of identifying fruits.

1 Starting 34th grade

My usual approach when learning new topics is to read, especially from a textbook if the subject is a large one that I want to cover systematically. This time, however, I decided to follow a course, watching pre-recorded lectures on-line and doing all the homework assignments and exams. The class is “Learning from Data (CS156),” taught at Caltech by Professor Yaser Abu-Mostafa (see here for details: https://work.caltech.edu/telecourse.html). It’s a computer science course, intended for a mix of upper-level undergraduates and lower-level graduate students. All eighteen lectures are available via YouTube, and the course was explicitly designed to be made publicly accessible. I had read good things about the course on-line. I can’t really remember how I picked it over another popular machine learning course, Andrew Ng’s at Stanford, but I did notice that the videos of the Caltech course were aesthetically more pleasant. (The professor has very nice color palettes of shirts and jackets and ties. I briefly wondered if I should wear ties when lecturing in my own classes — but only very briefly.)

The course was excellent: clear, interesting, and well-organized. It’s well known that viewership of on-line courses and lectures drops precipitously as the course goes on, and this appears to be the case for this class as well, at least as measured by YouTube views of each of the lectures:


Views of “Learning from Data (CS156)” lectures on YouTube, as of Dec. 17, 2014. The spikes are the classes on neural networks and support vector machines — more on the latter later.

My own rate of progress was very non-uniform. I started the course during the 2014-15 Winter break, when I had relatively large amounts of time; I finished close to half the course in three weeks (plotted below). Then, when the academic term started, time became more scarce. When the Spring term started — and I was teaching a new biophysics graduate course — large blocks of time to spend on machine learning essentially disappeared. I finally watched lecture #18 in June, about four months after lecture #17! It was September before I finished the final exam. Still, I did it, and I managed to average about 90% correct on the homework assignments, which generally involved a good amount of programming. I scored 100% on the final exam.


Days on which I watched lectures 1-18. The dashed line indicates the start of the Winter 2015 term.

2 Active and Passive Learning

The lectures, as mentioned, were great — clear and focused, while also projecting a warm and enthusiastic attitude towards the subject and the students. It’s interesting, though, that they were vastly different in style from the classes I teach. They were purely lectures, without any “active learning” activities — no clicker questions, no interactive demonstrations, no discussions with one’s neighbors (which in my case would have involved me either pestering my kids or random people at a café). Though I’m a great fan of active learning, I have to say that this was wonderful. How do I reconcile these thoughts? It’s important to keep in mind that one of the main effects of active learning methods is student engagement — not just getting students interested in the topic, but getting them to retrospectively and introspectively think about what they’re learning and whether they understand it. However, one of the reasons adopting active learning methods when teaching seems, at first, odd is that many of us who have succeeded as academics are the sorts of people who independently do this sort of thinking. I watch the lectures; I take notes; I re-examine the notes and think about the logic of the material; I reconstruct the principles underlying homework questions as I work on them; etc. (Normally I might also think of questions to ask, but that’s not really feasible here.) With this approach, a “straight” lecture is not only fine, but it’s extremely efficient.

3 Machine Learning and Classification

So what exactly is machine learning? In essence, it’s the development of predictive models by a computer (“machine”) based on characteristics of data (“learning”), in contrast to models that exist as some fixed set of instructions. Very often, this is applied to problems of classifying data into categories; in machine learning, the goal is to not have an a priori model of what defines the category boundaries, but rather for the algorithm to itself learn, from the data, what classifiers are effective.

Here’s an example: suppose you had a bunch of pears and bananas and wanted to identify which is which from images. Your program can recognize the shape and color of a fruit. Imagine that for each of many fruits you were to plot the fruit’s “yellowness” (perhaps red/green in an RGB color space) and some measure of how symmetric it is about its long axis. In general, bananas are yellower and less symmetric than pears, so you’d expect a plot like this:

There’s a lot of variation in both sets of points. Some pears are yellower than others, and while nearly all bananas are curved, some views of them will appear more symmetric than others. Nonetheless, we can easily imagine drawing a curve on the plot that does a good job of separating the pears from the bananas, so that if we encounter a new fruit, we can see where in the partitioned landscape its symmetry and yellowness lie and decide from that what fruit it is.

The goal in machine learning is to have the computer, given “training” data of known pears and bananas, determine where this boundary should be. This is quite different from the usual approach one takes in analyzing data, which is more akin to figuring out ahead of time some model of banana and pear morphologies and appearances, and evaluating the observed image characteristics relative to this model. (To give a less convoluted example: imagine identifying circles in images by applying what one knows about geometry, for example that all points on the circle are equidistant from the center. A purely machine learning approach, in contrast, would consist of training an algorithm with lots of examples of circles and not-circles, and letting the boundary between these groups form wherever it forms.) Roughly speaking, the non-machine-learning approach is “better” if it’s feasible: one has an actual model for one’s data. However, there are countless cases for which it’s too complicated or too difficult to form a mathematical model of the data of interest, but for which abundant examples on which to “train” exist, and that’s where machine learning can shine.
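
To make the "training" idea concrete, here is a deliberately minimal learned classifier in Python: a nearest-centroid rule applied to invented (yellowness, symmetry) numbers. Nothing about bananas or pears is hard-coded; the decision boundary comes entirely from the training examples.

```python
def nearest_centroid(train, labels):
    # "Learn" one centroid per class from the training data; classify a new
    # point by which class centroid it is closer to.
    groups = {}
    for point, label in zip(train, labels):
        groups.setdefault(label, []).append(point)
    centroids = {label: tuple(sum(c) / len(pts) for c in zip(*pts))
                 for label, pts in groups.items()}

    def classify(p):
        def d2(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return min(centroids, key=lambda lab: d2(p, centroids[lab]))
    return classify

# Invented (yellowness, symmetry) measurements; bananas are yellower
# and less symmetric than pears:
fruits = [(0.90, 0.20), (0.80, 0.30), (0.85, 0.25),   # bananas
          (0.40, 0.80), (0.50, 0.90), (0.45, 0.85)]   # pears
names = ["banana"] * 3 + ["pear"] * 3
classify = nearest_centroid(fruits, names)
print(classify((0.88, 0.22)))  # -> banana
print(classify((0.42, 0.84)))  # -> pear
```

A real support vector machine draws a far better boundary than this, but the workflow (training examples in, classifier out, no hand-set thresholds) is the same.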

Even in the contrived example above of bananas and pears, we can see from the graph that it’s not actually obvious how to draw the separator between the two fruits. Do we draw a line, or a curve? A gentle curve, which leaves some data points stranded in the wrong territory, or a convoluted curve, which gets the training data exactly “right,” but seems disturbingly complex for what should be a simple classification? Considering these dilemmas is central to the practice of machine learning. Since this post is getting long, I’ll save that for Part II, in which I’ll also show a “real” example of applying machine learning to a task of object classification in images. I’ll also try to describe why I’ve found exploring this topic worthwhile — beyond its practical utility, it provides a nice framework for thinking about data and models.

Today’s top-of-the-post illustration is Mega Man, by S. (age 6). Mega Man is a robot who looks like a boy. In the innumerable comic books my kids have read about him, I don’t think machine learning algorithms are discussed. I could be mistaken, however.

To be continued… [Part II]