Worm Positioning Systems

It’s great to find a scientific paper that reports something really new — something interesting whose very existence as a substance, behavior, or phenomenon was unexpected. It’s doubly great if the paper itself is clear, thorough, and convincing. And it’s almost too much to ask, in addition, for the topic to be biophysical — something that illustrates the connections between living organisms and physical forces or mechanisms. All of these stars aligned a few weeks ago when I came across the following paper in eLife:

Magnetosensitive neurons mediate geomagnetic orientation in Caenorhabditis elegans (http://dx.doi.org/10.7554/eLife.07493)

C. elegans is a soil-dwelling roundworm. It’s an immensely popular model organism and has been intensely studied for decades. It was the first multicellular organism to have its genome sequenced, the connectivity between each of its few hundred neurons is known, and the pattern of divisions that give rise to each cell in its body has been thoroughly mapped. One would think that every aspect of its sensory capabilities would have been noticed and remarked upon by now. But no: no one had looked at whether it can navigate using magnetic fields. The authors of the paper noted that this might be worth investigating: in real life, these worms burrow through soil, and might need a way to distinguish up from down, which the local magnetic field could provide. Moreover, the mechanisms by which other animals (various birds, sea turtles, and others) sense magnetic fields remain quite mysterious, so finding this ability in an experimentally tractable creature would be useful.

The authors constructed simple, elegant experiments monitoring the direction C. elegans travel under various applied magnetic fields. (I recommend reading the paper itself, but here’s a summary of the findings.) They discovered that magnetic fields strongly guide the worms, and, more strikingly, that the worms do not travel along the field vector, but rather at an angle to the field, one that corresponds to the angle between the local magnetic field direction and the vertical in Bristol, England, where the organisms are from. C. elegans from Australia traveled preferentially at a nearly opposite angle to the field from their British counterparts, corresponding to the nearly opposite field angle down under. Specimens from all over the globe migrate at angles in accord with their local field, strongly implying that they can use the magnetic field to distinguish up from down.

By themselves, these experiments and measurements would be wonderful, indicating a previously unknown “sense” in these animals. The authors went even further, however: by screening various mutants, they were able to identify particular sensory neurons that are necessary for the magnetic field sensing, and even to visualize (with calcium imaging) these neurons “lighting up” when magnetic fields were applied!

What exactly these neurons are (physically) doing is a mystery. Apparently, they have lots of rod-like villi at their ends, and one might imagine that subtle motions or deflections induced by magnetic fields trigger the activation of membrane channels, rather similarly to the mechanism behind hearing. What the motions and deflections are, and what materials transduce them, would be fascinating to uncover.

In all, it’s a wonderful paper, and one of my favorites that I’ve read in the past year. The only sour note struck is not in the article itself, but in the university (UT Austin) press release about the work, which notes that it “might open up the possibility of manipulating magnetic fields to protect agricultural crops from harmful pests” — apparently even the most elegant and insightful science needs a ridiculous comment about “practical” applications.

Bacteria and Boltzmann

(A more technical post than most, in which we look at bacterial population densities, and watch a movie.)

This term I taught a graduate biophysics course, about which I’ll write a recap later. Bacteria came up a lot in the course, both because they’re important and because they offer many nice illustrations of how physical principles of diffusion, hydrodynamics, and stochasticity govern how life works. A benefit of teaching a graduate course is that I learn things, too! I’ll describe here something about chemotaxis that I wasn’t previously aware of.

Motile bacteria will migrate towards higher concentrations of attractants (e.g. nutrients). Bacteria like E. coli move in a sequence of roughly straight-line, constant velocity “runs”…

… separated by “tumbles” at which they randomize their direction:

If the bacterium senses that the nutrient concentration is increasing with time, the probability per unit time of tumbling decreases (and so, on average, the run length increases). In other words, if the bacterium is moving in a “good” direction, it’s more likely to continue in that direction. All this is well known, and nicely described, for example, in Howard Berg’s classic book, which I first read at least a decade ago (2000?).

The new (to me) part: Given a bunch of bacteria, it’s clear that the process described above will increase the bacterial concentration at regions of high attractant concentration. But how much? If I double the attractant concentration (c) in one area compared to another, does the bacterial population density (P) double? What is the relationship between c and P?

Remarkably, in retrospect, I had never thought about this question, and had never stumbled upon a discussion of it, until seeing it as an exercise in Bill Bialek’s recent book. As Bialek sketches, one can write a simple model of this process in which we state that the tumble rate (r) is a linear function of ∂c/∂t, the perceived rate of change of the concentration. Just given this, we can derive, analytically, the form of the steady-state bacterial concentration P(c(x)). (We consider just one dimension, x; I expect that the answer easily generalizes to higher dimensions.) I elaborated slightly and assigned this as a homework problem for my class. (If you’re interested, it’s #5 of this: Homework5; the original by Bialek is #49 of this: fromBialek_page154.)

The answer is amazing:

P(x) = (1/Z) exp(−β c(x)), where β = ∂r/∂ċ and Z is some normalization. In other words, the bacterial density follows a Boltzmann distribution, analogous to P = exp(−Energy/Temperature), with the local concentration playing the role of energy. The analog of the inverse temperature in statistical mechanics is β, the proportionality between the tumble rate and ċ, the perceived rate of change of the concentration.
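For the curious, here is a sketch of the derivation, in my own notation and in the spirit of Bialek’s exercise: split the population into right- and left-movers at speed v, each tumbling at the rate set by the concentration change it perceives.

```latex
% Tumble rates perceived by right- and left-movers, with \beta = \partial r/\partial \dot c:
r_\pm = r_0 \pm \beta v\, c'(x)
% At steady state the net flux J = v(P_+ - P_-) vanishes, so P_+ = P_- = P/2.
% The steady-state equation for the right-movers (half of each tumble
% reverses direction) then reads
0 = -\frac{v}{2}\frac{dP}{dx} - \frac{P}{4}\left(r_+ - r_-\right)
  = -\frac{v}{2}\frac{dP}{dx} - \frac{P}{2}\,\beta v\, c'(x),
% which integrates to the Boltzmann-like steady state
\frac{1}{P}\frac{dP}{dx} = -\beta\, c'(x)
\quad\Longrightarrow\quad
P(x) = \frac{1}{Z}\, e^{-\beta c(x)} .
```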

It’s such a nice, succinct result! If, say, the difference in βc between two regions is 2, the bacterial density differs between them by a factor of e^2 ≈ 7.4.

In retrospect, I should perhaps not have been surprised, since:

  • the concentration gradient changes the likelihood that a right-moving bacterium becomes a left-moving one, or vice versa, and so is reminiscent of forward and backward reaction rates in chemical reactions, the equilibrium concentrations of which are given by a Boltzmann expression dependent on the free energy.
  • it seems that every distribution in nature is exponential, Gaussian, or Poisson, so we had a 1/3 chance of finding a Boltzmann distribution!

One can derive the above result analytically with some careful thinking and a bit of time. One can also simulate this chemotaxis model and validate the result with less thinking and more time. I didn’t assign the simulation to my students (though they had several other programming assignments), but I’ve done it myself for kicks.
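For anyone who would rather not run MATLAB, here is a minimal Python sketch of such a simulation, with my own illustrative parameter choices (not the values from the linked MATLAB code). Bacteria move at ±v in one dimension and tumble with probability r·dt per time step, where r = r0 + β·ċ is clipped at zero; with β < 0 (an attractant) they accumulate where the concentration is high:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (my choices, not those of the MATLAB code)
v, r0, dt = 1.0, 5.0, 0.01     # run speed, baseline tumble rate, time step
beta = -5.0                    # beta = dr/d(c-dot); negative for an attractant
L = 3.0                        # reflecting walls at +/- L
n_bact, n_steps = 200, 50_000

def c(x):
    """Attractant concentration: a downward parabola peaked at x = 0."""
    return -0.5 * x**2

def dcdx(x):
    return -x

x = rng.uniform(-L, L, n_bact)            # initial positions
d = rng.choice([-1.0, 1.0], n_bact)       # run directions

for _ in range(n_steps):
    cdot = d * v * dcdx(x)                # perceived dc/dt along each run
    rate = np.maximum(r0 + beta * cdot, 0.0)   # tumble rate, clipped at zero
    tumble = rng.random(n_bact) < rate * dt
    d[tumble] = rng.choice([-1.0, 1.0], tumble.sum())
    x += d * v * dt
    out = np.abs(x) > L                   # reflect at the walls
    x[out] = np.sign(x[out]) * 2 * L - x[out]
    d[out] *= -1

# Theory: steady-state P(x) proportional to exp(-beta * c(x)) = exp(-2.5 x^2),
# a Gaussian of standard deviation ~0.45 (valid where |beta * cdot| << r0).
print("standard deviation of positions:", x.std())
```

Histogramming the final positions (or averaging over several snapshots) and comparing with exp(−β c(x)) reproduces the Boltzmann-like result, within the noise of a finite population.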

Consider, for example, this parabolic concentration profile:

Let’s watch 200 bacteria moving (in 1D), responding to this c(x) as in the Bialek model, i.e. with a tumble rate that is linearly dependent on the perceived rate of change of c:

The movie is fun, but it’s clearer to look at the histogram of bacterial number, i.e. P(x), which the theory predicts should be a Gaussian. It is, with the proper width and shape predicted by the model. The plot on the left shows the final (steady-state) P(x) together with exp(−β c(x)), where β = ∂r/∂ċ, and the plot on the right shows the evolution of P(x) with time.

We can make weirder concentration profiles, and again find that the simulated P(x) agrees with the analytic P(x) = (1/Z) exp(−β c(x)):

My MATLAB code for the simulation, which contains descriptions of the parameters, is here. (Note: there’s considerable room for improvement — take this as a sketch!)

What’s the moral of this story? Yet again, it shows us that random processes can be described by simple rules. It also gives us a simple way to potentially link observed bacterial spatial distributions with underlying chemical gradients, which we’re presently applying in the lab to some of our bacterial imaging experiments.

And the funders said, “Let there be funds,” and there were funds

The University of Oregon’s Research Development Services (RDS) sends a weekly email listing funding opportunities — grants, fellowships, and awards that we might be interested in. It’s a good thing to do, and each week there’s a nice variety of programs from many federal and private funding agencies. RDS might be a bit too permissive in its assessment of funding sources, though, since this week’s email includes:

Yes, the “Creation Research Society” is just what you think it is.

Perhaps I should be appalled, or perhaps I should broaden my funding horizons. After all, how better to test parables of feeding the multitudes than with my lab’s three-dimensional imaging of lots of zebrafish? And can’t the formation of membranes from lipid molecules, which we study extensively, be seen as driven by divine forces rather than secular “self-assembly”? The possibilities are endless.

How much is that physics major in the window?

A friend pointed me to a recent study looking at the earnings (salaries) of people with degrees in different majors — for example, what’s the average salary of a physics major? Or a history major? The authors of the study are economists, and in my opinion they put forth in their exposition an overly pecuniary view of the factors that go into choosing a major. Regardless, the data are interesting. The web site is here, and includes a link to a summary and a 214-page full report. In case you’re curious, the median salary of 25–59-year-olds with undergraduate degrees in physics is $81,000, ranking it #15 out of the 137 majors listed. I’m surprised it’s so high. Nine of the top ten majors are various flavors of engineering, with the other (#2) being pharmacy. Chemistry is #50 ($64k) and Biology is #74 ($56k). Depressingly, at the bottom of the list is Early Childhood Education ($39k) — something of immense importance, but that doesn’t pay well.

In addition to salary, the report looks at the popularity of various majors. I was curious whether these two are correlated — whether, for example, there’s a “supply curve” (thinking like an economist) such that the majors with abundant students are those that pay less. (This would assume that a lot of other factors are equal across majors.) I can’t extract the number of people with each major from the report — at least not without a lot of work, which won’t happen. However, I can easily extract the rankings for each (salary and popularity) and plot them:

(The colors indicate categories, detailed in the legend of the next graph.)

As you can see, it’s a cloud! There are popular, lucrative majors; unpopular, low-paying majors; and every other combination.

The 137 majors in the study are grouped into categories. If we plot the median salary (the actual amount, not the ranking) versus popularity (percentage of majors), we get the following:

Again, there’s no trend evident. Stretching the horizontal axis with a semilogarithmic scale gives us a nice triangle of points (with one outlier):

Is there a lesson here, other than that there are a lot of business majors? I don’t know. I’ll leave that to you, dear reader.

Slammed! (Why do bacteria care about physics?)

I was a participant (contestant?) in our local “Physics Slam” two weeks ago, in which half a dozen physics faculty gave 10-minute talks to the general public, with a “winner” chosen by about five judges selected from the audience. About 500 people came, filling an auditorium:

My talk, Why do bacteria care about physics?, was mostly about the surreal physics of small-scale fluid flows, which microbes have to deal with and which mean that bacteria can’t swim by, for example, waving appendages back and forth — a strategy that works well for fish, whales, and other large things. I did a live demonstration of the classic illustration of reversible flows using a big vat of corn syrup — this is one of my favorite demonstrations, and the crowd loved it, spontaneously cheering at the end. The whole talk went remarkably well — you can watch for yourself at http://media.uoregon.edu/channel/2015/04/09/2015-physics-slam/. My part is at 1:05 or so. I waved my arms a lot. Eric Corwin’s talk, on sand grains and sphere packing, is particularly good (0:38). (If you’ve never seen reversible flows, also check out this YouTube video by fluid dynamics giant G. I. Taylor.)

My graduate biophysics class has also been exploring the microscopic-scale physics of diffusion and flow. This week, we’ll get to the “low Reynolds number” issues reflected in the Physics Slam talk. Last week, we explored, among other things, the very non-intuitive ways in which diffusion-to-capture works. For example: Imagine a bacterium covered with “sticky” patches (receptors for nutrients, for example). The nutrients dissolved in the bacterium’s surroundings diffuse and, by chance, hit the patches, where they’re absorbed and “eaten.” Each patch is small — say, 1 nm in radius, like a typical protein — compared to the roughly 1 micron radius of the bacterium. The bacterium could cover itself entirely with absorbing patches and maximize its food uptake, but it wouldn’t have any surface left for other tasks — motility, secretion, sensing, etc. — so this would not be a good strategy. We, or the microbe, can ask: What fraction of the surface needs to be covered in absorbing patches for the total food uptake to be half as large as it would be if the entire surface were “sticky”? Naively, one might expect the answer to be 0.5 — half the surface should give half the food uptake. This is, however, totally wrong. Remarkably, one needs only about 0.3% of the surface to be “sticky” to provide 50% efficiency for diffusive food capture.

A graph of the capture rate versus the area fraction covered by “sticky” patches looks like this (green curve):

or, more clearly, on a semilog scale:

This remarkable result follows from the properties of diffusion. A simple derivation can be found in Howard Berg’s classic “Random Walks in Biology.” We can get a rough intuitive sense of how it arises by realizing that diffusive flows are driven by gradients of concentration. If we halve the radius of an absorbing patch, its area drops by a factor of four. However, the average flow lines of the diffusing particles, which must end at the patches, are squeezed into a tighter space, increasing the concentration gradient and thereby giving a greater flow rate that partially counteracts the large drop in area.
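One can also check the shape of such curves against the classic Berg–Purcell expression: for a sphere of radius a covered by N small absorbing disks of radius s, the capture rate relative to a fully absorbing sphere is Ns/(Ns + πa). Here is a short numerical sketch; the parameter values are my own illustrative choices, and the exact percentage for half-maximal capture depends on the assumed patch and cell sizes:

```python
import numpy as np

# Berg–Purcell: the diffusive capture rate of a sphere of radius a covered
# by N small absorbing disks of radius s, relative to a fully absorbing
# sphere, is N*s / (N*s + pi*a).  Parameter values below are my own
# illustrative choices.
a = 1000.0   # cell radius in nm (~1 micron)
s = 1.0      # patch radius in nm (~ a typical protein)

def relative_capture(area_fraction):
    """Relative capture rate as a function of the fraction of the sphere's
    surface covered by absorbing patches."""
    n_patches = area_fraction * 4.0 * a**2 / s**2  # N = f * 4*pi*a^2 / (pi*s^2)
    return n_patches * s / (n_patches * s + np.pi * a)

# Half-maximal capture requires N*s = pi*a, i.e. an area fraction of
# pi*s/(4a): a small fraction of a percent, not the naive 50%.
f_half = np.pi * s / (4.0 * a)
print(f"area fraction for 50% capture: {f_half:.2e}")
print(f"relative capture at that fraction: {relative_capture(f_half):.3f}")
```

Plotting relative_capture against the area fraction on a semilog axis reproduces the steep rise at tiny coverage seen above.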

This is both amazing and, for bacteria, extremely useful. A handful of receptors is sufficient to efficiently capture molecules from the surroundings, so there’s plenty of room for many types of receptors for many types of molecules (various nutrients, attractants, repellents, etc.).

The physics of the microscopic world are endlessly fascinating. Returning to the Physics Slam: As five of us predicted with high certainty beforehand, our astronomer Scott Fisher won — he started off with an astronomically-modified version of the intro to Prince’s Let’s Go Crazy, complete with music. Plus there were photos of colorful space things. The rest of us didn’t stand a chance.

You should appreciate the infrequency of my blog posts

Today’s illustration doesn’t have anything to do with the topic below. I made it for a ten-minute talk I’ll give tomorrow at the local “Physics Slam.” You can see the program here. Short version: six physics faculty will have ten minutes each to explain something. The audience votes on their favorite presentation. Apparently, when it was last done a few years ago, several hundred people came. We’ll see what happens this time! My title:

Why do bacteria care about physics?

At some point, I should practice…

Now on to today’s topic:

Everyone agrees that it’s impossible to keep up with the ever-expanding scientific literature. An interesting recent paper* takes a look at this phenomenon, verifying that the number of papers published every year is, indeed, growing exponentially:

* “Attention decay in science,” Pietro Della Briotta Parolo et al., http://arxiv.org/pdf/1503.01881v1.pdf

The authors look at what this means for scientific “memory.” In general, the rate at which a paper is cited by later papers decays over time (after an initial peak), as it is forgotten or as it gives rise to other works that are cited instead. One might guess that growth in the publication rate correlates with a faster decay of citations — we spend less time with the past as we’re swamped by new stuff. This is indeed what Parolo et al. find: a citation decay rate that has steadily grown over decades. This is unfortunate: by not considering papers of the more distant past, we risk needlessly re-discovering insights, and we disconnect ourselves from our fields’ pioneering perspectives.

Returning to the overall number of papers: I wonder if this terrifying growth is driven primarily by an increase in the number of scientists or by an increase in papers written per person. I suspect the former. Even within the US, there are a lot more scientists than there used to be [e.g. this graph]. In the developing world this increase is far more dramatic (see e.g. here), as (presumably) it should be.

Unfortunately, I can’t find any data on the total number of scientists worldwide — at least not with just a few minutes of searching — or even the total number of Ph.D.’s awarded each year.

Looking around for any data that might help illuminate trends of population and paper production, I stumbled upon historical data for the American Physical Society (APS), namely the number of members in each year since 1905 (http://www.aps.org/membership/statistics/upload/historical-counts-14.pdf). It’s not hard to tabulate the total number of papers published each year in the Physical Review journals — the publications of the APS. Looking at how each of these changes with time might give a rough sense of whether one tracks the other. Of course, there are a lot of problems with interpreting any correlation between these two things: APS members (like me) publish in all sorts of journals, not just APS ones; non-APS members publish in APS journals; etc. Still, let’s see what these two look like:

Just considering APS journals alone, the number of papers published each year is 10 times what it was a few decades ago! Within the microcosm of the APS, the number of papers has been growing at a far faster rate than the membership.

What does all this mean? I don’t really know. It’s impossible to do something about the general complaint that there are too many papers to read unless we have some deeper understanding of why we’re in this state. Lacking that, I suppose we’re just stuck reading papers as best we can, or feeling guilty for not reading…

T-minus 9 days for my graduate biophysics course

Next term, I’ll be teaching a brand-new graduate biophysics course. (It’s my first time teaching a graduate course in my eight years as a professor!) I’ve spent quite a while thinking about what should be in it and how the course should be structured. Here, I’ll just note my list of topics (below, with a few comments) and provide a link to the syllabus (here). Hopefully in weeks to come I’ll comment on how the course is going.


Introduction; Physics, statistics, and sight

What are the fundamental limits on vision, and how close does biology come to reaching them? (A brief look.)

Components of biological systems

What are the components of biological systems? What are the length, time, and energy scales that we’ll care about? How can we organize a large list of “parts?”

Probability and heredity (a quick look)

We’ll review concepts in probability and statistical mechanics. We’ll discuss a classic example of how a quantitative understanding of probability revealed how inheritance and mutation are related.

Random Walks

We can make sense of a remarkable array of biophysical processes, from the diffusion of molecules to the swimming strategies of bacteria to the conformations of biomolecules, by understanding the properties of random walks.

Life at Low Reynolds Number

We’ll figure out why bacteria swim, and why they don’t swim like whales.

Entropy, Energy, and Electrostatics

We’ll see how entropy governs electrostatics in water, the “melting” of DNA, phase transitions in membranes, and more.

Mechanics in the Cell

We’ll look more at the mechanical properties of DNA, membranes, and other cellular components, and also learn how we can measure them.

Circuits in the Cell

Cells sense their environment and perform computations using the data they collect. How can cells build switches, memory elements, and oscillators? What physical principles govern these circuits?

Multicellular organization and pattern formation

How does a collection of cells (in a developing embryo, for example) organize itself into a robust three-dimensional structure? We’re beginning to understand how multicellular organisms harness small-scale physical processes, such as diffusion, and large-scale processes, such as folding and buckling, to generate form. We’ll take a brief look at this.

Cool things everyone should be aware of

We live in an age in which we can shine a laser at particular neurons in a live animal to stimulate them, paste genes into a wide array of organisms, and sequence a genome given only a single cell. It would be tragic to be ignorant of these almost magical things, and they contain some nice physics as well!


As you’ve probably concluded, this is too much for a ten-week course! I will cull things as we go along, based on student input. I definitely want to spend some time on biological circuits, though, which I’m increasingly interested in. I also want to dip into the final topic of “cool things” — I find it remarkable and sad that so many physicists are unaware of fantastic developments like optogenetics, CRISPR, and high-throughput sequencing. Students: prepare to be amazed.

My sea urchin illustration above has nothing to do with the course, but if you’d like a puzzle: figure out what’s severely wrong with this picture.