# On fungi and fabrics

A recent article in Physical Review Letters reports on “self-propelled droplet removal” from fibers — the authors designed hydrophobic fibers with the property that when water droplets grow and coalesce on them, the energy released by the coalescence flings the drops off the fibers. The underlying phenomenon is one we’ve all seen: two water droplets, on a window for example, will rapidly merge into one when they come into contact since the one large droplet has less surface area, and therefore less interfacial energy, than two small droplets. The merger is very fast, driven by the large amount of energy associated with an air-water interface being transformed into the kinetic energy of the water. Here, this kinetic energy is sufficient to fling the drop away.

The paper is neat. As noted in the synopsis in Physics, the effect of droplet removal has been seen before in other material contexts, such as planar surfaces. Zhang and colleagues show with experiments and simulations that the high curvature of the fibers lets droplets fling themselves much more easily than they can on flat surfaces.

The reason I’m writing about this, though, is that there’s a very nice biophysical connection that isn’t mentioned in the paper. A practical use for surface-tension-mediated launching of droplets has been around for much, much longer than any man-made technology: it’s a mechanism by which many fungi scatter their spores.

In these “basidiomycete” fungi, fluid accumulates at the base of the hydrophobic fungal spores. When the growing droplet reaches the more hydrophilic “sterigma,” it suddenly wets it; this flings the droplet off, and the spore goes along for the ride. (It’s somewhat surprising that the droplet doesn’t de-wet the spore, I suppose.) There’s a discussion of this, along with impressive images from high-speed video, in a 2005 paper from Anne Pringle and colleagues, from whom I learned of this remarkable phenomenon. (See the references there for citations of earlier papers, especially work from JCR Turner in the 1990s on the physics of how ballistospores work, and decades-old papers on fungal behaviors. There’s also more recent work on this, e.g. here, which looks neat, but which I haven’t read.) The droplets, by the way, are small: a few microns in radius.

The fungi launch their spores at a few meters per second. Can we make sense of this speed? It’s a great candidate for dimensional analysis. (I’ll pause while you think about what the relevant variables that determine the velocity are likely to be…)

We’d expect that the launch speed of a droplet depends on its radius, the density of the liquid, and the interfacial energy, or surface tension. (Surface tension has dimensions of Force / Length.) There’s only one combination of these variables that gives dimensions of velocity; I’ll leave it to the reader to work it out, since dimensional analysis is wonderfully entertaining. (If you’re rusty on dimensional analysis, see here.) You should find that the speed is greater for smaller drops (as you might have expected). If you imagine, as is usually the case, that dimensionless constants are roughly 1, and use “typical” numbers of 1000 kg/m^3 for the density of water, 0.07 N/m for the surface tension of an air-water interface, and 10 microns for the drop radius, you should estimate a speed on the order of 1 meter per second — perfect!
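If you’d like to check the arithmetic, here’s a two-line computation. (Spoiler warning: the comment gives away the answer to the dimensional-analysis puzzle, so skip this if you want to work it out yourself.)

```python
import math

# Spoiler: the unique combination of surface tension (gamma), density (rho),
# and drop radius (R) with units of velocity is sqrt(gamma / (rho * R)).
rho = 1000.0    # density of water, kg/m^3
gamma = 0.07    # air-water surface tension, N/m
R = 10e-6       # drop radius, m (10 microns)

v = math.sqrt(gamma / (rho * R))
print(round(v, 1))   # prints 2.6 (m/s): order 1 meter per second, as claimed
```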

# In the future, my reviews will consist solely of one carefully picked emoji

There’s an interesting question about peer-review of journal articles that I’ve never seen addressed: How long does it take to review a paper? I don’t mean the three weeks or so between getting a request and submitting a review, but rather the time spent actually reviewing. In other words, how many hours does a reviewer spend reviewing the paper and not doing anything else? For myself I would guess that this is 2-3 hours for a typical paper, which includes reading and re-reading as well as writing the actual assessment. (Some papers have been considerably more time-consuming than this; few have been less.)

The question was triggered by a comment from Andrew Gelman (who writes a very good statistics blog), who states that his reviews take 15 minutes each. (!) My reviews are certainly longer than most people’s, but I find it inconceivable that I could do anything meaningful in 15 minutes. The handful of people I’ve talked to report a time of a few hours per paper. I would think that journals might have a good estimate of this, from surveys of reviewers perhaps, since it would be useful for them to know how much labor they are asking their reviewers to provide (for free). However: I asked a journal editor friend of mine, who replied that they haven’t collected any data on reviewers’ time per review! (I don’t know about other journals.)

Journals complain (e.g. here) that it’s hard to find willing reviewers. Reviewers complain that they get too many requests to review papers. Authors complain that reviewers are slow and capricious. Everyone complains that peer review is “broken.” Perhaps a better tabulation of what time and effort peer review requires would help address all this!

If you, dear reader, would like to comment on:

(i) how long it takes you to review a paper

(ii) how many requests to review manuscripts you get

(iii) what fraction of requests to review you accept

(iv) what field you’re in

that would be great! I’ll see if any conclusions emerge… Also, of course, it would be great to know if someone else has already done this, with the outcome available to read.

(Today’s illustration: from a card the kids and I made a few days ago.)

# On second thought, don’t ask worms for directions

In my last post, I wrote about a remarkable recent paper reporting that C. elegans, the well-studied nematode worm, can sense magnetic fields. In a series of elegant experiments, researchers at UT Austin showed that C. elegans moves at a particular preferred angle to an applied field. Moreover, that angle matches the angle between the Earth’s magnetic field and the vertical at the place the worms are from, suggesting that the worms can use the field to navigate “up” or “down.” But:

My colleague Spencer Chang cleverly realized that there’s a puzzle here: orienting at some angle θ relative to the magnetic field will just point the worm somewhere along a cone centered on the field direction (see illustration, left). This cone touches the vertical, but only at one particular “azimuthal” angle. If we want to move along the vertical, knowing θ is not enough — the worm would also need some information about the azimuthal angle, and it’s hard to imagine what that information could be.

I’m somewhat embarrassed that I didn’t realize this when reading the paper. (I imagined this cone then, but blithely didn’t think more about it, assuming that somehow the cone “averages out” in the worm’s search. This is wrong.)

Thinking further, it’s even worse than it seems: if I calculate the average of the angle θ’ between a vector on the cone and the vertical (see figure, left), it’s greater than θ! (That is, <cos(θ’)> = cos^2(θ), so <θ’> is greater than θ, where <> indicates an average over the azimuthal angle. I’ll leave it to the reader to check my math.) Therefore moving randomly on a cone of angle θ to the field is a worse strategy than simply moving along the field direction, if one wants to be close to the vertical.
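Both claims are easy to check numerically. Writing the cone geometry out: if the field makes angle θ with the vertical, a direction at angle θ to the field and azimuth φ satisfies cos(θ’) = cos²(θ) + sin²(θ)·cos(φ). A quick sketch of the check, with θ = 50° as an arbitrary illustrative value:

```python
import math

theta = math.radians(50.0)  # illustrative cone half-angle; any 0 < theta < 90 works

# The field is at angle theta to the vertical, so the cone of directions at
# angle theta to the field touches the vertical at azimuth phi = 0. The dot
# product of a cone direction with the vertical gives
#   cos(theta') = cos^2(theta) + sin^2(theta) * cos(phi).
n = 200000
cos_tp = [max(-1.0, min(1.0,  # clamp to acos's domain against float roundoff
          math.cos(theta) ** 2 + math.sin(theta) ** 2 * math.cos(2 * math.pi * k / n)))
          for k in range(n)]

avg_cos = sum(cos_tp) / n                          # azimuthal average of cos(theta')
avg_angle = sum(math.acos(c) for c in cos_tp) / n  # azimuthal average of theta'

print(abs(avg_cos - math.cos(theta) ** 2) < 1e-9)  # <cos(theta')> = cos^2(theta)
print(avg_angle > theta)                           # mean angle to vertical exceeds theta
```

Both lines print True: the azimuthal average of cos(θ’) is cos²(θ), and the average angle to the vertical is indeed larger than θ.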

It seems like one of the following must be true of the worms’ field-assisted navigation:

1. The worms must have a (separate) mechanism for determining the azimuthal angle. (It’s hard to imagine this.)
2. The worms are really bad at orienting vertically. (Notably, they are moving one- or two-dimensionally in the experiments, so actual 3D vertical orientation wasn’t tested.)
3. The worms are not actually trying to orient vertically, so it’s fine that they do a bad job of it.
4. The worms have some additional mechanism for sensing up and down, perhaps one that (also) isn’t very accurate, and this in conjunction with the magnetic field sensing allows them to orient. One might suspect gravity, but the authors of this paper show that gravitational sensing seems to be absent.

In search of insights, I emailed the corresponding author, Jon Pierce-Shimomura, who very nicely wrote back. He’s certainly aware of how puzzling the worms’ behavior is. Not surprisingly, he advocates for #4 as being likely — I’d pick it also — and suggests that pressure or humidity might give other cues. Mysteries remain, but they’re amenable to further clever experiments, many of which are underway. Obviously, it would be great to watch the worms’ motion in a fully 3D environment, in which their paths, the field direction, and the “cone” of angles between the vertical and the field could all be known, to infer what their strategies for navigation are. This need not even be done live — I imagine that if one could map the “tunnels” the worms have dug through a gel, one could infer their orientational tendencies. Who knew that a creature with 1000 cells could offer such puzzles?

# Worm Positioning Systems

It’s great to find a scientific paper that reports something really new — something interesting whose very existence as a substance or behavior or phenomenon was unexpected. It’s doubly great if the paper itself is clear, thorough, and convincing. And it’s almost too much to ask, in addition, for the topic to be biophysical — something that illustrates the connections between living organisms and physical forces or mechanisms. All of these stars aligned a few weeks ago when I came across the following paper in eLife:

C. elegans is a soil-dwelling roundworm. It’s an immensely popular model organism and has been intensely studied for decades. It was the first multicellular organism to have its genome sequenced, the connectivity of each of its few hundred neurons is known, and the pattern of divisions that gives rise to each cell in its body has been thoroughly mapped. One would think that every aspect of its sensory capabilities would have been noticed and remarked upon by now. But no: no one had looked at whether it can navigate using magnetic fields. The authors of the paper noted that this might be worth investigating: in real life, these worms burrow through soil, and might need a way to distinguish up from down that the local magnetic field could provide. Moreover, the mechanisms by which other animals (various birds, sea turtles, and others) sense magnetic fields remain quite mysterious, so finding this ability in an experimentally tractable creature would be useful.

The authors constructed simple, elegant experiments monitoring the direction C. elegans travel under various applied magnetic fields. (I recommend reading the paper itself, but here’s a summary of the findings.) They discovered that magnetic fields strongly guide the worms, and more strikingly, that the worms do not travel along the field vector, but rather at an angle to the field that corresponds to the angle between the local magnetic field direction and the vertical in Bristol, England, where the organisms are from. C. elegans from Australia traveled preferentially at a nearly opposite angle to the field as their British counterparts, corresponding to the nearly opposite field angle down under. Specimens from all over the globe migrate at angles in accord with their local field, strongly implying that they can use the magnetic field to distinguish up and down.

In themselves, these experiments and measurements would be wonderful, indicating a previously unknown “sense” in these animals. The authors went even further, however: by screening various mutants, they were able to identify particular sensory neurons that are necessary for the magnetic field sensing, and even to visualize (with calcium imaging) these neurons “lighting up” when magnetic fields were applied!

What exactly these neurons are (physically) doing is a mystery. Apparently, they have lots of rod-like villi at their ends, and one might imagine that subtle motions or deflections induced by magnetic fields trigger the activation of membrane channels, rather like the mechanism behind hearing. What the motions and deflections are, and what materials transduce them, would be fascinating to uncover.

In all, it’s a wonderful paper, and one of my favorites that I’ve read in the past year. The only sour note struck is not in the article itself, but in the university (UT Austin) press release about the work, which notes that it “might open up the possibility of manipulating magnetic fields to protect agricultural crops from harmful pests” — apparently even the most elegant and insightful science needs a ridiculous comment about “practical” applications.

Note added: See the next post for more on this topic!

# Bacteria and Boltzmann

(A more technical post than most, in which we look at bacterial population densities, and watch a movie.)

This term I taught a graduate biophysics course, about which I’ll write a recap later. Bacteria came up a lot in the course, both because they’re important and because they offer many nice illustrations of how physical principles of diffusion, hydrodynamics, and stochasticity govern how life works. A benefit of teaching a graduate course is that I learn things, too! I’ll describe here something about chemotaxis that I wasn’t previously aware of.

Motile bacteria will migrate towards higher concentrations of attractants (e.g. nutrients). Bacteria like E. coli move in a sequence of roughly straight-line, constant velocity “runs”…

… separated by “tumbles” at which they randomize their direction:

If the bacterium senses that the nutrient concentration is increasing with time, the probability per unit time of tumbling decreases (and so, on average, the run length increases). In other words, if the bacterium is moving in a “good” direction, it’s more likely to continue in that direction. All this is well known, nicely described, for example, in Howard Berg’s classic book, which I first read at least a decade ago (2000?).

The new (to me) part: Given a bunch of bacteria, it’s clear that the process described above will increase the bacterial concentration at regions of high attractant concentration. But how much? If I double the attractant concentration (c) in one area compared to another, does the bacterial population density (P) double? What is the relationship between c and P?

Remarkably, in retrospect, I had never thought about this question, and had never stumbled upon a discussion of it, until seeing it as an exercise in Bill Bialek’s recent book. As Bialek sketches, one can write a simple model of this process in which we state that the tumble rate (r) is a linear function of ∂c/∂t, the perceived rate of change of the concentration. Just given this, we can derive, analytically, the form of the steady-state bacterial concentration P(c(x)). (We consider just one dimension, x; I expect that the answer easily generalizes to higher dimensions.) I elaborated slightly and assigned this as a homework problem for my class. (If you’re interested, it’s #5 of this: Homework5; the original by Bialek is #49 of this: fromBialek_page154.)

The result is

P(x) = exp(−β c(x)) / Z,

where Z is some normalization. In other words, the bacterial density follows a Boltzmann distribution, analogous to P = exp(−Energy/Temperature), with the local concentration playing the role of energy. The analog of the inverse temperature is β = ∂r/∂ċ, the proportionality between the tumble rate and ċ, the perceived rate of change of the concentration.

It’s such a nice, succinct result! Doubling the nutrient concentration, for example, multiplies the bacterial density by exp(−β c); if β c = −2, that’s a factor of e² ≈ 7.4.

In retrospect, I should perhaps not have been surprised, since:

• the concentration gradient changes the likelihood that a right-moving bacterium becomes a left-moving one, or vice versa, and so is reminiscent of forward and backward reaction rates in chemical reactions, the equilibrium concentrations of which are given by a Boltzmann expression dependent on the free energy.
• it seems that every distribution in nature is exponential, Gaussian, or Poisson, so we had a 1/3 chance of finding a Boltzmann distribution!

One can derive the above result analytically with some careful thinking and a bit of time. One can also simulate this chemotaxis model and validate the result with less thinking and more time. I didn’t assign the simulation to my students (though they had several other programming assignments), but I’ve done it myself for kicks.
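For the curious, here is a sketch of the analytic derivation — my reconstruction of the standard argument, writing R(x) and L(x) for the densities of right- and left-movers, with P = R + L. (A tumble picks a new direction at random, so it actually flips the direction with probability 1/2.)

```latex
% Right-/left-mover balance, run speed v, tumble rates r_\pm = r_0 + \beta(\pm v\,c'(x)):
\partial_t R = -v\,\partial_x R - \tfrac{1}{2}r_+ R + \tfrac{1}{2}r_- L , \qquad
\partial_t L = +v\,\partial_x L - \tfrac{1}{2}r_- L + \tfrac{1}{2}r_+ R .
% Steady state with zero net flux gives R = L = P/2; subtracting the equations then yields
0 = -v\,\partial_x P - \beta v\,c'(x)\,P
\quad\Longrightarrow\quad
\frac{P'}{P} = -\beta\,c'(x)
\quad\Longrightarrow\quad
P(x) = \frac{1}{Z}\,e^{-\beta c(x)} .
```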

Consider, for example, this parabolic concentration profile:

Let’s watch 200 bacteria moving (in 1D), responding to this c(x) as in the Bialek model, i.e. with a tumble rate that is linearly dependent on the perceived rate of change of c:

The movie is fun, but it’s clearer to look at the histogram of bacterial number, i.e. P(x), which the theory predicts should be a Gaussian. It is, with the width and shape predicted by the model. The plot on the left shows the final (steady-state) P(x) together with exp(−β c(x)), where β = ∂r/∂ċ, and the plot on the right shows the evolution of P(x) with time.

We can make weirder concentration profiles, and again find that the simulated P(x) agrees with the analytic P(x) = exp(−β c(x)):

My MATLAB code for the simulation, which contains descriptions of the parameters, is here. (Note: there’s considerable room for improvement — take this as a sketch!)
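For anyone without MATLAB, here’s a minimal pure-Python sketch of the same run-and-tumble model, with the parabolic c(x) used above. All parameter values here are illustrative choices of mine, not those of the posted code:

```python
import math
import random

random.seed(0)

# Illustrative parameters (not the values in the MATLAB code)
v, r0, beta = 1.0, 1.0, -2.0   # run speed; baseline tumble rate; beta = dr/d(c-dot) < 0
dt, t_total = 0.01, 60.0       # time step and total simulated time
n_bact, L = 200, 1.0           # number of bacteria; domain is [-L, L]

def dcdx(x):                   # gradient of the parabolic profile c(x) = 1 - (x/L)^2
    return -2.0 * x / L ** 2

xs = [random.uniform(-L, L) for _ in range(n_bact)]   # positions
ss = [random.choice((-1, 1)) for _ in range(n_bact)]  # run directions

for _ in range(int(t_total / dt)):
    for i in range(n_bact):
        cdot = v * ss[i] * dcdx(xs[i])       # perceived rate of change of c
        rate = max(r0 + beta * cdot, 0.0)    # linear response, clipped at zero
        if random.random() < rate * dt:      # tumble: pick a new random direction
            ss[i] = random.choice((-1, 1))
        xs[i] += v * ss[i] * dt
        if abs(xs[i]) > L:                   # reflecting walls
            xs[i] = 2 * math.copysign(L, xs[i]) - xs[i]
            ss[i] *= -1

# Theory: P(x) ~ exp(-beta * c(x)), i.e. enrichment where c is large (the center).
center = sum(1 for x in xs if abs(x) < 0.25 * L)
edges = sum(1 for x in xs if abs(x) > 0.75 * L)
print(center > edges)   # the population piles up at the concentration peak
```

Binning the final positions into a histogram and overlaying exp(−β c(x)) reproduces the Gaussian comparison described above.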

What’s the moral of this story? Yet again, it shows us that random processes can be described by simple rules. It also gives us a simple way to potentially link observed bacterial spatial distributions with underlying chemical gradients, which we’re presently applying in the lab to some of our bacterial imaging experiments.

# And the funders said, “Let there be funds,” and there were funds

The University of Oregon’s Research Development Services (RDS) sends a weekly email listing funding opportunities — grants, fellowships, and awards that we might be interested in. It’s a good thing to do, and each week there’s a nice variety of programs from many federal and private funding agencies. RDS might be a bit too permissive in its assessment of funding sources, though, since this week’s email includes:

Yes, the “Creation Research Society” is just what you think it is.

Perhaps I should be appalled, or perhaps I should broaden my funding horizons. After all, how better to test parables of feeding the multitudes than with my lab’s three-dimensional imaging of lots of zebrafish? And can’t the formation of membranes from lipid molecules, which we study extensively, be seen as driven by divine forces, rather than secular “self-assembly?” The possibilities are endless.

# How much is that physics major in the window?

A friend pointed me to a recent study looking at the earnings (salaries) of people with degrees in different majors: for example, what’s the average salary of a physics major? Or a history major? The authors of the study are economists, and in my opinion their exposition puts forth an overly pecuniary view of the factors that go into choosing a major. Regardless, the data are interesting. The web site is here, and includes links to a summary and a 214-page full report. In case you’re curious, the median salary of 25-59-year-olds with undergraduate degrees in physics is $81,000, ranking #15 out of the 137 majors listed. I’m surprised this is so high. Nine of the top ten majors are various flavors of engineering, with the other (#2) being pharmacy. Chemistry is #50 ($64k) and Biology is #74 ($56k). Depressingly, at the bottom of the list is Early Childhood Education ($39k): something of immense importance, but that doesn’t pay well.

In addition to salary, the report looks at the popularity of various majors. I was curious whether these two are correlated — whether, for example, there’s a “supply curve” (thinking like an economist) such that the majors for which students are abundant are those for which the pay is less. (This would assume that a lot of other factors are equal across majors.) I can’t extract the number of people with each major from the report — at least not unless I put a lot of work into this, which won’t happen. However, I can easily extract the rankings for each (salary and popularity) and can plot these:

(The colors indicate categories, detailed in the legend of the next graph.)

As you can see, it’s a cloud! There are popular, lucrative majors; unpopular, low-paying majors; and every other combination.
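One way to make “it’s a cloud” quantitative is a rank correlation, such as Spearman’s ρ computed directly from the two ranking lists. A sketch, using made-up ranks for five hypothetical majors (not the report’s data):

```python
def spearman_from_ranks(rank_a, rank_b):
    # Spearman correlation for two rank lists with no ties:
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = difference in ranks
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

# Hypothetical salary and popularity ranks for five majors (illustrative only):
salary_rank = [1, 2, 3, 4, 5]
popularity_rank = [3, 1, 5, 2, 4]
rho = spearman_from_ranks(salary_rank, popularity_rank)
print(round(rho, 3))   # prints 0.3; a true "cloud" would give rho near 0
```

Applied to the 137 salary and popularity rankings extracted from the report, a ρ near zero would confirm the visual impression of no trend.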

The 137 majors in the study are grouped into categories. If we plot the median salary (the actual amount, not the ranking) versus popularity (percentage of majors), we get the following:

Again, there’s no trend evident. Stretching the horizontal axis with a semilogarithmic scale gives us a nice triangle of points (with one outlier):

Is there a lesson here, other than that there are a lot of business majors? I don’t know. I’ll leave that to you, dear reader.