Seeing the smell of rotten eggs

I’m a bit behind in writing summaries of recently published papers from my group. Here’s one that’s a few months old — I’m spurred to write now since I just learned two days ago that it got onto the cover of JACS, the flagship journal of the American Chemical Society:

M. D. Hammers, M. J. Taormina, M. M. Cerda, L. A. Montoya, D. T. Seidenkranz, R. Parthasarathy, M. D. Pluth, “A Bright Fluorescent Probe for H2S Enables Analyte-Responsive, 3D Imaging in Live Zebrafish Using Light Sheet Fluorescence Microscopy.” J. Am. Chem. Soc. 137: 10216-10223 (2015), [link].

The cover image is the one on the left, above. (More on that in a moment.) The paper is primarily from the lab of my colleague, Mike Pluth, a remarkable organic chemist here at Oregon. Mike’s group devised a new fluorescent reporter of hydrogen sulfide (H2S) — i.e. it becomes fluorescent when it binds H2S. We’d probably all recognize H2S by its characteristic rotten egg smell. (Thankfully, my lab has never had enough of it around to notice!) There’s an increasing interest in detecting and studying hydrogen sulfide in living organisms, since it’s used by cells as a signaling molecule to regulate various physiological processes. It’s also produced by various bacterial species, and so could give insights into microbial activity.

Chemical reporters of H2S, however, have tended to be hard to use, highly toxic, or both. The Pluth lab’s new molecule is sensitive and looked like it would be amenable to use in live organisms. Since my lab does a lot of three-dimensional microscopy of larval zebrafish, we took on the task of imaging this reporter in vivo, seeing if we could detect its signal inside the larval gut. “We” is really Mike Taormina, a very skilled postdoc in my lab. This required, of course, getting the reporter molecules into the gut, which Mike did by the amazing method of microgavage — carefully inserting a fine capillary into the mouth of a larval fish, injecting the contents, and removing the capillary without damage to the fish. (A larval zebrafish is about 0.5 mm wide x a few mm long, so this procedure has to be done under a microscope.) We used light sheet fluorescence microscopy (which I’ve written about before) to image the reporter molecules, and to determine that they are properly localized in the gut. The optical sectioning capabilities of light sheet microscopy turn out to be very useful in distinguishing the gut reporter signal from the abundant background fluorescence of the zebrafish. We had hoped to detect intrinsic H2S, but the levels were insufficient for this study. Instead, we gavaged H2S donor molecules, and detected their presence. This may seem a bit silly — detecting the very molecules we ourselves put in — but it allowed quantitative measures of sensitivity, and most importantly showed that all this could be done inside a live animal without any apparent toxicity.

In addition to showcasing the Pluth Lab’s remarkable chemical creations, the project ties into my lab’s interests in imaging not only physical processes and biological components of gut ecosystems, but also chemical activity.

After our manuscript was accepted by JACS, it was picked as an “Editor’s Choice,” and we were asked if we’d like to propose a cover image. I rather quickly painted this one as a possibility:

JACS suggested revising it, hence the version at the top right. It’s not great, but I rather like the fish. Usually what happens with cover art submissions is that they’re either accepted or rejected. This time, oddly, JACS was keen on having its own cover artist make a cover, which they did, but incorporating the fish from my submission as part of it. It’s a bit strange, and I have to say I’m not thrilled by the resulting cover (maybe just because I have a low tolerance for gradient shading). But still, it’s nice to have some publicity for our ability to see the smell of rotten eggs!

On the replication crisis in science and the twigs in my backyard

A long post, in which you’ll have to slog or scroll through several paragraphs to get to the real question: can we navigate using fallen sticks?

These days we seem to be inundated with deeply flawed scientific papers, often featuring shaky conclusions boldly drawn from noisy data, results that can’t be replicated, or both. I was reminded of this several times over the past few days: (i) A group published an impressive large-scale attempt to replicate the findings reported in 100 recent psychology studies, recovering the “significant” findings of the original papers only about a third of the time [1]. (ii) A colleague sent me a link to an appalling paper claiming to uncover epigenetic signatures of trauma among Holocaust survivors; it pins major conclusions on noisy data from small numbers of people, with the added benefit of lots of freedom in data analysis methods. Of course, it attracted the popular press. (iii) I learned from Andrew Gelman’s blog, where it was roundly criticized, of a silly study involving the discovery that “sadness impaired color perception along the blue-yellow color axis” (i.e. feeling “blue” alters your perception of the color blue). (The post is worth reading.)

Of course, doing science is extremely difficult, and it’s easy to make mistakes. (I’ve certainly made large ones, and will undoubtedly make more in the future.) What seems to characterize many of the sorts of studies exemplified above, though, is not technical errors or experimental mis-steps, but a more profound lack of understanding of what data are, and how we can gain insights from measurements.

Responding to a statement on Andrew Gelman’s blog, “Nowhere does [the author] consider [the possibility] that the original study was capitalizing on chance and in fact never represented any general pattern in any population,” I wrote:

I’m very often struck by this when reading terrible papers. … Don’t people realize that noise exists? After asking myself this a lot, I’ve concluded that the answer is no, at least at the intuitive level that is necessary to do meaningful science. This points to a failure in how we train students in the sciences. (Or at least, the not-very-quantitative sciences, which actually are quantitative, though students don’t want to hear that.)

If I measured the angle that ten twigs on the sidewalk make with North, plot this versus the length of the twigs, and fit a line to it, I wouldn’t get a slope of zero. This is obvious, but I increasingly suspect that it isn’t obvious to many people. What’s worse, if I have some “theory” of twig orientation versus length, and some freedom to pick how many twigs I examine, and some more freedom to prune (sorry) outliers, I’m pretty sure I can show that this slope is “significantly different” from zero. I suspect that most of the people we rail against in this blog have never done an exercise like this, and have also never done the sort of quantitative lab exercises that one does repeatedly in the “hard” sciences, and hence they never absorb an intuition for noise, sample sizes, etc. (Feel free to correct me if you disagree.) This “sense” should be a prerequisite for adopting any statistical toolkit. If it isn’t, delusion and nonsense are the result.

It occurred to me that it would be fun to actually try this! (The twig experiment, that is.) So my six-year-old son and I wandered the backyard and measured the length and orientation of twigs on the ground. I couldn’t really give a good answer to his question of why we were doing this; I said I wanted to make a graph, and since I’m always making graphs, this satisfied him. This was a nicely blind study — he selected the twigs, so we weren’t influenced by preconceptions of the results I might want to find. We investigated 10 sticks.

Here’s a typical photo:

This particular twig points about 70 degrees west of North (i.e. it lies along 110-290 degrees).

What’s the relationship between the orientation of a twig and its length? Here’s the graph, with all angles in the range [-90,90] degrees, with 0 being North:

The slope isn’t zero, but rather 1.5 ± 2.3 degrees/inch. (It’s almost unchanged with the longest stick removed, by the way.)

The choice of North as the reference angle is arbitrary — perhaps instead of asking if the shorter or longer sticks differentially prefer NW/SE vs NE/SW, as this analysis does, I should pick a different reference angle. Perhaps a 45 degree reference angle would be sensible, since N/S and E/W orientations are nicely mapped onto positive and negative orientation values. Or perhaps I should account for the 15 degree difference between magnetic and true North in Oregon. Let’s pick a -65 degree reference angle (i.e. measuring the twig orientation relative to a direction 65 degrees West of North). Here’s the graph:

Great! Now the slope is -7.0 ± 3.4 degrees/inch. The p-value* is 0.01.** I didn’t even have to eliminate data points, or collect more until the relationship became “significant.”

Clearly the data indicate a deep and previously undiscovered relationship between the length of twigs and the orientation they adopt relative to the geographic landscape, perhaps indicating a magnetic character to wood that couples to the local magnetic and gravitational fields. Or that’s all utter nonsense.

Having done this, I’m now even more convinced that analyzing “noise” is an entertaining thing to do — it would make a great exercise in a statistics class, coupled with an essay-type assignment examining its procedures and outcomes.

Today’s illustration (at the top of the post) isn’t mine; it’s by my 10-year-old, and it coincidentally shows the cardinal directions. (We’ve been playing around a bit with compass-and-ruler drawings.)

* I find it hard to understand how one makes a p-value for a linear regression slope. I did it by brute force, simulating data drawn from a null relationship between orientation and length and counting the fraction of instances with a slope greater than the observed value.

** The astute reader asks, “shouldn’t you apply some sort of multiple comparisons test?” Sure, but how many comparisons did I make?
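For the curious, the brute-force p-value procedure from footnote * is easy to sketch in code. A minimal version follows; note that the twig lengths below are hypothetical stand-ins, since the raw measurements aren’t reproduced in the post, and under the null hypothesis orientations are uniform in [-90, 90) degrees, independent of length:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the ten measured twig lengths, in inches
# (the actual data aren't reproduced in the post).
lengths = np.array([2.0, 3.5, 4.0, 5.5, 6.0, 7.5, 9.0, 11.0, 14.0, 20.0])
observed_slope = -7.0  # degrees/inch, from the -65 degree reference-angle fit

# Simulate the null: orientations uniform in [-90, 90), independent of length.
n_sim = 100_000
angles = rng.uniform(-90, 90, size=(n_sim, lengths.size))

# Least-squares slope for each simulated data set:
# slope = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2), vectorized over rows.
x = lengths - lengths.mean()
slopes = (angles - angles.mean(axis=1, keepdims=True)) @ x / (x @ x)

# p-value: fraction of null slopes at least as extreme as the observed one.
p = np.mean(np.abs(slopes) >= abs(observed_slope))
print(f"p = {p:.3f}")
```

Whether to count only slopes more negative than the observed one (one-sided) or slopes of either sign (two-sided, as above) is itself one of those analysis choices that footnote ** winks at.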

[1] Open Science Collaboration, “Estimating the reproducibility of psychological science.” Science 349: aac4716 (2015).

Review times revisited

Two posts ago, I wondered about how long the average peer-review of a journal article takes to write. Most people I know reported “a few hours” as the average time, with the upper end of the range being a day or two. I emailed several journals — mostly ones that I’ve reviewed papers for or published in during the past year — asking whether they’ve collected data on how much time reviewers spend reviewing. Of eight journals, three replied. Of these, only one had actual data!

The Optical Society of America (which publishes Optics Express and other journals) very nicely wrote:

… there was a survey taken in 2010 of 800 responses from OSA authors. We have the following numbers that varied across the board:

2-5 hours at 37%

6-10 hours at 29%

11+ hours at 22%

I don’t know why the numbers don’t add up to 100%. Perhaps 12% were <2 hours? If so, the median time would be about 5 or 6 hours.

It would have been nice to get more data on this but perhaps, as a colleague of mine cynically noted, journals don’t want to know how much free labor they’re asking people to provide! (I don’t think this is really the case.)

Now I should get back to the review I’m presently working on — I’m at 3 hours so far, and I feel compelled to re-plot the authors’ data to clarify various issues… (They nicely provide it in table form, and I’m fond of making graphs…)

(Today’s illustration: ‘the external view of the left fore leg of the horse,’ which I sketched from a sketch in “Animal Painting and Anatomy” by W. Frank Calderon — an odd book, which apparently defines “Animal” as “horse, dog, cow, or sometimes lion.”)

On fungi and fabrics

A recent article in Physical Review Letters reports on “self-propelled droplet removal” from fibers — the authors designed hydrophobic fibers with the property that when water droplets grow and coalesce on them, the energy released by the coalescence flings the drops off the fibers. The underlying phenomenon is one we’ve all seen: two water droplets, on a window for example, will rapidly merge into one when they come into contact since the one large droplet has less surface area, and therefore less interfacial energy, than two small droplets. The merger is very fast, driven by the large amount of energy associated with an air-water interface being transformed into the kinetic energy of the water. Here, this kinetic energy is sufficient to fling the drop away.

The paper is neat. As noted in the synopsis in Physics, the effect of droplet removal has been seen before in other material contexts, such as planar surfaces. Zhang & colleagues show with experiments and simulation that the high curvature of the fibers causes droplets to fling themselves much more easily than is possible on flat surfaces.

The reason I’m writing about this, though, is that there’s a very nice biophysical connection that isn’t mentioned in the paper. A practical use for surface-tension-mediated launching of droplets has been around for much, much longer than any man-made technology: it’s a mechanism by which many fungi scatter their spores.

In these “basidiomycete” fungi, fluid accumulates at the base of the hydrophobic fungal spores. When the growing droplet reaches the more hydrophilic “sterigma,” it suddenly wets it; this flings the droplet off and the spore goes along for the ride. (It’s somewhat surprising that the droplet doesn’t de-wet the spore, I suppose.) There’s a discussion of this, along with impressive images from high-speed video, in a 2005 paper from Anne Pringle and colleagues, from whom I learned of this remarkable phenomenon. (See the references there for citations of earlier papers, especially work from JCR Turner in the 1990s, on the physics of how the ballistospores work, and decades-old papers on fungal behaviors. There’s also more recent work on this, e.g. here, which looks neat, but which I haven’t read.) The droplets, by the way, are small: a few microns in radius.

The fungi launch their spores at a few meters per second. Can we make sense of this speed? It’s a great candidate for dimensional analysis. (I’ll pause while you think about what the relevant variables that determine the velocity are likely to be…)

We’d expect that the launch speed of a droplet depends on its radius, the density of the liquid, and the interfacial energy, or surface tension. (Surface tension has dimensions of Force / Length.) There’s only one combination of these variables that gives dimensions of velocity; I’ll leave it to the reader to work it out, since dimensional analysis is wonderfully entertaining. (If you’re rusty on dimensional analysis, see here.) You should find that the speed is greater for smaller drops (as you might have expected). If you imagine, as is usually the case, that dimensionless constants are roughly 1, and use “typical” numbers of 1000 kg/m^3 for the density of water, 0.07 N/m for the surface tension of an air-water interface, and 10 microns for the drop radius, you should estimate a speed on the order of 1 meter per second — perfect!
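If you’d rather check your dimensional-analysis answer numerically, here’s a quick sketch using the numbers from the paragraph above (spoiler: the comment names the unique combination):

```python
import math

# The only combination of density rho [kg/m^3], surface tension sigma [N/m],
# and drop radius R [m] with dimensions of velocity is sqrt(sigma / (rho * R)).
rho = 1000.0    # density of water, kg/m^3
sigma = 0.07    # surface tension of an air-water interface, N/m
R = 10e-6       # drop radius, m (10 microns)

v = math.sqrt(sigma / (rho * R))
print(f"launch speed ~ {v:.1f} m/s")  # order of a meter per second, as expected
```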

In the future, my reviews will consist solely of one carefully picked emoji

There’s an interesting question about peer-review of journal articles that I’ve never seen addressed: How long does it take to review a paper? I don’t mean the three weeks or so between getting a request and submitting a review, but rather the time spent actually reviewing. In other words, how many hours does a reviewer spend reviewing the paper and not doing anything else? For myself I would guess that this is 2-3 hours for a typical paper, which includes reading and re-reading as well as writing the actual assessment. (Some papers have been considerably more time-consuming than this; few have been less.)

The question was triggered by a comment from Andrew Gelman (who writes a very good statistics blog), who states that his reviews take 15 minutes each. (!) My reviews are certainly longer than most people’s, but I find it inconceivable that I could do anything meaningful in 15 minutes. The handful of people I’ve talked to report a time of a few hours per paper. I would think that journals might have a good estimate of this, from surveys of reviewers perhaps, since it would be useful for them to know how much labor they are asking their reviewers to provide (for free). However: I asked a journal editor friend of mine, who replied that they haven’t collected any data on reviewers’ time-per-review! (I don’t know about other journals.)

Journals complain (e.g. here) that it’s hard to find willing reviewers. Reviewers complain that they get too many requests to review papers. Authors complain that reviewers are slow and capricious. Everyone complains that peer review is “broken.” Perhaps a better tabulation of what time and effort peer review requires would help address all this!

If you, dear reader, would like to comment on:

(i) how long it takes you to review a paper

(ii) how many requests to review manuscripts you get

(iii) what fraction of requests to review you accept

(iv) what field you’re in

that would be great! I’ll see if any conclusions emerge… Also, of course, it would be great to know if someone else has already done this, with the outcome available to read.

(Today’s illustration: from a card the kids and I made a few days ago.)

On second thought, don’t ask worms for directions

In my last post, I wrote about a remarkable recent paper reporting that C. elegans, the well-studied nematode worm, can sense magnetic fields. In a series of elegant experiments, researchers at UT Austin showed that C. elegans moves at a particular preferred angle to an applied field. Moreover, that angle matches the angle between the Earth’s magnetic field and the vertical at the place the worms are from, suggesting that the worms can use the field to navigate “up” or “down.” But:

My colleague Spencer Chang cleverly realized that there’s a puzzle here: orienting at some angle θ relative to the magnetic field will just point the worm somewhere along a cone centered on the field direction (see illustration, left). This cone touches the vertical, but only at one particular “azimuthal” angle. If we want to move along the vertical, knowing θ is not enough — the worm would also need some information about the azimuthal angle, and it’s hard to imagine what that information could be.

I’m somewhat embarrassed that I didn’t realize this when reading the paper. (I imagined this cone then, but blithely didn’t think more about it, assuming that somehow the cone “averages out” in the worm’s search. This is wrong.)

Thinking further, it’s even worse than it seems: if I calculate the average of the angle θ’ between a vector on the cone and the vertical (see figure, left), it’s greater than θ! (That is, <cos(θ’)> = cos^2(θ), so <θ’> is greater than θ, where <> indicates an average over the azimuthal angle. I’ll leave it to the reader to check my math.) Therefore moving randomly on a cone of angle θ to the field is a worse strategy than simply moving along the field direction, if one wants to be close to the vertical.
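The averaging claim is easy to check numerically. In the sketch below (the 30-degree test angle is an arbitrary choice), the vertical is along z, the field makes angle θ with the vertical, and we average the z-component of unit vectors on a cone of half-angle θ about the field direction:

```python
import numpy as np

theta = np.deg2rad(30.0)  # field angle to vertical = cone half-angle (arbitrary test value)
phi = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)  # azimuth around the cone

# Vertical along z; field in the x-z plane, at angle theta to the vertical.
field = np.array([np.sin(theta), 0.0, np.cos(theta)])
e1 = np.array([np.cos(theta), 0.0, -np.sin(theta)])  # unit vector perpendicular to field, in x-z plane
e2 = np.array([0.0, 1.0, 0.0])                       # perpendicular to both

# Unit vectors on the cone; their z-components are cos(theta').
cone = (np.cos(theta) * field[:, None]
        + np.sin(theta) * (np.cos(phi) * e1[:, None] + np.sin(phi) * e2[:, None]))
cos_theta_prime = cone[2]

print(np.mean(cos_theta_prime), np.cos(theta) ** 2)  # the two should agree
```

Writing it out by hand with this geometry gives cos(θ’) = cos^2(θ) − sin^2(θ)cos(φ), whose average over the azimuthal angle φ is cos^2(θ), matching the claim.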

It seems like one of the following must be true of the worms’ field-assisted navigation:

  1. The worms must have a (separate) mechanism for determining the azimuthal angle. (It’s hard to imagine this.)
  2. The worms are really bad at orienting vertically. (Notably, they are moving one- or two-dimensionally in the experiments, so actual 3D vertical orientation wasn’t tested.)
  3. The worms are not actually trying to orient vertically, so it’s fine that they do a bad job of it.
  4. The worms have some additional mechanism for sensing up and down, perhaps one that (also) isn’t very accurate, and this in conjunction with the magnetic field sensing allows them to orient. One might suspect gravity, but the authors of this paper show that gravitational sensing seems to be absent.

In search of insights, I emailed the corresponding author, Jon Pierce-Shimomura, who very nicely wrote back. He’s certainly aware of how puzzling the worms’ behavior is. Not surprisingly, he advocates for #4 as being likely — I’d pick it also — and suggests that pressure or humidity might give other cues. Mysteries remain, but they’re amenable to further clever experiments, many of which are underway. Obviously, it would be great to watch the worms’ motion in a fully 3D environment, in which their paths, the field direction, and the “cone” of angles between the vertical and the field could all be known, to infer what their strategies for navigation are. This need not even be done live — I imagine that if one could map the “tunnels” the worms have dug through a gel, one could infer their orientational tendencies. Who knew that a creature with 1000 cells could offer such puzzles?

Worm Positioning Systems

It’s great to find a scientific paper that reports something really new — something interesting whose very existence as a substance or behavior or phenomenon was unexpected. It’s doubly great if the paper itself is clear, thorough, and convincing. And it’s almost too much to ask, in addition, for the topic to be biophysical — something that illustrates the connections between living organisms and physical forces or mechanisms. All of these stars aligned a few weeks ago when I came across the following paper in eLife:

Magnetosensitive neurons mediate geomagnetic orientation in Caenorhabditis elegans

C. elegans is a soil-dwelling roundworm. It’s an immensely popular model organism and has been intensely studied for decades. It was the first multicellular organism to have its genome sequenced, the connectivity between each of its few hundred neurons is known, and the pattern of divisions that give rise to each cell in its body has been thoroughly mapped. One would think that every aspect of its sensory capabilities would have been noticed and remarked upon by now. But no: no one had looked at whether it can navigate using magnetic fields. The authors of the paper noted that this may be worth investigating: in real life, these worms burrow through soil, and might need a way to distinguish up from down that local magnetic fields might provide. Moreover, the mechanisms by which other animals sense magnetic fields (various birds, sea turtles, and others) remain quite mysterious, so finding this ability in an experimentally tractable creature would be useful.

The authors constructed simple, elegant experiments monitoring the direction C. elegans travel under various applied magnetic fields. (I recommend reading the paper itself, but here’s a summary of the findings.) They discovered that magnetic fields strongly guide the worms, and more strikingly, that the worms do not travel along the field vector, but rather at an angle to the field that corresponds to the angle between the local magnetic field direction and the vertical in Bristol, England, where the organisms are from. C. elegans from Australia traveled preferentially at a nearly opposite angle to the field as their British counterparts, corresponding to the nearly opposite field angle down under. Specimens from all over the globe migrate at angles in accord with their local field, strongly implying that they can use the magnetic field to distinguish up and down.

On their own, these experiments and measurements would be wonderful, indicating a previously unknown “sense” in these animals. The authors went even further, however: by screening various mutants, they were able to identify particular sensory neurons that are necessary for the magnetic field sensing, and even visualize (with calcium imaging) these neurons “lighting up” when magnetic fields were applied!

What exactly these neurons are (physically) doing is a mystery. Apparently, they have lots of rod-like villi at their end, and one might imagine that subtle motions or deflections induced by magnetic fields trigger the activation of membrane channels, rather similarly to the mechanism behind hearing. What the motions and deflections are, and what materials transduce them, would be fascinating to uncover.

In all, it’s a wonderful paper, and one of my favorites that I’ve read in the past year. The only sour note struck is not in the article itself, but in the university (UT Austin) press release about the work, which notes that it “might open up the possibility of manipulating magnetic fields to protect agricultural crops from harmful pests” — apparently even the most elegant and insightful science needs a ridiculous comment about “practical” applications.

Note added: See the next post for more on this topic!