How to lie with scaling

Occasionally, things go exactly as I’d hoped. We’re discussing scaling in my Physics of Life class, starting with things like the scaling of volume and area with size. I mentioned in passing that this issue comes up in advertising, and since students seemed interested, I brought the following to the next class — an interactive example adapted from Edward Tufte’s classic The Visual Display of Quantitative Information:

Inflation, the students hopefully know, refers to the change in purchasing power of a currency over time. Tufte shows a political ad in which the evils of Carter-era inflation are graphically depicted:

dollars (tufte) no values

The original has five different dollars, from Eisenhower to Carter, and also shows a number for the relative value of each, which I’ve erased in the image above.

I asked the students, “Just looking at the images: A dollar in the bottom year is worth X times as much as a dollar in the top year. What’s X?”

The first three responses were 1/2, 1/3, and 1/4, so I made these the options for a clicker question for the whole class and then polled them. Here’s the outcome:


Two-thirds of the class assumed, given the image, that the purchasing power of the Carter-era dollar was ~1/4 that of the Eisenhower dollar — a very reasonable response. The true value:

dollars (tufte)

0.44 (close to 1/2)!

So, I asked, were the makers of the ad being dishonest? The first few responding students guessed that the images were simply unrelated to the values, or that they were deliberately mis-scaled. I replied that there’s a way the makers of the ad could state that they were completely, perfectly honest. Then, a student cleverly suggested that the linear dimensions differ by 0.44. In other words, the length of the small dollar is 0.44 x the large one’s length, the width is 0.44 x the large one’s, and so the area is 0.44 x 0.44 = 0.19 x that of the large one! (You can measure the dollar images yourself and see that this is really the case.) So it’s a perfectly honest data visualization, but one that exploits scaling as well as the difficulty of accurately perceiving areas and lengths to manipulate the viewer. Watch out!
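The trick is easy to verify numerically; here's a quick sketch (mine, not the ad-makers'):

```python
# Linear vs. area scaling: the ad shrinks each *linear* dimension of
# the bill by the dollar's relative value, so the drawn area falls off
# quadratically and the bill looks far smaller than the value ratio.

def apparent_area_ratio(value_ratio):
    """Area ratio when both length and width are scaled by value_ratio."""
    return value_ratio ** 2

print(apparent_area_ratio(0.44))  # ~0.19: the bill looks ~1/5 the size
```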

The 2014 Nobel Prizes: Switched at Birth?

superresolution_pixels_9Oct2014

I was thrilled yesterday morning to learn that super-resolution microscopy is the subject of a Nobel Prize this year. (Or more accurately, that Eric Betzig, Stefan Hell, and William E. Moerner were awarded the Nobel Prize “for the development of super-resolved fluorescence microscopy.”) Super-resolution microscopy is wonderful, as I’ve written before. In all its various flavors, it uses clever optics and statistics to transcend the “diffraction barrier” and allow visualization of sub-100-nanometer structures, such as the inner architectures of cells.

What’s odd, though, is that this is this year’s Chemistry Nobel Prize. There’s nothing very chemical about it — yes, it involves molecules, but so does everything else. It is, of course, hard to define what Physics is, or what Chemistry is, or what any other field is these days. Still, it seems fair to state that physicists look for phenomena or build tools that reflect general features of systems, rather than particular details. For super-resolution imaging, part of its beauty is its generality. Stimulated emission (the basis of Hell’s STED method) applies to all fluorescent molecules; the localization-based methods of Betzig and others apply to a very large class of switchable molecules. In addition, both methods make use of very general principles of optics that again transcend particular details — it’s microscopy, after all, a subset of physics!

In contrast, consider this year’s Physics prize, which went to the development of blue LEDs. Yes, it’s important, but is it physics? The principles involved don’t apply generally across semiconductors — that’s in fact why it was so hard to develop blue LEDs! The particular material type matters. Figuring out semiconductor light emission requires detailed and difficult considerations of electronic structures in specific systems. This reminds me of… Chemistry!

My theory to reconcile all this is that the Nobel committee was considering two excellent topics, blue LEDs and super-resolution microscopy, and the different subcommittees crashed into one another in a Swedish hallway, scattering their files everywhere. Picking them up, the chemistry group got the physics files, and vice versa. At least the literature subcommittee wasn’t involved, as far as we know…

Preprint: “The Physics of Life”

Heart_1Oct2014

For a while I’ve thought I should write up a paper on my biophysics-for-non-science-majors course, just to document what its motivations are and how I’ve approached teaching it, in case it helps spur others to create similar courses. I’ve finally done this; a preprint is on arXiv here: (“The Physics of Life,” an undergraduate general education biophysics course).

I submitted it a few weeks ago to the American Journal of Physics, who immediately rejected it since they won’t publish papers on whole courses (just little pieces of courses). I will try sending it to Physics Education; it’s considerably longer than their usual article size, so even if they take it, it might get eviscerated. The arXiv version might be the sought-after director’s cut that captures the original vision!

If it’s rejected, my backup dissemination plan is to leave paper copies at bus stops and airport terminals.

In case anyone from my class this term sees this post and reads the paper: you’ll spoil lots of the surprises coming up, but I admire your investigative spirit!

Branching STEMs

I recently came across (via [1]) a neat interactive graph from the US Census Bureau illustrating the career paths that STEM majors take:

STEM career paths

One can click on particular categories of majors, revealing for example that more than half of engineering majors end up doing engineering, but that only about a tenth of physical science majors end up in the physical sciences (image below). This of course highlights the importance of developing transferable skills! (Note: I haven’t looked at all into the methodology behind the graph, what years the career endpoints are tabulated at, etc. This would be interesting to do.)

STEM career paths -- physics

Of course, this point about transferable skills applies to graduate students as well as undergraduates. It is (or should be) obvious to everyone that most graduate students will not end up in academic faculty positions — structurally, there are vastly more students than positions. In biology, less than 10% of Ph.D. students will become tenure-track faculty, as illustrated in this excellent graphic from the American Society for Cell Biology:


A sensible thing to do would be to explicitly declare training for diverse outcomes a goal of graduate programs, and to convey to undergraduates that they should look for (and demand!) this in graduate programs. There’s an editorial in this week’s Nature that makes this point as well:

But instead of culling graduate students or abandoning the PhD, why not rebrand it? Rather than being a first rung on a ladder that ends with tenure-track professor (unless you tumble off), doctorates could be treated more like a trail that feeds through to a number of different paths (some easier, some harder, some even rather scary).

(There’s also a neat story profiling a few excellent scientists who left academia.)

So what can one do to help the development of transferable skills? At a small scale, at my and Eric Corwin’s joint group meetings, we decided to try something new this summer. So far we’ve invited two physicists outside academia to join us by Skype for 20 minutes or so — i.e. not long enough to be too burdensome to them — to tell us about their career paths and answer questions. One was my first Ph.D. student; the other was a friend from graduate school. Both careers are centered on programming, at Microsoft and Stellar Science. Both sessions have been great — very interesting, and very useful. They’ve helped outline not only how to develop skills, but how to communicate that one has skills worth being paid for. The second of these challenges is, I think, as difficult as the first!

We’ll see if we come up with any other interesting activities…



The ice cream and the dead people


As in each of the past six years, I co-organized a Physics + Human Physiology day camp for 11th graders for a week in July, in which we explored wide swathes of science and also learned a bit about how college works. (It’s part of the SAIL umbrella of camps — see here and here.) It was fun; everything went smoothly, and the students loved the week. Some highlights:

ice cream and dead people

As you see, ours is a thematically very diverse camp. The ice cream refers to an activity led by Eric Corwin and his lab, in which they made liquid nitrogen ice cream, which allows one to discuss the microstructured phases of matter in foods, and also gives one the chance to make ice cream flavored with garlic, avocado, cream of mushroom soup, and other non-standard ingredients. (Mushrooms were apparently nightmare-inducing.) In past years, my lab has done other food-related activities — looking at things like mayonnaise under microscopes. We omitted these this year for lack of time, but perhaps we’ll revive them. Or who knows, maybe we can make an entire food-related camp. The possibilities are endless — just today I learned about blowing bubbles in bread dough to reveal its physical properties (from Karen Guillemin’s farmer’s market blog).

While Eric made ice cream with half the kids, my lab and I took the rest and explored fluids of a different sort, doing activities with soap films and surface tension and then moving to my lab, where we looked at lipid membranes and also manipulated microparticles using laser traps. (The traps were certainly a hit. It is rather magical to move objects by shining light on them, and everyone got a turn!)

The “dead people” refers to a trip to the Human Physiology department’s anatomy lab, in which there are cadavers to examine. (I didn’t go this year, but I’ve gone in the past.) This is an intense activity — we prepare the kids beforehand and discuss aspects of the hour, but it is nonetheless sobering in its impact to see and feel bodies that were living people not long ago. It’s immensely educational, both from a humanistic and a scientific perspective — for the latter, one gets a great appreciation for the biomaterial feats a self-assembled collection of cells can accomplish.

This SAIL camp has always gone well, but the past two years have been especially enjoyable. I’m not sure why, but I think two things that have helped have been (1) a persistent focus on hands-on activities (as opposed to lectures or talks), and (2) leaving things a bit more unstructured than I naturally tend to. This year, for example, I decided to follow up an excellent show of physics demonstrations from Stan Micklavzina with a relatively free period in which, with just a few introductory words and some question-and-answer in the middle, students played with magnets, wires, speakers, and other things illustrating electromagnetic induction. This was hugely successful — the kids could have stayed with this for twice the allotted time. Connecting a battery to a dissected speaker coil and watching it jump, one of the girls exclaimed, ‘This is awesome!’ — a sentiment that seemed to be shared by many others. It’s easy to forget that most students have never had a chance to play with “toys” like these, and that doing so is fun and rewarding.

Also on the theme of fascinating outcomes from loosely structured processes: the picture at the top is a painting we made with the “pollockizer,” a device from Richard Taylor’s group (video here) that mimics the drip paintings of Jackson Pollock. I gave a brief introduction to fractals, and to Richard’s work exploring fractals and Pollock, and then Richard’s graduate students graciously set up the paint-and-pendulum system for us to explore.

I’ll end by noting that a lot of faculty and students in both the Physics and Human Physiology departments volunteered to do activities for the camp, in this and previous years, and it’s this generosity that has made the camp successful. If any of you who helped are reading this: Thanks!

My Kardashian index is…

…0.004 !

There is, these days, no shortage of metrics for quantifying scientists’ impact. Aside from simple counts of citations (i.e. how many times one’s research papers are cited by other papers), there’s the now-ubiquitous h-index [1], which combines the number of papers one has published and the citations per paper, as well as a g-index, a-index, C-index, m quotient, and more [2,3]. Even aside from the ridiculous proliferation, I don’t really like these things — they can easily lead to the delusion that knowing a simple number might replace the actual labor-intensive evaluation of the merits of one’s work. But finally, there’s an index I can believe in: in a fun-to-read letter in Genome Biology, Neil Hall proposes the Kardashian index, “a measure of discrepancy between a scientist’s social media profile and publication record based on the direct comparison of numbers of citations and Twitter followers.”

Hall writes:

“Consider Kim Kardashian; she comes from a privileged background and, despite having not achieved anything consequential in science, politics or the arts . . . she is one of the most followed people on Twitter and among the most searched-for person on Google.”

Hall notes that, in this age of social media, we should worry that there are scientists falling into the same mold. For kicks (it’s not a serious paper), he plots number of Twitter followers vs. total number of citations for 40 scientists.

There’s a trend, and he proposes considering deviations from it as a measure of disproportionate social media fame, or lack of fame. (Specifically, K = T / (43.3 * C^0.32), where K is the Kardashian-index, T is the number of Twitter followers, and C is the citation count.) A Kardashian-index much greater than one implies Kardashian-ness; a small Kardashian-index “suggests that a scientist is being undervalued.”

I’ve got two Twitter followers (one of whom I think is a spambot) and 1800 citations, giving me a K-index of 0.004. (If you’re wondering why I even have a Twitter account, see my earlier post.) Having just submitted a paper for publication today, I’m hoping to drive my index still lower…

Update Aug. 18, 2014: The “1800 citations” is from Google Scholar. Web of Science just lists 1100, and has a more accurate list of my publications, so I’ll put more confidence in its count. This pushes my Kardashian Index up, unfortunately, to 0.005!
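For anyone who wants to check these numbers, Hall's formula as quoted above is a one-liner (my sketch, not Hall's code):

```python
# Hall's Kardashian index: K = T / (43.3 * C**0.32), where T is the
# number of Twitter followers and C is the total citation count.

def kardashian_index(followers, citations):
    return followers / (43.3 * citations ** 0.32)

# The numbers from this post: 2 followers, two competing citation counts.
print(round(kardashian_index(2, 1800), 3))  # 0.004 (Google Scholar count)
print(round(kardashian_index(2, 1100), 3))  # 0.005 (Web of Science count)
```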


Viscosity in two dimensions

lipid bilayer

Continuing my trend of belatedly writing short descriptions of papers my group has published, this one came out in May, describing a new approach we developed for measuring the viscosity of lipid membranes:

“Measuring Lipid Membrane Viscosity Using Rotational and Translational Probe Diffusion,” Tristan T. Hormel, Sarah Q. Kurihara, M. Kathleen Brennan, Matthew C. Wozniak, and Raghuveer Parthasarathy, Phys. Rev. Lett. 112, 188101 (2014). [Link]

Viscosity is one of the most important material properties of any fluid, characterizing its resistance to flow (or more technically, its response to shear stresses). We’re intuitively familiar with viscosity, observing for example how warm honey flows much more easily than cold honey. For water, oils, and many other three-dimensional fluids, viscosity is well-characterized, tabulated in books and databases. For lipid bilayers, however, the two-molecule-thick liquids (illustrated above) that make up cellular membranes, viscosity is poorly quantified. It’s hard to measure, especially because lipid membranes are essentially two-dimensional fluids, whose flow behaviors differ quite dramatically from their three-dimensional counterparts. Understanding lipid viscosity is important for understanding how structures in membranes like protein clusters and cholesterol-rich “rafts” move, how proteins can alter the fluid properties of membranes, etc. In addition, it’s just embarrassing that the state of our understanding of the lipid bilayer, nature’s most important two-dimensional fluid, lags so far behind that of three-dimensional fluids.

We therefore set out to develop a new and better approach to quantifying lipid membrane viscosity. Our method hinges on Brownian motion: the random jiggling experienced by all objects due to ever-present thermal energy. As Einstein explained over a hundred years ago, the magnitude of the random motion of a particle in a liquid is a function of the liquid’s viscosity; the greater the viscosity, the lesser the motion, a relationship that holds in any number of dimensions. In fact, if one knows the size of the diffusing particle and the temperature, measuring the “diffusion coefficient” (which characterizes the random motion) is sufficient to extract the fluid’s viscosity. One can take this approach to measuring membrane viscosity, attaching particles to a membrane and watching their Brownian motion, but one runs into a problem: the effective size of the diffusing object may be different than the particle size. As illustrated below, one can imagine a variety of geometries for the particle-membrane linkage:

membrane linkages

On the left, the effective size of the diffusing object is bigger than the particle size; on the right, it’s smaller.
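To make the Einstein logic concrete, here's the simplest case: a sphere in an ordinary three-dimensional fluid, where the Stokes-Einstein relation lets one invert a measured diffusion coefficient to get the viscosity, provided the particle radius is known. (This is a sketch of the 3D case only; a two-dimensional membrane requires different hydrodynamics, but the inversion works the same way — and the linkage-geometry ambiguity above is exactly what makes the "known radius" assumption dangerous.)

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def viscosity_from_diffusion(D, radius, temperature):
    """Invert the 3D Stokes-Einstein relation, D = kT / (6 pi eta R),
    to get the fluid viscosity eta from a measured diffusion
    coefficient D, assuming the particle radius R is known."""
    return K_B * temperature / (6 * math.pi * D * radius)

# A 1-micron-diameter sphere diffusing at ~0.44 um^2/s at room
# temperature implies a viscosity close to water's (~1e-3 Pa s).
eta = viscosity_from_diffusion(D=0.44e-12, radius=0.5e-6, temperature=298)
print(eta)
```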

We realized that in addition to the “translational” Brownian motion of particles (i.e. their meandering position), one can examine their rotational motion: the orientation of particles also shows diffusive behavior, which likewise depends on particle size and fluid viscosity. Measuring both translational and rotational diffusion allows one to determine both the effective particle size and the viscosity. We came up with a way of making paired spherical tracer particles to link to membranes…

paired tracers

We can image these with fluorescence microscopy, and the pairs allow us to visualize their orientation:

Using this approach, we were able to measure lipid bilayer viscosity. Moreover, we were able to study what happens when a protein that’s involved in membrane deformation interacts with the lipid bilayer, discovering that it dramatically increases the two-dimensional viscosity — the first time such an effect has been reported.
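The two-measurements-two-unknowns idea can be sketched with the familiar three-dimensional Stokes-Einstein and Stokes-Einstein-Debye relations. (This is only an illustration of the logic; a membrane is a two-dimensional fluid, and the paper's actual analysis uses two-dimensional hydrodynamics.)

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def size_and_viscosity(D_t, D_r, temperature):
    """Solve the 3D Stokes-Einstein relation, D_t = kT/(6 pi eta R),
    together with the Stokes-Einstein-Debye relation,
    D_r = kT/(8 pi eta R^3), for the radius R and viscosity eta.
    Their ratio gives R = sqrt(0.75 * D_t / D_r); either relation
    then gives eta."""
    radius = math.sqrt(0.75 * D_t / D_r)
    eta = K_B * temperature / (6 * math.pi * D_t * radius)
    return radius, eta

# Round trip: generate D_t and D_r for a known sphere, then recover
# its size and the fluid's viscosity from the two diffusion coefficients.
R_true, eta_true, T = 0.5e-6, 1.0e-3, 298  # 0.5 um sphere in ~water
D_t = K_B * T / (6 * math.pi * eta_true * R_true)
D_r = K_B * T / (8 * math.pi * eta_true * R_true ** 3)
R_fit, eta_fit = size_and_viscosity(D_t, D_r, T)
print(R_fit, eta_fit)  # recovers ~0.5e-6 m and ~1.0e-3 Pa s
```

The point of measuring both: the ratio of the two diffusion coefficients fixes the effective size, so the viscosity estimate no longer depends on guessing the particle-membrane linkage geometry.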

We were very happy with how this project turned out. It was also gratifying to see that others liked it — it got chosen for a “synopsis” from the American Physical Society, one of just six for the week, and was also featured in a “research highlights” blurb in Nature Chemical Biology (?!).  By a great coincidence, a paper on granular materials from my neighbor Eric Corwin also got a synopsis in the same week!

Tristan Hormel, the first author on our paper, is a graduate student in my lab, working now on a very different (and better?!) way of revealing fluid properties of lipid membranes. Sarah Kurihara, the second author, was an excellent UO undergrad biology major; she’s now doing fascinating things as a Peace Corps volunteer in Lesotho. Her blog is here: (I recommend it.) Katy Brennan and Matt Wozniak were great summer undergrads in the lab, here as part of a REU (Research Experiences for Undergraduates) program.

As mentioned, we’re continuing to explore the fascinating fluid dynamics of lipid membranes. We’re also using particle motions to examine viscosity in other contexts, for example inside fish guts (really), which I’ll hopefully write about in the future.