Then we played bones, and I’m yelling domino

On some days, being a physics professor is a real slog — many-hour meetings, struggles with grant funding, endless emails, and near-futile attempts to eke out time for actually thinking about science. But there are other days in which it’s really awesome. Today, for example, I got to cart around an elephant femur. Here it is in my office, with the kids for scale:

[Image: kids and elephant femur]

Why do I have an elephant femur in my office? In my “Physics of Life Class” (i.e. biophysics for non-science majors; description here), we’re discussing biomechanics and bone size — why big animals need disproportionately wide bones compared to smaller ones. I’ve illustrated this before with pictures of animal skeletons, but I learned recently that we have on campus an actual elephant skeleton. The elephant was named Tusko. He worked in a circus about a hundred years ago, and had a sad life — he’s been referred to as “the world’s most chain-bound elephant.” To learn more about Tusko and how he posthumously ended up at Oregon, see http://cas.uoregon.edu/2014/02/the-elephant-in-the-room/.  Thanks to Edward Davis, I was able to borrow Tusko’s femur, and cart it across campus to class. Thanks to Samantha Hopkins, I also had a dog femur.

One gets a lot of stares pushing a cart with an elephant femur. Random students:

[Image: students and elephant femur]

Kyle Lynch-Karup, a teaching assistant for the course:

[Image: Kyle with the elephant femur]

A brief summary of the physics: The elephant’s femur isn’t just a proportionately larger version of the dog’s. It’s about 10 times longer, but nearly 20 times wider. Why? Leg bones have to support the weight of the animal, which is proportional to its volume, which scales as length to the third power. (Think of a cube: its volume equals the length of a side, cubed.) The strength of a bone, however, is proportional to its cross-sectional area, which scales as length-squared. (Think of a square.) So as we imagine enlarging a small animal, its weight increases much more than its bone strength, if we keep its proportions the same. To counteract this, large animals have disproportionately wide (and hence large-area) bones. For the experts: the really neat thing is that behaviorally similar animals like antelope, wildebeest, etc., have bone diameters that scale as length^1.5, exactly the form for which bone strength and weight have the same scaling with length. We worked through all this in class; the very general message of a lot of what we do is that scaling arguments are powerful.
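To make the scaling bookkeeping concrete, here is a minimal Python sketch. The factor-of-10 length ratio and the roughly 20x width ratio are the dog-vs-elephant numbers from above; the rest is just the length-cubed vs. length-squared argument:

```python
# Why big animals need disproportionately wide bones: a scaling sketch.
# Assumes weight ~ length^3 and bone strength ~ cross-sectional area ~ diameter^2.

length_ratio = 10.0  # the elephant femur is roughly 10x longer than the dog's

# Isometric scaling (same proportions): diameter grows in step with length.
isometric_width_ratio = length_ratio  # 10x wider

# To keep stress (weight / area) constant, we need diameter^2 ~ length^3,
# i.e. diameter ~ length^1.5 -- the "mechanical similarity" scaling.
constant_stress_width_ratio = length_ratio ** 1.5  # ~31.6x wider

print(f"isometric prediction:       {isometric_width_ratio:.0f}x wider")
print(f"constant-stress prediction: {constant_stress_width_ratio:.1f}x wider")
# The actual elephant femur is ~20x wider: in between the two, but clearly
# disproportionately wide compared to simple isometric scaling.
```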

I’m increasingly fond of having students work through worksheets in class, in small groups as I wander around commenting and helping; here’s today’s: Bones_allometry_and_mechanical_similarity. It works much better than lecturing!

So: I got to play with an elephant femur and teach people about allometric scaling. I also didn’t have any meetings, I went to a neat session of our science teaching journal club, and I attended an interesting seminar on quantum measurement. Today was a good day. I didn’t get to work on an image analysis puzzle I’ve been dying to spend time on, but that’s why I’m lifting the post title from Ice Cube rather than Lou Reed.

 

I should think of a title involving the words “Small” and “Microscopy”

Our Physics Department Colloquium this week is on a topic I’m fond of: the analysis of super-resolution microscopy images. This occurrence isn’t surprising, since I invited the speaker, Alex Small, with whom I co-wrote a recent review paper on the subject.

The problem that superresolution microscopy confronts is that it’s hard to see tiny things. Specifically, a microscope can’t resolve objects that are closer together than roughly half the wavelength of light (a few hundred nanometers) — they’ll just appear as a blur. Since the 19th century, we’ve known that this is a “fundamental” limit on optical imaging. This frustrates, for example, anyone looking at cells, since many subcellular structures are considerably smaller than a few hundred nanometers.
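To put a rough number on it (this is just the half-wavelength form quoted above, ignoring the numerical-aperture factor; the 530 nm wavelength is the one used in the simulated image further down):

$$ d_{\min} \approx \frac{\lambda}{2} \approx \frac{530\ \mathrm{nm}}{2} \approx 265\ \mathrm{nm} $$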

Localization-based superresolution microscopy (there are other superresolution methods as well) gets around this limit in a clever way. Imagine that you have a room full of people that you can’t see, but that the room is laced with directional microphones. Randomly, someone shouts; using the microphones and some complicated analysis, you find where that voice (probably) came from. Again, someone shouts; you again find that person. And so on, until you gather a good set of information about where each person is. You can even call this an “image.” For superresolution microscopy, we do this with light. From our review:


Figure 1 from this paper:  Schematic illustration of localization-based superresolution imaging. (a) A hypothetical object with spatial structure at scales smaller than the wavelength of light. Orange circles indicate fluorophores [i.e. things emitting light]. (b) Conventional fluorescence imaging of the structure in panel a, with diffraction making the fine structure unresolvable. (c) Fluorescence imaging of a stochastically activated subset of the fluorophores. [In other words, a few people yelling -- following our analogy above.]  (d) Image analysis revealing the positions of the fluorophores in panel c. (e) Repeated imaging of sparse subsets of fluorophores, ideally yielding the positions of all the fluorophores and thereby providing superresolution imaging of the object. In panels b and c, pixelation and noise are not depicted. The scale bar is half the wavelength of the imaged light.

This was first proposed by Eric Betzig in the mid 1990s, and was implemented by a few groups (including Betzig’s) in 2006.

Though it sounds simple, going from [c] to [d] in the above figure is challenging. In reality, one’s camera image of single molecules is blurry, noisy, and pixelated, like this:

Figure 2: Simulated CCD image of three fluorophores (wavelength 530 nm, scale 100 nm/pixel, and N ≈ 400 photons). Orange circles indicate the true fluorophore positions. Blue lines show 5 × 5 pixel regions of interest centered at the three brightest local intensity maxima.  [In other words: the actual single molecules are at the orange dots; this gives an image that looks like the gray one shown.  From just the gray image, could you guess where the orange dots are?  How accurately?]

How do we determine the location of the molecule that gives the above image? How accurately can this be done? The first question is one that I’ve explored (and that if I had more time, I’d explore more…); the second is the subject of our review paper, and also a few other recent review papers [link1, link2].
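For a feel of the first question, here is a minimal Python sketch; it is a toy version, not the analysis from our review. It simulates a blurry, pixelated, photon-noise-limited image of a single fluorophore using roughly the parameters of Figure 2 (530 nm light, 100 nm pixels, ~400 photons; the PSF width and the true position below are made-up assumptions), and then localizes it with the simplest possible estimator, an intensity-weighted centroid:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-molecule image, roughly matching the parameters of Figure 2:
# 530 nm light, 100 nm pixels, ~400 detected photons. The PSF width and the
# "true" position below are made-up assumptions for illustration.
wavelength_nm = 530.0
pixel_nm = 100.0
n_photons = 400
sigma_px = (0.5 * wavelength_nm / 2.355) / pixel_nm  # ~half-wavelength FWHM blur, in pixels

true_x, true_y = 7.3, 6.8   # true fluorophore position, in pixels
size = 15                   # 15 x 15 pixel region

# Each detected photon lands at the true position plus diffraction blur;
# binning the photons into pixels gives the (noisy, pixelated) camera image.
photon_x = rng.normal(true_x, sigma_px, n_photons)
photon_y = rng.normal(true_y, sigma_px, n_photons)
image, _, _ = np.histogram2d(photon_y, photon_x, bins=size, range=[[0, size], [0, size]])

# Simplest possible localization: the intensity-weighted centroid.
ys, xs = np.mgrid[0:size, 0:size] + 0.5   # pixel-center coordinates
est_x = (image * xs).sum() / image.sum()
est_y = (image * ys).sum() / image.sum()

print(f"true position:     ({true_x:.2f}, {true_y:.2f}) pixels")
print(f"centroid estimate: ({est_x:.2f}, {est_y:.2f}) pixels")
# Rule of thumb: localization precision ~ PSF width / sqrt(N photons),
# here ~ 112 nm / 20 ~ 6 nm -- far below the ~265 nm diffraction limit.
# (Real analysis must also contend with background, camera noise, and
# overlapping molecules, which is where the interesting algorithms come in.)
```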

Super-resolution microscopy has attracted a lot of attention in recent years. It’s fascinating that it’s an imaging technique that doesn’t give an image, but rather yields a set of estimates of point positions, from which the experimenter has the task of constructing a statistically valid representation of the underlying object. This construction isn’t trivial, and it gets even more challenging if one wants to answer questions like “are these 10 molecules in a cluster, or 10 glimpses of the same molecule?”

Of course no imaging technique reveals an object as it “really” is, but rather reflects some imperfect flow of information from a source, through a measuring device, and to a detector, all of which deform and distort the signal in complex ways. But with localization-based superresolution imaging, the complexity of the connection between object and image is especially evident.

Grabbing graduate students with graphs

How does a department recruit graduate students? Like many physics departments, ours brings accepted prospective students to visit, funneling most of them into two days during which we try to convey information about our research, the university, the area, etc. Faculty in different research areas (see http://physics.uoregon.edu/profiles/faculty/) think of ways to spend an hour or so describing their fields. One common approach is to give talks or presentations. Imagine a day full of these, and you can see how it can be a grueling experience for the students, and an ineffective way to convey information that students might retain. (In general, we know these days, lecturing is not ideal for learning.*)

Can we do something different and better? Something active? The goals are to convey a sense of what research we do, to highlight themes that span a diverse set of faculty research interests, and to keep students awake and engaged (at 2-3pm — near the end of a packed day).

For the “complex systems” wing of our department, which spans about 9 faculty interested in things as different as biophysics and magnetic materials, a few of us** implemented the following activity:

Connecting research groups. Each faculty member provided two “snippets”; each snippet consisted of a figure from a paper along with its caption, the paper title, and the author list. We divided students into a few groups, gave each group all the snippets, and instructed them to find commonalities of theme, authors, methods, whatever they could imagine, that link one faculty member to another. The task was to construct a graph (in the nodes & edges sense) that spans all faculty. Even better: to construct a cyclic graph with two edges per node.
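As an aside, the cyclic-graph target is just asking for a single loop that visits every faculty member once; if you wanted to check a proposed graph, a few lines of Python would do it. The names and edges below are made-up placeholders, not the groups’ actual graphs:

```python
# Toy check: does a set of edges form one cycle that visits every node?
# The names and edges are placeholders, not the students' actual graphs.
faculty = ["A", "B", "C", "D", "E"]
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "A")]

def is_single_cycle(nodes, edges):
    # Every node must have exactly two edges...
    neighbors = {n: [] for n in nodes}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    if any(len(nbrs) != 2 for nbrs in neighbors.values()):
        return False
    # ...and walking along the edges from any node must visit all of them.
    seen, prev, cur = {nodes[0]}, None, nodes[0]
    while True:
        candidates = [n for n in neighbors[cur] if n != prev]
        if not candidates or candidates[0] in seen:
            break
        seen.add(candidates[0])
        prev, cur = cur, candidates[0]
    return len(seen) == len(nodes)

print(is_single_cycle(faculty, edges))  # True for this placeholder loop
```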

This sounds rather abstract, but was remarkably fun. (Two of us tried it out beforehand, discovering its fun-ness.) The exercise lasted about 20 minutes, not counting our discussion of it afterwards, during which students were talking, were engaged, and were clearly poring over the methods and topics illustrated by the snippets to absorb what they were and what they implied about approaches to physics. We (faculty) left them alone for a while, and then chatted with the students to see what they were thinking, answer questions, and offer advice. (They didn’t need much help.) Two of the three groups’ graphs are shown above, in the image at the top of this post. (The green edges are one group’s connections; the brown are the other’s.) The third group’s graph is included in the image at the bottom of the post; I suggested to them about 5 minutes in that they stick to two edges per faculty member, which is why some, but not all, of the nodes are rather prickly. (These images are my own re-copyings of their graphs, which were all done on separate pieces of paper.) We then had the students explain their reasoning, and elaborated on various concepts that arose. It was lively.

History. This activity has as its origin a neat paper discussed in our science teaching journal club (http://w.lifescied.org/content/12/4/628.short) on classifying objects (e.g. superheroes), the categorization of which illuminates naive vs. deeper understandings of concepts. (I adapted this recently for my Physics of Life class, with the subject of physical mechanisms by which animals avoid sinking in water, but that’s another story.) I thought of an activity in which students, given some information about each of our research interests, construct categories that cover the research groups. Eric Corwin proposed the graph / connections idea, which is much better, and then he, Ben McMorran, Benjamín Alemán, and I fleshed out its implementation.

[Image: the third group’s graph]

* See e.g. http://web.mit.edu/jbelcher/www/TEALref/Wieman_Change_2007.pdf, or http://blogs.cornell.edu/cte/2012/12/03/carl-wieman-taking-a-scientific-approach-to-the-teaching-and-learning-of-science/ for a video version

** Eric Corwin, Ben McMorran, Benjamín Alemán, and me

Konstructing a poster

I’ve been reading bits and pieces of Geometry of Design, by Kimberly Elam, which I found randomly on a shelf in our Art and Architecture library. The book has many great examples of design and composition, and thoughts on the wonders of golden rectangles, pentagrams, and other shapes. It devotes a few pages to this excellent poster by Jan Tschichold from an exhibition of constructivist art:

[Image: Jan Tschichold poster]

It’s beautifully clean, conveys information, and draws the eye to the prominent “setting sun.” One can get a sense of how neat, and non-obvious, the arrangement is by flipping it upside down, which looks awful:

[Image: Jan Tschichold poster, upside down]

There’s a striking asymmetry in how we look at images — if I stare at the upside down poster, I find that my eye “wants” to move left-to-right, top-to-bottom, but is thwarted by the elements at the upper left. (Coincidentally, we spent part of my Physics of Life class today exploring our anatomical left-right asymmetries and their origins — a fun story for another time.)

I was thinking about this since our hosts-and-microbes systems biology center is organizing a symposium for this summer. The symposium looks like it will be great, and it already has a neat poster made by a postdoc in the center, with images and text describing the point of the meeting, speakers, registration information, etc. I started wondering what a more minimal, “modern” (in the historical sense) poster would look like. Cutting and pasting and lifting a flagellum from an old drawing, I tried to modify the Tschichold poster, but in an “upward and rightward” sense — unlike the waning days of constructivism, we’re in the waxing days of studying host-microbe systems. I came up with this:

[Image: my constructivist bacteria poster]

The symposium title isn’t quite correct in typeface or in content (it’s actually “modeling our microbial selves”), and I obviously didn’t bother typing in the correct participant list. (Sadly, Piet Mondrian won’t be showing up.) I rather like it. It’s too uninformative to fly these days, but we can use it when inventing alternate histories of scientific research…

The $60,000 graduate student

How much does a graduate student cost? Short answer: close to $60k per year. Long answer:

Several times in the past few weeks, the topic of graduate student cost has come up. The “real” cost of a graduate student in the sciences, i.e. the money that a grant has to provide to support a graduate student doing Ph.D. research, is considerably higher than the money that the student sees — the latter is about $24k / year, and the former is now close to $60k / year. I can’t remember ever seeing an illustration of what the pieces of the 60k are, and how they’ve changed in recent years, so I thought I’d put together a graph from my own lab’s budget data.

At Oregon and elsewhere, there are four categories that make up the overall cost (a rough arithmetic sketch of how they combine follows the list):

  • Salary — the actual pay of the graduate student researcher
  • Fringe benefits — health insurance, fees, etc. Insurance is the biggest piece of this.
  • Indirect costs — For every dollar of grant money received by the researcher, the university (like all universities) receives money from the granting agency, at some negotiated rate that is supposed to account for building costs, maintenance, etc. At Oregon, this indirect cost rate is presently 0.45, meaning that of every $1.45 budgeted in an NSF grant, $1.00 goes to the research, and $0.45 goes to the university as indirect costs. (This rate is not unusual.)
  • Tuition — The University charges tuition for graduate students.  (Indirect costs are not charged on tuition.)
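Here is the rough arithmetic sketch promised above. The ~$24k salary and the 0.45 indirect-cost rate are from the text; the fringe and tuition numbers are placeholder guesses for illustration, not our actual budget figures, and I’m assuming indirect costs apply to salary plus fringe but not to tuition, per the note above:

```python
# Rough sketch of the yearly cost of one graduate student on a grant.
salary = 24_000       # roughly what the student sees (from the post)
fringe = 4_000        # placeholder guess: health insurance, fees, etc.
tuition = 15_000      # placeholder guess: varies widely between universities
indirect_rate = 0.45  # Oregon's negotiated rate (not charged on tuition)

indirect = indirect_rate * (salary + fringe)
total = salary + fringe + tuition + indirect

print(f"indirect costs: ${indirect:,.0f}")   # $12,600 with these numbers
print(f"total per year: ${total:,.0f}")      # ~$56,000 with these guesses;
                                             # the real numbers land near $60k
```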

Notes:

  • I didn’t extensively mine the numbers, and only picked four years since the 2006-2007 academic year (when I started at the University of Oregon) for which it was easy to extract the above components.
  • The stated numbers would be similar at other U.S. universities. At some places, tuition is higher; at some places, tuition for advanced graduate students is zero.

Here are the graphs:

[Image: Graduate student cost at the University of Oregon, Physics]

The graph on the right is in inflation-adjusted 2010 dollars. There isn’t much of an increase over the past six years, though there is a recent rise of a few thousand dollars, driven largely by ever-increasing tuition costs.

I’ll leave further interpretation of the graphs to the reader. It is interesting to note that the UO administration has a goal of increasing the number of graduate students here. That’s an expensive task, and less than half the expense is the actual salary of the students! One could support twice as many students per grant if tuition and indirect costs were eliminated, but that would bring problems of its own…

It would be great to have data extending further into the past, but this seems hard to gather.

Culling the (science) herd?

I came across a short article at Science’s news site that notes that “Up to 1000 NIH Investigators Dropped Out Last Year” — i.e. the number of investigators funded by the NIH is presently dropping, a likely consequence of shrinking funding. The article includes this graph:

[Image: NIH number of investigators]

What I find striking about the graph is the large rise in the number of NIH-funded scientists over the past few decades, which isn’t commented on at all. I quickly made a plot of US population over the same period, scaled to match (i.e. the 1970 values are lined up, and the vertical range is 2.5x this value):

[Image: NIH number of investigators, with US population]
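The scaling itself is trivial; here is a sketch of the same comparison with made-up placeholder numbers (not the actual NIH or census data), in case anyone wants to redo it with real numbers:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder series only -- NOT the actual NIH or census numbers.
years = np.arange(1970, 2015, 5)
investigators = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.35, 2.3])       # arbitrary units
population    = np.array([1.00, 1.05, 1.11, 1.17, 1.22, 1.29, 1.35, 1.41, 1.47])

# Line up the 1970 values, as in the plot above.
population_scaled = population * (investigators[0] / population[0])

fig, ax = plt.subplots()
ax.plot(years, investigators, label="NIH investigators (placeholder)")
ax.plot(years, population_scaled, label="US population, scaled (placeholder)")
ax.set_ylim(0, 2.5 * investigators[0])   # vertical range = 2.5x the 1970 value
ax.set_xlabel("year")
ax.legend()
plt.show()
```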

This isn’t news, but it’s yet again interesting to note that the number of scientists has grown disproportionately compared to the overall population.

I find it remarkable that calls for increasing science funding seem to be driven by a sensible desire for more (and more stable) funding per research lab, but seem oblivious to the fact that increased funding creates strong incentives for hiring. As we’ve seen in the recent past, consequences can take the form of less funding on average per lab, greater uncertainty, less risk-taking in scientific proposals, etc. Why is “population control” so rarely discussed?

Could dark matter be less boring than I thought?

I learned from a colleague today that recent astrophysical observations may provide another line of evidence for the existence of dark matter — the almost totally inert “stuff” that, from indirect inferences, seems to make up most of the mass of the universe. Despite the fact that the nature of dark matter is considered one of the big mysteries in physics, and the fact that I’m a physicist, I don’t really care.*

Why? An illustration: A few years ago I read The Golden Compass. (Yes, it’s a kid’s book. All the Proust novels were checked out.) Dark matter plays a key role in it, as a substance that links wildly different universes, and other things. Reading it, it struck me as sad that the real dark matter, whatever it is, won’t be nearly as interesting. It will be some particle, with some mass, and maybe another property or two to be tabulated in some particle data book. It will couple to gravity, but that’s about it. That’s why it’s “dark,” and why it is therefore guaranteed to be fundamentally dull. (I hold the “condensed matter physicists” view that everything interesting, from iron atoms conspiring to make magnetism to lipids working together to make membranes, comes about because of interactions.)

But: Another colleague pointed out a new paper that puts forth the idea that gravitational interactions with dark matter perturb the cloud of comets around the solar system, driving the bombardments that are probably responsible for the (roughly) periodic mass extinctions on Earth. When I was in college, I read Richard Muller’s fascinating book Nemesis, about his hypothesis that the sun has a dim companion star that periodically nudges comets and, as above, drives extinctions. Muller and others never found such a star, and the whole idea, highly speculative, has never really gone anywhere. The dark matter idea is even more speculative, and, in fact, my colleague who pointed it out was mocking it.

Still, if there’s any chance at all of a meaningful connection between dark matter and the extinction of dinosaurs, I’m happy about it!

* Fun fact: my first paper even mentions dark matter in the galaxy! [http://scitation.aip.org/content/aapt/journal/ajp/66/9/10.1119/1.18956]