Our Physics Department Colloquium this week is on a topic I’m fond of: the analysis of superresolution microscopy images. This isn’t surprising, since I invited the speaker, Alex Small, with whom I co-wrote a recent review paper on the subject.
The problem that superresolution microscopy confronts is that it’s hard to see tiny things. Specifically, a microscope can’t resolve objects that are closer together than roughly half the wavelength of light (a few hundred nanometers) — they’ll just appear as a blur. Since the 19th century, we’ve known that this is a “fundamental” limit on optical imaging. This frustrates, for example, anyone looking at cells, since many subcellular structures are considerably smaller than a few hundred nanometers.
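To put a rough number on that limit: Abbe’s criterion gives the smallest resolvable separation as the wavelength divided by twice the numerical aperture of the objective. A quick back-of-the-envelope check (the wavelength and NA below are illustrative choices, not values from any particular experiment):

```python
# Abbe diffraction limit: d = wavelength / (2 * NA).
wavelength_nm = 530        # green fluorescence emission (illustrative)
numerical_aperture = 1.4   # a typical high-NA oil-immersion objective
d_nm = wavelength_nm / (2 * numerical_aperture)
print(f"smallest resolvable separation: {d_nm:.0f} nm")
```

This lands at just under 200 nm — “roughly half the wavelength,” and much larger than many subcellular structures.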
Localization-based superresolution microscopy (there are other superresolution methods as well) gets around this limit in a clever way. Imagine a room full of people whom you can’t see, but the room is laced with directional microphones. At random, someone shouts; using the microphones and some complicated analysis, you determine where that voice (probably) came from. Someone else shouts; you locate that person too. And so on, until you have a good set of information about where each person is. You could even call this an “image.” In superresolution microscopy, we do the same thing with light. From our review:
Figure 1 from this paper:
This was first proposed by Eric Betzig in the mid-1990s, and was implemented by a few groups (including Betzig’s) in 2006.
Though it sounds simple, going from [c] to [d] in the figure above is challenging. In reality, one’s camera image of single molecules is blurry, noisy, and pixelated:
Figure 2: Simulated CCD image of three fluorophores (wavelength 530 nm, scale 100 nm/pixel, and N ≈ 400 photons). Orange circles indicate the true fluorophore positions. Blue lines show 5 × 5 pixel regions of interest centered at the three brightest local intensity maxima. [In other words: the actual single molecules are at the orange dots; this gives an image that looks like the gray one shown. From just the gray image, could you guess where the orange dots are? How accurately?]
How do we determine the location of the molecule that gives the above image? How accurately can this be done? The first question is one that I’ve explored (and, if I had more time, would explore further…); the second is the subject of our review paper, as well as a few other recent review papers [link1, link2].
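To get a feel for the first question, here’s a minimal sketch of the simplest localization estimator: the photon-weighted centroid of the spot. Everything here is an assumption for illustration — a Gaussian PSF of width 0.8 pixels, 400 detected photons (loosely matching the figure caption), shot noise only, and no background. Real analyses typically fit a Gaussian to the spot by least squares or maximum likelihood instead.

```python
import math
import random

random.seed(0)

S = 0.8          # PSF standard deviation, in pixels (assumed)
N_PHOTONS = 400  # photons detected from one fluorophore (assumed)
SIZE = 9         # simulated image is SIZE x SIZE pixels

def simulate(x0, y0):
    """Drop N_PHOTONS photons, each Gaussian-distributed about (x0, y0),
    into a pixel grid; shot noise arises naturally from the sampling."""
    img = [[0] * SIZE for _ in range(SIZE)]
    for _ in range(N_PHOTONS):
        px = math.floor(random.gauss(x0, S))
        py = math.floor(random.gauss(y0, S))
        if 0 <= px < SIZE and 0 <= py < SIZE:
            img[py][px] += 1
    return img

def centroid(img):
    """Simplest estimator: the photon-weighted center of mass."""
    total = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, count in enumerate(row):
            total += count
            sx += count * (x + 0.5)  # pixel i spans [i, i+1)
            sy += count * (y + 0.5)
    return sx / total, sy / total

true_x, true_y = 4.3, 4.7  # true position, in pixels
errors = []
for _ in range(200):
    ex, ey = centroid(simulate(true_x, true_y))
    errors.append(math.hypot(ex - true_x, ey - true_y))
rms = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"rms localization error: {rms:.3f} pixels")
```

Even this crude estimator pins the position down to a small fraction of a pixel — the precision scales roughly as the PSF width divided by the square root of the photon count, so it shrinks well below the diffraction limit. Background noise and pixelation degrade this, which is part of what the review papers analyze.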
Superresolution microscopy has attracted a lot of attention in recent years. Fascinatingly, it is an imaging technique that doesn’t yield an image, but rather a set of estimates of point positions, from which the experimenter has the task of constructing a statistically valid representation of the underlying object. This construction isn’t trivial, and it becomes even more challenging if one wants to answer questions like “are these 10 molecules in a cluster, or 10 glimpses of the same molecule?”
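As a toy illustration of that reconstruction step (all numbers here are made up): given a list of estimated positions, one common, simple rendering is a 2D histogram whose bin size is chosen to match the localization precision rather than the camera pixel size.

```python
import random

random.seed(1)

# Synthetic data: two point-like "structures" 100 nm apart -- below the
# diffraction limit -- each observed 500 times with ~20 nm localization
# precision. Positions are in nm.
true_centers = [(200.0, 250.0), (300.0, 250.0)]
localizations = [
    (cx + random.gauss(0, 20), cy + random.gauss(0, 20))
    for cx, cy in true_centers
    for _ in range(500)
]

def render(points, bin_nm=20, size_nm=500):
    """Render localizations as a 2D histogram; the bin size reflects the
    localization precision, not the camera pixel size."""
    n = size_nm // bin_nm
    img = [[0] * n for _ in range(n)]
    for x, y in points:
        i, j = int(x // bin_nm), int(y // bin_nm)
        if 0 <= i < n and 0 <= j < n:
            img[j][i] += 1
    return img

img = render(localizations)
# Crude ASCII view of the row passing through both structures:
row = img[250 // 20]
print("".join(" .:*#"[min(c // 15, 4)] for c in row))
```

The printed row shows two distinct peaks 100 nm apart, which a conventional image could not separate. Note that the two clusters are distinguishable here only because the synthetic data says so; deciding whether real clusters are distinct molecules or repeated glimpses of one molecule requires the kind of statistical care mentioned above.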
Of course no imaging technique reveals an object as it “really” is, but rather reflects some imperfect flow of information from a source, through a measuring device, and to a detector, all of which deform and distort the signal in complex ways. But with localization-based superresolution imaging, the complexity of the connection between object and image is especially evident.