Neuroscience’s Block Party

10/30/2013
Amy R. Volpert

Refinements in electron microscopy techniques are allowing researchers to examine the "connectedness" of neurons. Amy Volpert takes a look at how electron microscopy is changing our image of brain wiring.


Ironically, the brain—the organ of reason and analysis—presents one of the biggest challenges when it comes to functional and anatomical studies. With billions of neurons, whose axons and dendrites extend from the cell bodies in many different directions, the structural complexity of the human brain is unrivaled. Compounding the problem is the ability of neurons to associate and re-associate with each other to form neural networks and circuits, where signals are passed from neuron to neuron in an organized fashion.


Surface rendering of a neostriatal spiny neuron reconstructed from an SBEM volume.
Image courtesy of Mark H. Ellisman, NCMIR.

This complexity makes it difficult to combine structural and functional data, according to Kevin Briggman, an investigator at the National Institute of Neurological Disorders and Stroke at the NIH. “Usually when people record functionally from large populations of neurons, they’re only recording from the cell bodies of neurons. If you record a signal from only the cell body of a neuron, you don’t know the shape of the rest of the neuron and, more importantly, the synaptic connections contributing to the generation of the signal.”

Even in a small animal like a zebrafish, neural connections tend to be quite complex. And identifying individual neurons, along with their connections, is much more difficult in mammals than in simpler organisms. “In invertebrate species that’s not really a problem because each neuron is roughly in the same place in every fly or every leech or every C. elegans. As soon as you get into vertebrates, especially mammals, that’s mostly no longer true,” explains Briggman.

Still, cutting-edge electron microscopy techniques are starting to provide clearer images of neuron structure in large brain volumes, while genetically encoded probes could provide the often elusive link between structure and function, raising the question: just how far can we go in mapping the human brain?

New kid on the block

Serial block-face electron microscopy (SBEM) has emerged in recent years as an important tool for imaging brain tissue down to the intracellular level. In SBEM, the sample is embedded, treated with metal ions, and placed on an automated ultramicrotome in the vacuum chamber of a scanning electron microscope. The setup is often referred to as a “Denkotome” in honor of Winfried Denk from the Max Planck Institute for Medical Research in Heidelberg, who first adapted the technique to brain imaging (1).

SBEM derives its name from the fact that the surface (the "face") of the sample (the "block") is imaged directly by EM; a thin section is then removed from the surface, and the freshly exposed block face is re-imaged. Although the removed section is lost, preventing further imaging, there is less distortion of tissue features on the block face, which results in high-resolution images that can be used for 3D reconstruction of cellular and subcellular organization. This technique is particularly useful for imaging whole neurons, including axons and dendrites, in their biological context, enabling the visualization of interactions between neurons and other cells.
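
To make the workflow concrete, here is a minimal Python sketch of the image-then-cut cycle; the function names and parameters are hypothetical stand-ins for microscope and ultramicrotome control, not a real instrument API.

```python
import numpy as np

# Hypothetical stand-ins for the microscope and the in-chamber
# ultramicrotome; a real system would drive hardware here.
def image_block_face(shape=(512, 512)):
    """Pretend to image the exposed block face (returns a random 2D frame)."""
    return np.random.rand(*shape)

def cut_section(thickness_nm):
    """Pretend to shave a section of the given thickness off the block.
    The removed section is discarded and cannot be re-imaged."""
    pass

def acquire_sbem_volume(n_sections=100, thickness_nm=25):
    """Image-then-cut loop: every frame is taken from the block face itself,
    so successive frames are intrinsically aligned."""
    frames = []
    for _ in range(n_sections):
        frames.append(image_block_face())
        cut_section(thickness_nm)
    return np.stack(frames)   # (n_sections, height, width) volume

volume = acquire_sbem_volume()
print(volume.shape)           # (100, 512, 512)
```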

An alternative approach is the automated tape-collecting ultramicrotome (ATUM; previously known as the automatic tape-collecting lathe ultramicrotome, or ATLUM) developed by Jeff Lichtman and colleagues at Harvard University (2). The ATUM slices off very thin sections of tissue that are then transferred onto a transparent tape as they fall. Although this allows for staining and re-imaging of the cut sections, the distortion of tissue features can be a major drawback. “With a block face technique, you basically get one shot to take a good picture, and then you shave that slice off and it’s gone forever. So that’s the advantage of [pre-cutting]. The disadvantage of cutting all of the sections ahead of time is that it can introduce section distortions that can lead to alignment problems after the fact. And with the block face technique, because you image the block face before you cut, the images are intrinsically aligned,” explains Briggman.
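
The alignment burden Briggman describes for pre-cut sections can be illustrated with a generic registration step, such as estimating the translation between adjacent sections by phase correlation; the sketch below uses synthetic images and a textbook method, not the actual ATUM alignment pipeline.

```python
import numpy as np

# Generic phase-correlation shift estimate between two adjacent sections.
# Illustrative only: synthetic "sections", translation-only alignment.
def estimate_shift(img_a, img_b):
    """Return the (row, col) shift to apply to img_b so it lines up with img_a."""
    cross_power = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts larger than half the image wrap around to negative values
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
section_a = rng.random((128, 128))
section_b = np.roll(section_a, shift=(4, -7), axis=(0, 1))  # displaced copy
print(estimate_shift(section_a, section_b))                 # (-4, 7) undoes the shift
```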

The colorful world of sample prep and labeling


Cerebellar granule cell neuron reconstructed from an SBEM volume. The nucleus is shown in orange, plasma membrane in purple, mitochondria in yellow, and lysosomes in red.
Image courtesy of Mark H. Ellisman, NCMIR.

Sample preparation is as important as the hardware for generating good data with SBEM or the ATUM approach. Mark Ellisman, director of the National Center for Microscopy and Imaging Research (NCMIR), and colleagues are developing and refining techniques using heavy metals to stain biological samples. Here, the metal ions penetrate the tissue and provide the necessary contrast to distinguish cell features. But recent work has gone beyond simple heavy metal treatment, with researchers (including Ellisman) designing new molecular probes for EM.

In the past, specific targeting of proteins or small molecules could only be achieved through the use of conventional light microscopy, most often by employing fluorescent protein labels or tags. The ability to use multiple labels on a single sample is important for generating the multicolor depictions of neural connections that attract so much attention—a feature that has been lacking for EM studies. “We would be quite happy with two [colors] right now. Even if we could get one or two in there to specifically label proteins of interest,” notes Briggman with regard to EM labeling.

But a new generation of genetically encoded probes for EM applications is making progress on that score, according to Ellisman (3,4). “The thing that we’ve done which will put real legs on the technology is to, in collaboration with Roger Tsien’s group [at University of California at San Diego] and Alice Ting [at Massachusetts Institute of Technology], develop a set of genetically introducible labels that are then operated on in a certain way, either with light or enzymatically.” Using such probes, Ellisman and his colleagues have already been able to image multiple tags in a single sample with EM.

It takes a robust algorithm (or a crowd)

From a data storage perspective, EM images are big. And when it comes to SBEM, where the block is imaged repeatedly to enable 3D reconstruction, a lot of data is generated. “An average volume that we acquire is between 2 and 10 terabytes,” says Ellisman, quickly adding that dataset acquisition can take anywhere from 3–4 days for a modest volume to 2–3 weeks. From there, scientists must then grapple with how best to extract key objects of interest from these volumes.
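
A back-of-envelope calculation shows how such volumes reach the terabyte range; the voxel grid and bit depth below are assumed round numbers for illustration, not figures from the article.

```python
# Rough size estimate for a hypothetical SBEM dataset.
# All numbers here are illustrative assumptions, not reported values;
# they simply land in the multi-terabyte range Ellisman quotes.
x_pixels = 32_000        # pixels per image row
y_pixels = 32_000        # pixels per image column
n_sections = 5_000       # number of block-face images (z slices)
bytes_per_voxel = 1      # 8-bit grayscale

total_bytes = x_pixels * y_pixels * n_sections * bytes_per_voxel
print(f"{total_bytes / 1e12:.1f} TB")   # ~5.1 TB for this hypothetical volume
```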

Another major challenge when it comes to data analysis is making correlations between datasets. “That’s an open challenge in my view right now: how do you integrate future datasets into a dataset in which you have function and structure together?” says Ellisman. According to Briggman, the initial challenge, before you can even get to the “fun part” of analyzing which neurons are connected to one another, is to segment out what is in fact a neuron from the dataset.

Briggman and his colleagues try to address these issues from the start of a project. “Our strategy to combine functional and structural datasets is to put everything into a computational modeling framework, so that as you acquire dataset after dataset you can try to fit the data into that framework.”

As for parsing the imaging data, it turns out that computers aren’t always perfect, according to Briggman. “Image analysis has been the main challenge; basically different labs are pursuing different analysis techniques. Most of us are using some form of machine learning. But even the best machine learning algorithms don’t have low enough error rates, so we still need humans in the loop to help us error correct the output of a neural network, for example.” Others in the field have taken a similar approach, turning to a large group of students, or even crowdsourcing, to process data.
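
One generic way to put “humans in the loop” is to flag the voxels a network is least certain about for proofreading. The sketch below assumes a hypothetical per-voxel membrane probability map and arbitrary thresholds; it is not any particular lab’s pipeline.

```python
import numpy as np

# Queue ambiguous voxels for human review. The probability map and the
# thresholds are illustrative assumptions, not a real lab's settings.
def flag_for_proofreading(prob_map, low=0.3, high=0.7):
    """Return a boolean mask of voxels whose membrane probability is
    ambiguous (neither clearly background nor clearly membrane)."""
    return (prob_map > low) & (prob_map < high)

rng = np.random.default_rng(0)
prob_map = rng.random((64, 64, 64))           # stand-in for network output
needs_review = flag_for_proofreading(prob_map)
print(f"{needs_review.mean():.0%} of voxels queued for human review")
```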

All of these developments are setting the stage for what Briggman thinks would be the ideal brain imaging experiment. “The ideal experiment is we take both the functional data and the anatomical data from the exact same animal. While the animal’s still alive, we record from a large fraction of the neurons for as long as we can. Then we fix it and use the serial block face technique to reveal the synaptic connections between all neurons. Anatomical landmarks, like the vasculature for example, allow us to then align the two functional and structural datasets.”
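
One simple way to use such landmarks, sketched below, is to pick corresponding points (for example, vessel branch points) in the functional and structural volumes and fit an affine transform by least squares; the coordinates here are made up for illustration, and this is not Briggman's actual registration code.

```python
import numpy as np

# Fit an affine mapping from functional-volume landmarks to their
# structural-volume counterparts. Landmark coordinates are invented.
def fit_affine(src, dst):
    """Fit dst ≈ src @ A + t from matched 3D landmarks (N x 3 arrays)."""
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    params, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return params[:3], params[3]                       # A (3x3), t (3,)

functional_pts = np.array([[10.0, 12.0, 3.0], [40.0, 8.0, 5.0],
                           [25.0, 30.0, 9.0], [5.0, 45.0, 2.0]])
structural_pts = functional_pts * 1.02 + np.array([1.5, -0.8, 0.3])  # toy "truth"

A, t = fit_affine(functional_pts, structural_pts)
print(np.round(functional_pts @ A + t - structural_pts, 3))  # residuals ~0
```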

And this type of experiment is beginning to pop up in the scientific literature. In 2011, Briggman, along with Denk and other collaborators, reported in the journal Nature the combination of two-photon laser scanning microscopy with SBEM to reveal the mechanism of direction-selectivity in the mouse retina (5). In August, the researchers followed up this study with a complete connectomic reconstruction of the inner plexiform layer of the mouse retina (6). Using SBEM along with manual annotation and machine learning algorithms, the team was able to reconstruct 950 neurons and their connections. The work represents a strong validation of the SBEM approach for neural analysis.

Possible impact on the Brain Initiative

Advances in SBEM could not be occurring at a more opportune moment, with the launch of the Brain Initiative, a federally funded project championed by President Obama aimed at functionally mapping the human brain. But, according to Briggman, whose research involves both functional and structural whole-brain analysis, the Brain Initiative will hopefully put equal emphasis on the structural imaging of the brain. Ellisman does see a role for SBEM in this type of effort. “You can do correlated light microscopy and then this kind of electron microscopy to nest the two scales of data. So this is the main opportunity now, particularly in the context of this Brain Initiative.”

The recent improvements in SBEM technology are making the technique more attractive to other researchers, driving Ellisman and his colleagues at the NCMIR core imaging facility to provide imaging services (or collaborations), technical information, and SBEM datasets to the broader research community.

“There are even people beginning to utilize these techniques in the context of neurodegenerative disorders and model systems,” Ellisman notes. The block party might just be starting.

References

1. Denk W and Horstmann H. 2004. Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol. 2(11):e329.

2. Kasthuri N and Lichtman JW. 2010. Neurocartography. Neuropsychopharmacol. 35:342-343.

3. Martell JD, Deerinck TJ, Sancak Y, Poulos TL, Mootha VK, Sosinsky GE, Ellisman MH, Ting AY. 2012. Engineered ascorbate peroxidase as a genetically encoded reporter for electron microscopy. Nat Biotechnol. 30(11):1143-8.

4. Shu X, Lev-Ram V, Deerinck TJ, Qi Y, Ramko EB, Davidson MW, Jin Y, Ellisman MH, Tsien RY. 2011. A genetically encoded tag for correlated light and electron microscopy of intact cells, tissues, and organisms. PLoS Biol. 9(4):e1001041.

5. Briggman KL, Helmstaedter M, Denk W. 2011. Wiring specificity in the direction-selectivity circuit of the retina. Nature. 471(7337):183-8.

6. Helmstaedter M, Briggman KL, Turaga SC, Jain V, Seung HS, Denk W. 2013. Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature. 500(7461):168-74.

Keywords:  Neuroscience