Crowdsourcing Microscopic Analysis

06/13/2012
Andrew S. Wiecek

Thanks to advances in microscopy design and automation, researchers are faced with larger and larger image data sets. But who’s going to analyze all those images? You are. Andrew S. Wiecek takes a look at how microscopists are recruiting an army of image analysts through crowdsourcing games.


For the past six years, researchers in Sebastian Seung’s lab at the Massachusetts Institute of Technology have been developing machine-learning approaches to process stacks of microscopic images. The idea was to automate the analysis of immense image datasets of neural networks. But the researchers realized that they would be facing significant challenges.

At Eyewire, players help neuroscientists to trace neurons through stacks of 2D EM images. Source: Eyewire

At Biogames, players can try their hand at diagnosing malaria by identifying infected blood cells. Source: Biogames

At the University of California, Los Angeles, Aydogan Ozcan and his group are working to improve the diagnosis of infectious diseases in developing countries. Source: Ozcan Research Group

Researchers Leandro Burnes (left) and Jinseop Kim (right) are part of the connectomics team in Sebastian Seung's MIT lab that developed Eyewire. Source: Leandro Burnes

“Machine learning will continue to improve in the future, yet it will produce some sort of error,” says Jinseop Kim, a researcher in Seung’s lab. “That's where humans have to get involved in order for the image analysis to yield meaningful results.”

Eventually, this realization led to the development of an online game and community called Eyewire that allows the public to complete parts of the microscopic analysis of a retinal data set by proofreading what has been done by the computer. Likewise, other microscopists are now attempting to engage the public to advance their research. But while the concept seems great for researchers, the execution and results of these games remain a work in progress.

Microscopy’s Data Deluge

While the term “data deluge” may have been coined by geneticists in response to the exponential increase of data output provided by next-generation sequencing, it’s not a problem that’s unique to that field. Thanks to automated high-throughput imaging techniques, microscopists are now faced with overwhelming datasets as well.

In the field of connectomics, labs like Seung's are mapping neural networks on the cellular level. To do so, these researchers take a section of brain tissue, slice it very thinly, and then image those slices using an electron microscope. The process is called serial sectioning and results in a stack of 2D grayscale images of a particular block of tissue. In each imaged slice, cross-sections of neurons are defined by changes in the shades of gray. But to get from these grayscale stacks of 2D images to a final 3D reconstruction, the individual neurons must be identified and traced.
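As a toy illustration of that tracing step, a neuron cross-section that stays connected from slice to slice can be collected by simple region growing over a binarized image stack. This is only a sketch of the idea, not the lab's actual pipeline (real connectomics systems use learned boundary detectors, and the `trace_neuron` function, seed point, and stack layout here are hypothetical):

```python
from collections import deque

def trace_neuron(stack, seed):
    """Toy region-growing trace: starting from a seed voxel, collect all
    connected foreground voxels across a stack of 2D slices (6-connectivity).

    `stack[z][y][x]` is truthy where slice z shows neuron cross-section.
    `seed` is a (z, y, x) tuple inside the neuron of interest.
    """
    depth, height, width = len(stack), len(stack[0]), len(stack[0][0])
    visited = {seed}
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        # Visit the six face-adjacent neighbors: up/down a slice, and
        # the four in-plane directions within a slice.
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < depth and 0 <= ny < height and 0 <= nx < width
                    and stack[nz][ny][nx] and (nz, ny, nx) not in visited):
                visited.add((nz, ny, nx))
                queue.append((nz, ny, nx))
    return visited
```

The errors discussed below map directly onto this sketch: an over-aggressive foreground mask merges two neurons into one region, while gaps in the mask truncate a trace early.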

Meanwhile, at the University of California, Los Angeles, Aydogan Ozcan and his group are working to improve the diagnosis of infectious diseases in developing countries. To do so, they have been developing lightweight, compact, lens-free microscopy attachments for cell phones, allowing physicians to bring powerful diagnostic tools to patients in remote areas (1). While these tools have the potential to improve healthcare for the masses, they will also produce billions of pieces of visual information.

“This is the big data problem,” says Ozcan. “You have these devices that increase the bandwidth, but who’s going to look at them?”

So first, researchers turned to the machines. The hope was that they could teach computers to analyze the images and trace the neurons (2) or detect malaria-infected cells without human intervention. Moving beyond traditional algorithms, microscopists have taken a machine-learning approach focused on developing artificial-intelligence (AI) programs that refine their image analysis based on human feedback.

But despite the sophistication of these programs, they are far from flawless. In connectomics, the programs sometimes stop tracing a neuron prematurely or cross the boundary of one neuron into another. Because of these errors, microscopists must carefully go over the computer’s work, correcting and completing the machine’s tracings to produce an accurate reconstruction.

But relying on humans to correct the machine’s analysis is time consuming. To complete just one neuron, an expert trained in the analysis of connectomics images needs about 10-50 hours using desktop software specially designed for visualizing and proofreading the data. Considering that there are an estimated 100,000 cells per cubic millimeter in the mouse retina (3), at a pace of 1-4 cells per week per researcher it would take centuries to complete even this relatively small mouse retina connectome unless an army of image analysts were recruited.
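The arithmetic behind that “centuries” figure is easy to check with a back-of-envelope calculation using the article’s round numbers (and ignoring parallel work by multiple researchers):

```python
# Rough check of the article's time estimate (illustrative numbers only).
CELLS_PER_MM3 = 100_000          # estimated cells per cubic mm of mouse retina (ref. 3)
WEEKS_PER_YEAR = 52

for cells_per_week in (1, 4):    # one researcher finishes 1-4 cells per week
    years = CELLS_PER_MM3 / cells_per_week / WEEKS_PER_YEAR
    print(f"{cells_per_week} cell(s)/week -> ~{years:,.0f} years for one researcher")
```

At one cell per week this works out to roughly 1,900 years, and to nearly 500 years even at four cells per week, so “centuries” is, if anything, an understatement for a single analyst.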

“We got to the point where we realized that there was going to be such a deluge of data, images, and science that has to be conducted, that we had to involve citizen scientists from the crowd,” says Leandro Burnes, who is affiliated with Seung’s lab.

An Army of Image Analysts

The concept of crowdsourcing is not new: a group of non-experts from the general public performs simple tasks in lieu of an expert. But recently, scientists have begun attempting to harness the wisdom of the crowd for their own research.

An excellent example of this is the online protein-folding game called FoldIt, which was launched by researchers from the University of Washington in 2008. The game asks players to compete against one another in folding a protein into its most stable form. As a result of the game, the developers published several protein structures that were optimized by the game players, showing the potential for the public to contribute to solving complex scientific problems (4).

While FoldIt was focused on proteomics, the field of microscopy seems particularly well suited for crowdsourcing applications. First, as mentioned before, machine-learning approaches still aren’t as good at high-throughput image analysis as humans. Second, microscopy images of a world beyond our sight have already proven appealing to the masses, as evidenced by the popularity of microscopy photo contests such as Nikon’s Small World and Olympus’ Bioscapes.

“We are seeing involvement from a diverse group ranging from elementary school students to members of the elderly population. Microscopy lends itself quite well to having the citizen scientists in the crowd being involved, because there are some simple microscopy analysis tasks that people can get engaged in – like proofreading and coloring in what the artificial-intelligence missed. We start with simple tutorial microscopy tasks that slowly increase in their level of difficulty, allowing participants to train and progress towards more advanced image analysis,” says Burnes.

Indeed, that type of image analysis training is what Ozcan had in mind when his group recently developed a crowdsourcing game for malaria diagnosis. In the game, non-experts are asked to diagnose malaria by identifying infected blood cells (5). In the end, untrained undergraduate gamers achieved an accuracy within 1.25% of the trained pathologists.
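For a crowd game like this, the answers that several players give for the same cell must be fused into a single diagnosis. The published BioGames system uses a more elaborate probabilistic fusion (5), but the basic idea can be sketched with a simple majority vote; the function name, labels, and tie-breaking rule below are hypothetical:

```python
from collections import Counter

def majority_vote(labels):
    """Fuse multiple non-expert labels for one cell into a single call.

    `labels` is a list of 'infected' / 'healthy' votes from different players.
    Ties are broken toward 'infected' so that borderline cells are flagged
    rather than silently passed as healthy.
    """
    counts = Counter(labels)
    if counts["infected"] >= counts["healthy"]:
        return "infected"
    return "healthy"

# Hypothetical votes from five players on two red blood cell images.
print(majority_vote(["infected", "infected", "healthy", "infected", "healthy"]))  # infected
print(majority_vote(["healthy", "healthy", "healthy", "infected", "healthy"]))    # healthy
```

The appeal of such fusion is that individually noisy answers tend to cancel out: as more players vote on each cell, the aggregate label converges toward the consensus, which is how untrained gamers can approach expert-level accuracy in aggregate.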

“We don’t always have to work with non-expert crowds like undergraduate students,” says Ozcan. “You could also use trained pathologists to correct the mistakes made by their colleagues and the algorithm, and that collective telepathy is going to be much better than one expert analysis.”

In addition, Ozcan believes his group’s game will have a more important role in training expert pathologists. Such training is tedious, so the entertaining interfaces could motivate and monitor the progress of the trainee. But before it can be used for the diagnosis of infectious diseases, the game needs to be further improved and must go through clinical trials.

Likewise, the Eyewire developers are satisfied with their initial results but know they’re still in the beta-testing phase of the game. In the four months since the project was launched, the site has had more than 45,000 visitors. In addition, the players have already finished correcting three complete neurons and are working on their fourth.

“It’s been on average one neuron per month, it’s a great start but that’s not fast enough for our ultimate goal. We need to get more people involved, and improve our game and machine learning algorithm in order to get closer to accomplishing our scientific targets out of this project,” says Kim.

As Fun as Possible

To get through the data deluge, these games will need to grow their communities of players. That means converting more visitors into players, and then keeping those players engaged with the community over the long haul.

“If we can expand our gaming communities and get more to join over the next months and years, we will be able to train and build more expert users. Then we will be able to conduct really meaningful science and research,” says Burnes. “Hopefully, we’ll be able to support the creation of diagnostic tools and therapies for some ailments that affect many members of our society.”

Attracting new players and keeping current users interested means providing an incentive for the gamers. Above all else, these science games have a leg up on their entertainment counterparts because they serve a larger sense of purpose. But that purpose must be explained to the community, so education is an important aspect of these crowdsourcing games. For Eyewire, the purpose of the game is described on dedicated pages, in an introductory video, and on wiki pages that are open to community contribution.

Then comes the idea of building a community around these games, so that individual players can interact with one another and feel like they are working together towards this greater goal. “If you can nurture a community properly, you’d be amazed what the citizen scientists can achieve,” says Burnes.

For example, members of the Eyewire community can not only see the recent work of other players while they are playing, but can also interact with one another through the community’s forum. There, players have begun calling themselves “Eyewirers” and sharing some of the interesting shapes that they have found while playing, such as neuron segments that resemble an ice skater, a dragon, or an iceberg. In addition, community members have become a trusted resource for identifying bugs and helping the developers improve the game’s performance.

At the end of the day, the game must be fun. If the general public is going to spend their free time playing the game, they’ll only do so if the game entertains them while they are learning and helping microscopists. “Making this experience as fun as possible is really important,” says Burnes. “We’ve started to engage colleagues from the gaming community to understand the tactics of how to make games fun and more engaging.”

Likewise, entertainment value is one of the areas that Ozcan’s group is continuing to focus on before they begin clinical trials of their game. “We’d like to improve our interfaces and make them a little more fun. And continuing to work on our algorithms that will be getting smarter, looking at the same data but being more accurate.”

References

1. Kim, S. B., H. Bae, K.-i. Koo, M. R. Dokmeci, A. Ozcan, and A. Khademhosseini. 2012. Lens-free imaging for biological applications. Journal of Laboratory Automation 17(1):43-49.

2. Jain, V., H. S. Seung, and S. C. Turaga. 2010. Machines that learn to segment images: a crucial technology for connectomics. Current Opinion in Neurobiology 20(5):653-666.

3. Jeon, C.-J., E. Strettoi, and R. H. Masland. 1998. The major cell populations of the mouse retina. The Journal of Neuroscience 18(21):8936-8946.

4. Cooper, S., F. Khatib, A. Treuille, J. Barbero, J. Lee, M. Beenen, A. Leaver-Fay, D. Baker, Z. Popovic, and F. Players. 2010. Predicting protein structures with a multiplayer online game. Nature 466(7307):756-760.

5. Mavandadi, S., S. Dimitrov, S. Feng, F. Yu, U. Sikora, O. Yaglidere, S. Padmanabhan, K. Nielsen, and A. Ozcan. 2012. Distributed medical image analysis and diagnosis through crowd-sourced games: a malaria case study. PLoS ONE 7(5):e37245.

Keywords: microscopy