SUPPORTING MORE RELIABLE RESULTS
 
Sarah Webb, Ph.D.
BioTechniques, Vol. 59, No. 2, August 2015, pp. 57–61
Abstract

Studies are finding that many groundbreaking published results with implications for human health and disease cannot be replicated. Sarah Webb explores how reproducibility issues in basic science are creating bumps on the road to the clinic.

Glenn Begley was leading the hematology/oncology group at Amgen when he first noticed the problem. As part of his record-keeping efforts, he would document each project his group initiated, along with why it was terminated. In reviewing those records, he saw that nearly 90 percent of studies based on published literature findings, such as novel pathways and oncogenes, could not be reproduced by the scientists at Amgen.

It is standard practice among large biotechnology and pharmaceutical companies such as Amgen to repeat experiments from promising academic studies that seem ripe for further development. In this case, the Amgen researchers exchanged reagents, arranged visits to the laboratories that had carried out the original studies, and even asked the original researchers to repeat their own experiments. In the end, the extra effort did not change the low reproducibility rates Begley had uncovered.

“We were shocked, frankly, to find that they were unable to reproduce their own experiments published in Nature, Science, and Cell,” he says.

In 2012, Begley co-authored a Commentary in the journal Nature (1) with Lee Ellis of M.D. Anderson Cancer Center highlighting the reproducibility problems they were seeing. Begley and Ellis were not the first to bring up the difficulties of moving basic research findings to clinical settings—a group at Bayer HealthCare had published an article expressing similar concerns in 2011 (2). The Bayer group reported that 75–80 percent of its projects had been terminated because basic research findings couldn't be repeated. Initially, Begley says, he and Ellis faced sharp criticism for their commentary and its implications. But in many ways such reports should not have come as a great shock to the scientific community—basic researchers themselves had been raising concerns for years about reproducibility issues associated with cell lines and antibodies.

Focus on reproducibility

Problems with reproducing biomedical studies have led to growing questions about the quality of basic research in general. Over the last several years, analyses such as those of Begley and the Bayer group have suggested that anywhere from 50 to 90 percent of groundbreaking results reported in top-tier scientific journals are not reproducible for one reason or another.

Beyond the expense of follow-up experiments, chasing seemingly promising but faulty scientific leads slows scientific progress. A splashy result based on faulty experimentation or analysis can lead groups of researchers to pursue lines of inquiry that will lead nowhere. And efforts to translate interesting basic research into the clinic can be stymied when those initial results don't hold up, says Begley, who is now Chief Scientific Officer at TetraLogic Pharmaceuticals in Malvern, Pennsylvania. In early June, a paper from the Global Biological Standards Institute, a Washington, DC-based non-profit focused on improving research quality, estimated that 50 percent of preclinical research is not reproducible, work that cost an estimated $28 billion to produce (3).

For smaller companies, the stakes are especially high. According to Begley, large companies typically don't take literature findings at face value; they repeat experiments before trying to build on them. But small companies often don't have the time or resources to do replicative studies before trying to commercialize a discovery. So, just where does the reproducibility issue start, and how can it be changed?

Some contend that at the core, biology itself is complex and messy, and biological systems have inherent variability and noise. But when used to dismiss reproducibility problems, “that argument is somewhere between voodoo and the Middle Ages,” says John Ioannidis of Stanford University, who believes that line of reasoning grossly misrepresents the scientific process.

Anne Plant of the National Institute of Standards and Technology (NIST) thinks that labeling the problem as the “reproducibility” crisis actually oversimplifies the situation since reproducibility is in fact just one piece of the puzzle. Even if two research groups arrive at the same answer, that doesn't mean that it's right, she adds, and discrepancies between findings don't always mean that one is right and the other wrong. In a position paper published in Nature Methods, she and her NIST colleagues highlighted six elements—one of which is reproducibility—that contribute to the accuracy of a scientific measurement (4). The key, she says, is for researchers to know enough detail about experiments and measurements to understand why results might differ.
