The 11%-ers
 
Nathan S. Blow, Ph.D.
Editor-in-Chief, BioTechniques
BioTechniques, Vol. 55, No. 4, October 2013, p. 157

When it comes to trying out new methods or implementing a technique in a new lab, trial and error is often the order of the day. This is not to say that the methodology is bad, only that variations in environmental conditions, equipment, or batches of reagents can interfere with the results. For example, even ambient temperature and humidity can influence the outcome of an experiment. Understanding the nuances of experimental design can increase the probability of producing data that are valid and, importantly, reproducible by others.

qPCR is an example of a technique where experimental variation can crop up. Normalization is highly dependent on the source of the sample: a gene that serves as a good normalization reference in one cell type might show variability in another. As it turns out, there is no universal reference gene suitable for all qPCR experiments. And maybe we don't need a universal set of primers, probes, reagents, or reference genes either. Instead, I would argue that it is more critical for researchers at the bench to take command of their experiments by understanding experimental design, identifying factors that may skew results, and then optimizing assays to fit their research goals.
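
To make the stakes of normalization concrete, here is a minimal sketch, in Python, of the standard 2^(-ddCt) relative quantification calculation of Livak and Schmittgen. The Ct values are hypothetical, chosen only to show how a one-cycle shift in a supposedly stable reference gene doubles the apparent fold change.

# Minimal sketch of relative quantification by the 2^(-delta-delta-Ct)
# method (Livak and Schmittgen). All Ct values are hypothetical and
# serve only to illustrate how an unstable reference gene skews results.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative target expression, normalized to a reference gene."""
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** -delta_delta_ct

# Reference gene stable across conditions (Ct 18.0 in both):
print(fold_change(22.0, 18.0, 24.0, 18.0))  # reports 4.0-fold induction

# Same target Cts, but the "reference" gene itself shifts by one cycle
# under treatment (Ct 18.0 -> 19.0): apparent induction doubles to 8.0.
print(fold_change(22.0, 19.0, 24.0, 18.0))  # reports 8.0-fold induction

A validated, stable reference gene, or the geometric mean of several, is what keeps the first number from silently becoming the second.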

In this issue of BioTechniques, our contributing writer Sarah Webb explores the rise of qPCR, including the challenges researchers have faced in implementing qPCR assays and in interpreting published qPCR data. These issues have been so pervasive that a group of leading qPCR developers published a set of guidelines, the MIQE guidelines, for performing qPCR experiments. The recommendations were created in an effort to standardize how qPCR assays are designed and to provide guidance for the novice scientist interested in using this method. But as Sarah discovered during her reporting, only 11% of the qPCR studies published last year cited these guidelines.

I'm sure there are other instances where guidelines recommended by organizations or large working groups have gone unheeded by the greater research community. I suspect a major issue is that we have moved toward a more kit-focused approach to methods, one that does not necessarily foster a deeper understanding of the chemistry, biology, and physics behind the techniques we regularly use. Even the collaborative nature of science makes it possible to avoid learning the nuts and bolts of a new technique, since it is usually easier to find someone who specializes in that method and collaborate with that person on the experiment.

Being able to explain the science behind the experiments calls for a deep understanding of the methods used in a study. Flaws in experimental design or execution lead to studies where outcomes and conclusions are called into question. Could this be one of the reasons for the growing trend of retractions that we see in the scientific literature today? I would guess that a lack of understanding of the methods used, or how to interpret the results produced by those methods, plays a major role.

The challenge here is instilling a mindset where the methods implemented in a study are as important as the results obtained. Now, some might argue that this mindset already exists. But is this true? Why have so many journals moved their materials and methods sections to online supplementary materials? I know the argument is that placing this information online gives authors more space to describe their protocols, a point with which I very much agree. But I worry at the same time that, by relegating all methods and materials to large supplementary files, the underlying message is that methods are not as important as the background, results, or discussion.

To me, the solution is simple: senior scientists should require that their trainees understand how kits and instruments truly work. Journals should require scientists to present their methods and materials in sufficient detail, in print and not exclusively online, so that anyone who wants to can replicate the experiments. And all scientists should make it a point to read and understand experimental guidelines presented by experts in the field. Adoption should not necessarily be required, but the ability to understand and consider such recommendations is important. The next time you design a qPCR assay, think about trying to be in that 11%; in the long run, it could very well help you and those reading your article.

Share your thoughts with us at: [email protected].