Getting over qPCR's technical hurdles
 
Sarah Webb, Ph.D.
BioTechniques, Vol. 55, No. 4, October 2013, pp. 165–168

Even though it has become a “workhorse” technique for quantifying nucleic acids, qPCR continues to be plagued by problems of reproducibility and reliability. Yet when carefully designed, optimized, and validated, qPCR experiments are incredibly accurate, according to Stephen Bustin of Anglia Ruskin University in Chelmsford, UK. However, in far too many cases, researchers don't carefully optimize and validate their assays or report enough information on reagents, primers and procedures for the research community to evaluate their methods.

When Bustin started using qPCR for gene expression profiling in cancer metastases during the late 1990s, he quickly discovered the pitfalls and problems associated with qPCR data, going so far as to write an early review article bringing those issues out into the open. As a result, he was asked to serve as an expert witness in a high-profile UK court case, where his analysis exposed flaws in qPCR data that had wrongly linked the MMR vaccine with autism.

Years of discussions of these technical challenges among Bustin and like-minded colleagues eventually culminated in a 2009 article in the journal Clinical Chemistry that provides a set of guidelines for performing qPCR experiments, known as the minimum information for publication of quantitative real-time PCR experiments (MIQE) (1). Four years later, though, only 11% of papers that report qPCR experiments cite those guidelines.





Quality of the sample

qPCR is a deceptively simple technique, according to Jo Vandesompele from Ghent University in Belgium, a MIQE co-author. “My nine-year-old daughter could do these experiments. They're powerful, easy, and fast, but there's so much room for error and misinterpretation.”

The potential problems can crop up long before any steps to quantify nucleic acids are performed. Issues including how and when a sample is taken and the conditions used for purification, stabilization, and storage can all factor into the results of an experiment, says Mikael Kubista of the TATAA Biocenter, another MIQE co-author.

Sampling is particularly important in clinical and diagnostic studies since RNA is less chemically stable, more challenging to work with, and more sensitive to sampling and storage conditions than DNA or protein. RNA expression can vary depending on an individual's environment or physical condition just before a blood draw, for example. In addition, many studies examine stored samples, but anticoagulant treatment of blood samples and formalin fixation or paraffin embedding of tissue/tumor samples can damage RNA. As a result, expression levels measured in these samples may not accurately reflect what's going on in a live subject, notes Kubista.

In 2009, the European Union launched SPIDIA, a four-year, 13 million euro project involving both companies and research institutions, to examine pre-analytic issues in molecular diagnostics. Across Europe, researchers performed ring trials, analyses in which different laboratories follow the same standard protocol, and then compared the results. In one study, 30% of the labs had problems with their RNA extraction, Kubista says.

To figure out where variability arises in a study as it's being designed, Kubista and his colleagues developed a procedure for a fully nested pilot study. For example, a pilot study examining RNA levels in the liver tissue of cows might include liver samples from three different cows. Those liver samples would be divided into three different parts for separate extraction of the nucleic acids. Each sample would then be divided again to examine reverse transcription, and after a final sample division, researchers would examine RNA expression with qPCR. Using statistics from those studies, researchers now have data that show where variation is likely to occur and where procedures need improvement.
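To make that design concrete, here is a minimal Python sketch of how the variance contributed at each stage of such a nested pilot might be estimated. It simulates a hypothetical, balanced 3 x 3 x 3 x 3 layout (animals, extractions, reverse transcriptions, qPCR replicates) and applies a standard method-of-moments calculation; the Cq values and standard deviations are illustrative assumptions, not data from the TATAA study.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical balanced nested pilot: 3 animals x 3 RNA extractions x
# 3 reverse transcriptions x 3 qPCR replicates. The standard deviations
# (in Cq units) are illustrative assumptions used only to simulate data.
sd_animal, sd_extract, sd_rt, sd_qpcr = 0.8, 0.5, 0.3, 0.15
A, E, R, Q = 3, 3, 3, 3

cq = (25.0
      + rng.normal(0, sd_animal,  (A, 1, 1, 1))
      + rng.normal(0, sd_extract, (A, E, 1, 1))
      + rng.normal(0, sd_rt,      (A, E, R, 1))
      + rng.normal(0, sd_qpcr,    (A, E, R, Q)))

# Method-of-moments variance components for a balanced nested design:
# work from the bottom up, subtracting the averaged noise of the level below.
var_q = cq.var(axis=3, ddof=1).mean()
var_r = cq.mean(axis=3).var(axis=2, ddof=1).mean() - var_q / Q
var_e = cq.mean(axis=(2, 3)).var(axis=1, ddof=1).mean() - var_r / R - var_q / (R * Q)
var_a = (cq.mean(axis=(1, 2, 3)).var(ddof=1)
         - var_e / E - var_r / (E * R) - var_q / (E * R * Q))

for name, v in [("animal", var_a), ("extraction", var_e),
                ("reverse transcription", var_r), ("qPCR replicate", var_q)]:
    print(f"{name:>22s} variance: {max(v, 0.0):.3f}")

In a real pilot, whichever step contributes the largest variance component is the one worth spending extra replicates, or better protocols, on.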

An example of a fully nested study to determine variation in qPCR experimental conditions; all steps are performed in triplicate.

Optimizing success

Another pressure point in qPCR is the assay itself. Primers need to be well-designed, and the PCR reaction must amplify the target sequence without also amplifying pseudogenes or other genomic DNA. All this means that researchers need to understand and optimize the efficiency of their PCR reactions—ideally, PCR efficiency should be between 90% and 105%. But this is something many researchers fail to do.
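For readers unfamiliar with how that efficiency figure is derived, the following Python sketch fits a standard curve to a ten-fold dilution series and converts its slope into an amplification efficiency. The Cq values here are invented for the example; a real assay would use its own dilution points and replicates.

import numpy as np

# Hypothetical 10-fold dilution series of a standard (copies per reaction)
# and the mean Cq observed at each dilution -- illustrative numbers only.
copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
cq     = np.array([15.1, 18.5, 21.9, 25.3, 28.8])

# Fit Cq against log10(input); the slope of this standard curve gives the
# amplification efficiency: E = 10^(-1/slope) - 1, where 1.0 means 100%.
slope, intercept = np.polyfit(np.log10(copies), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1
r_squared  = np.corrcoef(np.log10(copies), cq)[0, 1] ** 2

print(f"slope = {slope:.3f}, efficiency = {efficiency:.1%}, R^2 = {r_squared:.3f}")
# A slope near -3.32 corresponds to ~100% efficiency; the 90%-105% window
# cited above maps to slopes of roughly -3.6 to -3.2.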

One of the reasons researchers have largely ignored these issues could be the “kit-based culture” surrounding PCR, says Jim Huggett of the Laboratory of the Government Chemist (LGC) in the United Kingdom, another MIQE co-author. As more and more reagents come in a box or as pre-prepared mixes, researchers get out of the habit of reporting basic details such as primer sequences or exactly how a gene was reverse transcribed. After the MIQE guidelines were first published, some companies expressed concerns that their proprietary primer sequences would have to be disclosed. So a 2011 amendment (2) offered ways for researchers to report their procedures without revealing the exact sequences.

History should not always be a guide, however. When it comes to developing new assays, Bustin cautions that primers from papers that are 8 or 10 years old are unlikely to be optimally efficient, and it can be maddeningly difficult to tweak existing primers to improve efficiency. With all of the tools currently available for primer design, it's often easier to start from scratch with a new set of primers optimized for your assay.

When you need to go digital

Digital PCR is another twist on the idea of quantifying DNA and RNA. Though unlikely to replace qPCR, says Huggett, digital PCR offers some advantages that may give the technique an edge in certain studies.

Rather than determining the concentration of nucleic acids by amplifying a single bulk sample, digital PCR partitions the nucleic acids into many individual reactions. Researchers can then quantify the amount of a target sequence in the original sample by counting the number of partitions that contain the sequence of interest. The technique has a far more limited dynamic range than qPCR but generally gives more precise results, which should allow researchers to measure smaller differences in concentration. Though it's still early, Huggett has also noticed greater comparability and reproducibility of data between labs with digital PCR.
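The counting step rests on a Poisson correction, because a positive partition may hold more than one copy of the target. A minimal Python sketch of that arithmetic follows, using made-up partition counts and an assumed droplet volume rather than figures from any particular instrument.

import math

# Hypothetical droplet digital PCR readout -- illustrative numbers only.
total_partitions    = 20000
positive_partitions = 3150
partition_volume_nl = 0.85            # assumed droplet volume in nanoliters

# Poisson correction: the mean number of copies per partition is
# lambda = -ln(1 - fraction_positive).
p   = positive_partitions / total_partitions
lam = -math.log(1.0 - p)

copies_per_partition = lam
copies_per_ul = lam / (partition_volume_nl * 1e-3)   # convert nl to ul

print(f"fraction positive = {p:.3f}")
print(f"mean copies per partition = {copies_per_partition:.3f}")
print(f"estimated concentration  = {copies_per_ul:,.0f} copies/ul")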

But that's not to say that digital PCR is easier or requires less calibration and validation, Huggett cautions, and its narrower dynamic range means it may not be appropriate for some types of analyses. He and many of the other researchers who developed the original MIQE guidelines recently published an equally extensive checklist for digital PCR studies (4). “With digital we wanted to get there early,” explains Huggett, so guidelines would be in place as researchers started to adopt the technology.

French researchers and scientists from RainDance Technologies, an instrument manufacturer specializing in droplet digital PCR, recently collaborated on a digital PCR study to quantify and distinguish seven different KRAS mutations in circulating tumor DNA from the plasma of patients with metastatic colorectal cancer (5). Because these mutant sequences often make up less than 1% of the DNA in plasma, qPCR did not provide the sensitivity needed for this type of assay, says CNRS researcher Valerie Taly of the University of Paris Sorbonne Cité, lead author on the study. -S.W.

Nucleic acids purified from a clinical sample can carry over molecules that either completely block or slow the activity of polymerases. These inhibitors, which are very common, can hamper PCR efficiency, says Huggett. Inhibitors that merely delay polymerase activity can be particularly problematic because they lead to an underestimate of the amount of DNA or RNA in a sample. In addition, if you're looking at mRNA, assays can amplify pseudogenes or genomic DNA alongside your sequences of interest, producing inaccurate measurements, Kubista says.

Making normalization the norm

In a qPCR assay, one common way to measure changes in gene expression is to normalize your gene of interest against another gene that's stably expressed. But the inappropriate use of a single, unvalidated reference gene remains the most common flaw in current qPCR studies.

It's not unusual to see a paper where researchers report 4- or 5-fold differences in the expression levels of 3 or 4 genes, Bustin says. The problem is that if the comparison is to a single reference gene that already shows a 5-fold variability in expression, the results are then likely to be inaccurate or unreliable.

In 2002, Vandesompele and his colleagues reported for the first time that commonly used reference genes can have variable expression levels in different cells or cell types, potentially introducing error into gene expression studies. To overcome this issue, Vandesompele developed a method for normalizing qPCR data against multiple reference genes, producing more accurate results. The use of multiple reference genes remains the gold standard within the qPCR community, Vandesompele says.
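In practice, normalizing against several reference genes usually means dividing the target's relative quantity by the geometric mean of the reference genes' relative quantities. Here is a minimal Python sketch of that arithmetic, assuming 100% efficiency for every assay and using invented Cq values; a real analysis would use assay-specific efficiencies and validated reference genes.

import math

# Hypothetical Cq values for one sample versus a control sample -- illustrative only.
eff = 2.0  # assumes ~100% efficiency for every assay; substitute measured values if known

cq_control = {"target": 24.0, "ref1": 20.0, "ref2": 22.5, "ref3": 19.0}
cq_sample  = {"target": 21.5, "ref1": 20.2, "ref2": 22.3, "ref3": 19.1}

def relative_quantity(gene):
    """Fold change of a gene in the sample relative to the control: efficiency^(delta Cq)."""
    return eff ** (cq_control[gene] - cq_sample[gene])

refs = ["ref1", "ref2", "ref3"]
# Normalization factor: geometric mean of the reference genes' relative quantities.
norm_factor = math.prod(relative_quantity(g) for g in refs) ** (1.0 / len(refs))

normalized_fold_change = relative_quantity("target") / norm_factor
print(f"normalization factor = {norm_factor:.2f}")
print(f"normalized fold change of target = {normalized_fold_change:.2f}")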

Scientists also can't assume reference genes used in an earlier, similar study will work in their experiment. As part of their ongoing research into the glycobiology of ovarian cancer, Francis Jacob of the University Hospital in Basel and his colleagues were looking for reference genes to use for their expression studies. In a March PLOS ONE article (3), they examined the expression levels of various reference genes across a set of ovarian and colon cancer cell lines. The most commonly used reference gene based on their literature review, GAPDH, proved to be the least stably expressed. Although the researchers did identify three stable candidate reference genes for their colon cancer and ovarian cancer cell lines, only one of these showed up in both types of cell lines.

Finding a friend

The 85-item checklist within the MIQE guidelines might seem a tad overwhelming at first glance. But Bustin stresses that these procedures are something all researchers can do. An experienced researcher can develop and validate a qPCR assay within two weeks, he estimates, while someone new to the process might need a month.

The other thing to keep in mind is that the MIQE guidelines are not rigid rules for how experiments must be done—they're principles for improving experimental workflow and using good controls, says Vandesompele. He and Bustin are quick to note that the most important principle of the MIQE guidelines is transparency; researchers should report what they've done in an experiment and why. Experimental complications might mean that your PCR efficiency doesn't reach optimal levels or that you can only use one validated reference gene. If that information is reported in the paper, reviewers and readers can better evaluate your findings or even replicate your experiments on their own.

A variety of resources is available to assist researchers as they design experiments and analyze data. Instrument manufacturers have embraced the MIQE guidelines and have provided assistance with user training. For example, a couple of years ago Bio-Rad organized road shows explaining the guidelines and how to perform MIQE-compatible assays. In addition, manufacturers including Bio-Rad, Life Technologies, and Roche offer information and training tools on their websites.

TATAA Biocenter also offers hands-on training in qPCR methods. Each year 700 researchers participate in these courses, which are held regularly in TATAA's permanent labs in Stockholm and Prague, as well as at other locations worldwide. TATAA also designs new assays for companies and hospitals and trains the personnel who will carry them out.

In addition to formal courses, a number of open-source algorithms and tools are available to help with qPCR data analysis. qPCR data is typically reported in a universal format called RDML, and instrument and software vendors are now making their products compatible with this format, says Vandesompele, who is also CEO of Biogazelle, a company that sells qbase+ software for qPCR data analysis.

Even though the process is moving more slowly than they would like, qPCR researchers are optimistic that the community is moving toward wider adoption of the MIQE guidelines. And that will represent an important turning point for qPCR. Instead of worrying about whether a study used the appropriate reference gene, Bustin says, researchers can focus on the biological and clinical details. “As long as you know the technical issues are taken care of,” he says, “it allows you to transform the debate to a much higher level.”

References
1. Bustin, S.A., et al. 2009. The MIQE Guidelines: Minimum Information for Publication of Quantitative Real-Time PCR Experiments. Clinical Chemistry 55:611-622.

2. Bustin, S.A., et al. 2011. Primer Sequence Disclosure: A Clarification of the MIQE Guidelines. Clinical Chemistry 57:919-921.

3. Jacob, F., et al. 2013. Careful Selection of Reference Genes Is Required for Reliable Performance of RT-qPCR in Human Normal and Cancer Cell Lines. PLOS ONE 8:e59180.

4. Huggett, J.F., et al. 2013. The Digital MIQE Guidelines: Minimum Information for Publication of Quantitative Digital PCR Experiments. Clinical Chemistry 59:892-902.

5. Taly, V., et al. 2013. Multiplex Picodroplet Digital PCR to Detect KRAS Mutations in Circulating DNA from the Plasma of Colorectal Cancer Patients. Clinical Chemistry 59:11.