Molecular Biology Techniques Q&A

This month's question from the Molecular Biology Forums (online at molecularbiology.forums.biotechniques.com) comes from the “Real-Time qPCR/qRT-PCR Methods” section. Entries have been edited for concision and clarity. Mentions of specific products and manufacturers have been retained from the original posts, but do not represent endorsements by, or the opinions of, BioTechniques.
Do multiple copies of a gene affect efficiency calculations in qPCR? (Thread 32843)
Q My RT-qPCR shows high efficiency (130%–140%) for a bacterial 16S rRNA target with a copy number of 6. The results are fine for all other targets, with efficiencies between 90% and 110%, the range recommended by various manufacturers. Could high calculated efficiencies such as these be caused by the presence of multiple copies of the target? If so, how can I account for this with comparative Cq gene expression methods?
A If the transcript is so highly expressed that the Cq value is very low, that could be a problem. Another possibility is inhibition in the undiluted sample in a dilution series, which would raise that Cq and decrease Cq spacing between samples, causing an abnormally high efficiency calculation. However, you would expect this to also occur with other primer sets. Both of these problems could be solved by diluting the templates.
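The effect described above can be sketched numerically. In this sketch (all Cq values are hypothetical, assuming a 10-fold dilution series), efficiency is calculated from the slope of Cq versus log10 input as E = 10^(−1/slope) − 1; delaying the undiluted point by inhibition compresses the Cq spacing, flattens the slope, and inflates the apparent efficiency into exactly the 130%–140% range the poster reports.

```python
# Sketch: how inhibition of the undiluted sample inflates apparent qPCR
# efficiency. All Cq values below are hypothetical, not from the thread.

def slope(xs, ys):
    """Least-squares slope of ys versus xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def efficiency(log10_input, cq):
    """Efficiency from a standard curve: E = 10^(-1/slope) - 1."""
    return 10 ** (-1 / slope(log10_input, cq)) - 1

log10_input  = [0, -1, -2, -3]              # 10-fold dilution series
ideal_cq     = [5.00, 8.32, 11.64, 14.96]   # ~3.32 cycles per 10-fold step
inhibited_cq = [7.00, 8.32, 11.64, 14.96]   # undiluted point delayed by inhibition

print(f"ideal:     {efficiency(log10_input, ideal_cq):.0%}")      # ~100%
print(f"inhibited: {efficiency(log10_input, inhibited_cq):.0%}")  # ~133%
```

Diluting the templates past the inhibited point restores the expected spacing, which is why dilution is suggested as the fix.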
Q The Cq value for the RT+ reaction is 5, and the Cq value for the reaction without RT is 27. If I try further diluting to get the Cq to 10–15, will this affect comparison with low copy number genes?
A You should try to get the efficiency of all reactions as close to 100% as possible so that amplification is equal for all targets, regardless of whether one template was diluted. Some recommend using a reference gene with lower expression to avoid this issue altogether. At very low Cq, a larger portion of the detected PCR product is actually the original template and its linear amplification. The amplification is therefore not comparable: the housekeeping Cq 5 data reflect both linear and geometric amplification, while the amplification of the diluted samples, and of less abundant targets, is entirely geometric.
Your negative control background Cq of 27 is not a problem at all if your samples have a Cq of 20 or below; any contribution to the data coming up at Cq 27 is below the level of error of the procedure.
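The arithmetic behind that statement: assuming roughly 100% efficiency (a doubling per cycle), a template first detected at Cq 27 contributes about 2^(20−27) of the signal of one detected at Cq 20, i.e. under 1%. The Cq values come from the thread; the perfect-doubling assumption is mine.

```python
# Fraction of signal contributed by a background template relative to the
# sample, assuming perfect doubling per cycle (100% efficiency).
cq_sample, cq_background = 20, 27
fraction = 2 ** (cq_sample - cq_background)  # 2^-7 = 1/128
print(f"background contribution: {fraction:.2%}")  # ~0.78%
```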
Q I thought a consistent input of cDNA was required?
A Of course, it is necessary to add an equal amount of input template to all samples for one primer set. When you move to another primer set, the conditions are different since primer sets can work with varying efficiencies and amplify different length products. The purpose of the dilution series and efficiency calculation is to demonstrate that, even though the primers are different, they all either work with optimal and equal efficiency or differ consistently. This also shows the range where the efficiency is consistent.
Why shouldn't you be able to dilute, as long as you do it equally across all samples? The experimental target samples are expressed at different levels, as if at different dilutions, and those results are considered valid because the standard dilutions determined the range through which the PCR efficiency remains the same. Likewise, as long as the dilution used for the reference gene gives results within the range of good PCR efficiency, the PCR should give accurate results. Quantitatively, it doesn't matter if the Cq values increase in the housekeeping genes because this is a semi-quantitative experiment and all the samples will be diluted by the same amount. Since the data are all relative, and not absolute, this does not matter.
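The claim that an equal dilution across all samples cancels out can be checked with a toy ΔΔCq calculation (all Cq values hypothetical; the ΔΔCq form itself is standard, assuming ~100% efficiency for both assays): shifting every reference-gene Cq by the same dilution offset leaves the final ratio unchanged.

```python
# Toy check: diluting the reference-gene template equally in every sample
# shifts all of its Cq values by the same amount and leaves the
# delta-delta-Cq ratio unchanged. All Cq values are hypothetical.

def ddcq_ratio(cq_target_test, cq_ref_test, cq_target_ctrl, cq_ref_ctrl):
    ddcq = (cq_target_test - cq_ref_test) - (cq_target_ctrl - cq_ref_ctrl)
    return 2 ** -ddcq  # assumes 100% efficiency for both assays

shift = 3.0  # e.g. a 1:8 dilution of the reference-gene reactions
undiluted = ddcq_ratio(22.0, 15.0, 24.0, 15.2)
diluted   = ddcq_ratio(22.0, 15.0 + shift, 24.0, 15.2 + shift)
print(undiluted, diluted)  # identical ratios
```

The shift cancels because it appears once in each ΔCq and ΔΔCq subtracts them; this is the sense in which the data are relative rather than absolute.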
Q This makes sense to me in terms of standardization, but how does this correction help between targets? If you input 10 ng of target A with 100% efficiency, and 0.1 ng of target B (high abundance) with 100% efficiency to keep them both within their dynamic range, then wouldn't the differences in Cq come from differences in the amount of input material?
A In this case, your signal for 16S is so loud that you need to reduce its cDNA input. Because your samples and standards work well for all of your targets except 16S, you need to dilute each of the standards 1:1000 more than your other targets. You need two sets of sample and standard dilutions: one for your well-behaved targets and another devoted entirely to your 16S. This does not affect relative quantification calculations.
You cannot compare levels of Target A to levels of Target B using Cq value comparisons; each different target reaction has different kinetics, so they are not directly comparable on a Cq basis.
The use of a reference gene to correct for sample loading is only as good as the reference gene you use. It is better to know exact and identical amounts of what has been loaded per reaction and that the amount added falls within the target's standard curve. When such a method is followed, a perfect reference gene would then give the same Cq value in all samples.
Q In many quantitation methods, the equation incorporates both the Cq of the housekeeper and the target. The reference gene should be expressed consistently, but if the input is 1000-fold lower than the target, wouldn't the expression be affected by the amount of input compared to the target?
I was initially going to use a delta Cq method, which assumes a number of things like equal efficiency and equal input material. But it looks like the difference between target and reference doesn't matter for the Pfaffl equation. If I use the Pfaffl equation, can I avoid this issue altogether?
A Yes, when using the Pfaffl equation, everything is relative. Whatever Cq scale you use for a target and whatever Cq scale you use for the reference genes does not matter, as long as they were all initially evaluated within their log-linear, high-efficiency dynamic ranges of amplification.
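For reference, the Pfaffl ratio uses each assay's own measured efficiency, which is why target and reference can sit on different Cq scales. The function below is a minimal sketch of that equation, not code from the thread; E is the per-cycle amplification factor (2.0 at 100% efficiency) and ΔCq is Cq(control) − Cq(sample).

```python
# Minimal sketch of the Pfaffl relative-expression ratio:
#   ratio = E_target^dCq_target / E_ref^dCq_ref
# where dCq = Cq(control) - Cq(sample) and E is the per-cycle
# amplification factor (2.0 at 100% efficiency).

def pfaffl_ratio(e_target, dcq_target, e_ref, dcq_ref):
    return (e_target ** dcq_target) / (e_ref ** dcq_ref)

# With both assays at 100% efficiency, a target shifted by 3 cycles
# against a reference shifted by 1 cycle gives a 4-fold change:
print(pfaffl_ratio(2.0, 3.0, 2.0, 1.0))  # 4.0
```

Because each assay carries its own E, a reference assay at, say, E = 1.95 is weighted differently from a target at E = 2.0, which is how the equation absorbs consistent efficiency differences between primer sets.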
A You might also want to read what the MIQE Guidelines say on this point and also how the Biogazelle software approaches your question.