Antibody Validation: Whose Job is it?

10/23/2013
Nathan S. Blow, PhD

Three years ago, BioTechniques published an article on antibody validation standards. Recommendations for better validation have been put forth, but what has been the outcome? Nathan Blow looks at the state of antibody validation today and asks who should be responsible for validation.


In late June 2013, a manuscript arrived at the BioTechniques office with the title “Commercial Cdk1 antibodies recognize the centrosomal protein Cep152.”

[Figure: The Rimm lab algorithm for antibody validation]

As the editors started to read through the text and review the data, the importance of the study quickly became clear. The manuscript pointed out a dirty little secret that pervades the life science community: there are “good” antibodies available to researchers, but there are also “bad” antibodies that have not been properly assessed. The authors of the Cdk1 study identified yet another case where improper antibody validation confounds our understanding of basic biology.

Three years earlier, BioTechniques published a review article (1) by Yale University professor David Rimm and his colleagues titled simply, “Antibody Validation.” That article examined the level of validation employed by several major antibody-producing companies prior to releasing their reagents for sale. Of the seven companies Rimm and his team analyzed, only one reached a level of validation that the authors considered to be the “gold standard,” and they demonstrated several cases of antibodies failing to work as described.

Rimm’s review spawned a number of white papers and editorials on the subject of antibody validation, and it was one of the most highly downloaded articles that year. But the questions of how best to validate an antibody for a particular application and whether validation efforts should be the responsibility of manufacturers or researchers still appear unanswered.

The initial response

Following the publication of the review article, Rimm heard from a number of researchers who also had experienced challenges with antibodies in their labs. “Unfortunately though, not much has changed in the past couple of years,” he concludes. His lab regularly validates the antibodies that they use—not by Western blot, the most commonly used and easiest of validation assays, but with the more thorough transfection and knockdown experiments suggested in his 2010 article.

Rimm, like Cdk1 author Kai Johnsson, has discovered many antibodies that don’t work as advertised. In 2011, his group was interested in studying the localization patterns of the estrogen receptor (ER) in breast cancer. For the past decade, it had been suggested that cytoplasmic localization of ER might be a prognostic indicator of breast cancer and a potential therapeutic marker during treatment. Rimm’s group tested this theory by examining thousands of breast cancer and normal tissue samples. However, when they assayed a panel of commonly used ER-specific antibodies, the results were surprising—only one of the antibodies demonstrated strong cytoplasmic staining for ER, while four others showed cytoplasmic staining in less than 1.5% of cases, suggesting a possible staining artifact. The one antibody that showed strong cytoplasmic staining for ER failed the stringent validation tests Rimm’s team performed to assess antibody specificity, calling into question the cytoplasmic localization of ER.

The ER localization study was published in the journal Clinical Cancer Research (2), but not before a struggle with reviewers who wanted more data and explanation, according to Rimm. Even after the manuscript was accepted for publication, the journal ran a commentary in the same issue by Ellis Levin from the Long Beach VA Medical Center that suggested a range of possible explanations for the study’s findings beyond faulty antibodies. It was clear then that the burden of proof for the validity of the reagents fell on the shoulders of the researchers.

Localization issues

Kai Johnsson is a professor at the École Polytechnique Fédérale de Lausanne (EPFL). His Cdk1 manuscript (3) presented a similar story to Rimm’s ER article. Cdk1 is a master cell cycle regulator, and understanding its localization during the cell cycle is important. But past reports presented conflicting data on Cdk1’s localization patterns and whether or not its recruitment to the centrosome involves other proteins.

Johnsson and his team looked at two widely used Cdk1-specific antibodies and presented data indicating that they cross-react with the centrosomal protein Cep152. When these antibodies are tested with Western blots, cross-reactivity is not a significant issue since there is a size difference between Cdk1 and Cep152, and Westerns tend to show some non-specific bands depending on the concentration of antibody used. But for other assays, such as immunofluorescence, this non-specific binding can present serious problems.

Johnsson and his co-authors determined that the suspect antibodies were raised against partially overlapping immunogens of Cdk1 with amino acid compositions similar to Cep152, providing a clear explanation for the cross-reactivity. The good news is that Johnsson’s team identified three other antibodies that appear to recognize Cdk1 specifically, providing researchers with a validated set of antibodies to use in their Cdk1 studies.

The final lines of Johnsson’s article highlight the challenges of working with antibodies today: “Our work also serves as a reminder that numerous independent control experiments are needed to verify antibody specificity in immunofluorescence and other antibody-based experiments.”

But whose job is validation?

“We have spoken to companies and let them know about issues with particular antibodies,” says Rimm. But the response he gets back is often less than satisfying. “They will say, ‘We licensed this from someone else, and we don’t know the exact concentrations, but we will resend another tube to test.’” Such interactions raise the question of whether the manufacturer or the researcher is responsible for antibody validation.

Companies often supply Western blot data as proof of antibody effectiveness. But according to Rimm, it is easy to get a nice clean band on a Western using lower concentrations of antibody. And Western blot results do not necessarily predict how an antibody will perform in immunofluorescence experiments.

For Rimm and others, the message is clear—investigators need to validate the antibodies they use if this problem is ever going to be solved. Researchers also need outlets, databases, or repositories to make their findings public so other scientists understand which antibodies to avoid. But those resources do not exist at the moment.

Rimm hints at an even darker possibility when it comes to a lack of antibody validation—this problem could in part explain some of the non-reproducible studies cropping up in the scientific literature. In a 2012 commentary in the journal Nature, C. Glenn Begley and Lee Ellis referred to scientists at the pharmaceutical company Amgen who looked at 53 preclinical papers and were able to reproduce the findings in only 11% of the articles. While improper antibody validation is not likely the culprit in all of these cases, Rimm and others argue that results such as those from the ER localization study and the Cdk1 article point to a lack of antibody validation standards and present a significant hurdle when it comes to quickly translating preclinical studies into clinical applications.

References

1. Bordeaux J, Welsh A, Agarwal S, Killiam E, Baquero M, Hanna J, Anagnostou V, Rimm D. Antibody validation. Biotechniques. 2010 Mar;48(3):197-209.

2. Welsh AW, Lannin DR, Young GS, Sherman ME, Figueroa JD, Henry NL, Ryden L, Kim C, Love RR, Schiff R, Rimm DL. Cytoplasmic estrogen receptor in breast cancer. Clin Cancer Res. 2012 Jan 1;18(1):118-26.

3. Lukinavičius G, Lavogina D, Gönczy P, Johnsson K. Commercial Cdk1 antibodies recognize the centrosomal protein Cep152. Biotechniques. 2013 Sep;55(3):111-4.

Keywords: antibody validation