A survey of 635 completed clinical trials funded by the National Institutes of Health (NIH) has found that fewer than half of the results were published within 30 months of completion, and a third remained unpublished after a median follow-up of 51 months (1).
Comparing data deposited in the NIH’s clinical trials database with records in Medline, the NIH’s citation index of the peer-reviewed biomedical literature, Yale School of Medicine researchers found that of 635 completed trials, 68% had been published after a median follow-up of 51 months, while 32% remained unpublished. They also determined that trials completed in 2007 or 2008 were more likely to be published within 30 months than trials completed before 2007.
“The issue with selective publication, and even selective outcome reporting, is that those studies that are published don’t always include all of the outcomes that were studied,” said Joseph Ross, the study’s lead author.
The team focused on clinical trials funded by the NIH, the largest federal funding agency for research, which supplies $12 billion of public resources each year. The pool of clinical trials was narrowed to those registered after September 30, 2005, and completed by December 31, 2008, leaving investigators at least 30 months to publish their results.
“Here you have research sponsored by the government, participants who are volunteering their time to be a part of research,” said Ross. “There’s really no excuse for not getting these published and getting the results out to the scientific community quickly.”
Unpublished results create other problems as well. Evidence of harmful effects of drugs and devices may never be reported, putting patients at risk. Unpublished results can also lead to redundant effort, as scientists go down paths already explored because they cannot learn from work done by others before them.
“We tried to contact investigators to determine why studies that had been supported weren’t being published,” Ross said. “Almost no one returned our survey; we had a very small response. Nobody wanted to talk about it.”
In the end, the researchers could not determine why the unpublished studies never made it into print. They speculate that those trials may have produced negative findings, run short of resources, or been rejected by journals.
The team had previously used the NIH’s ClinicalTrials.gov registry to examine whether trials financially supported by government and industry published their studies or results selectively (2). They found that most studies omitted important details about their results; many, for example, did not include the trial’s completion date or contact information.
The study did, however, identify areas that need improvement. According to Ross, these include how legislation should enforce reporting rules, what penalties should apply to investigators who fail to report results, and whether drug and behavioral studies should face different requirements. Ross proposes that the NIH broaden its regulations by requiring all trials to report their results within a year, or by withholding further funding until results are published.
This study was itself funded by the NIH, and the agency now plans to follow up with investigators to determine what went wrong. All in all, Ross believes current efforts to publish findings from clinical trials are weak.
“It’s wasteful and inefficient,” said Ross. “It’s not respectful to the patients enrolling in these trials.”
1. Ross JS, Tse T, Zarin DA, Xu H, Zhou L, Krumholz HM. 2012. Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. BMJ. doi: 10.1136/bmj.d7292
2. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. 2009. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Med. doi: 10.1371/journal.pmed.1000144