Over 60,000 COVID-19-related articles have been published in just under a year. How can the peer-review and publication processes handle such an ‘infodemic’?
The COVID-19 pandemic has exacerbated long-standing problems with scientific publication in regard to accessibility, transparency and accountability, according to Ganesh Mani (Carnegie Mellon University, PA, USA) and Tom Hope (Allen Institute for AI, WA, USA), in their recent publication in Patterns.
The response from the scientific community to the pandemic has been immense, with over 60,000 COVID-19-related publications returned by a PubMed search (as of 6 October 2020), spawning what the World Health Organization has termed an “infodemic.”
“Given the ever-increasing research volume, it will be hard for humans alone to keep pace,” commented the authors, who believe that it is time for the implementation of new policies and technologies to ensure the relevance and reliability of scientific information.
At the same time, the average time taken for peer review and publication of articles in the field of virology has dropped from 117 to 60 days. This rapid surge of information has allowed unreliable information to slip through the net, as in the case of hydroxychloroquine, which was granted Emergency Use Authorization by the US FDA for the treatment of COVID-19 before the authorization was subsequently revoked.
News of the retractions of hydroxychloroquine and COVID-19 studies highlights that data quality remains a problem for scientific integrity and reproducibility.
“We’re going to have that same conversation with vaccines,” Mani said. “We’re going to have a lot of debates.”
The authors’ suggestions for improving these processes include identifying the best reviewers in a given field, sharing reviewer comments and linking papers to related articles, retraction sites or legal rulings.
Additionally, they believe that machine learning has a place in the publication process. Previous attempts to incorporate it have been hampered by humans’ use of figurative and ambiguous language. To avoid this, authors might be required to submit two versions of a research paper: one with more imaginative language for human consumption, and one with more uniform language for machine consumption.
“Putting such infrastructure in place will help society with the next strategic surprise or grand challenge, which is likely to be equally, if not more, knowledge intensive,” the authors concluded.