A 2016 survey by the journal Nature put stark numbers to the scope of science’s reproducibility problems. More than 70% of researchers across STEM fields said they’d tried and failed to replicate another scientist’s findings, with 52% of respondents agreeing that there is a “significant” reproducibility crisis.
A new study has now sought to update Nature’s nearly decade-old numbers while also homing in on a specific field: biomedicine. In a survey of 1,630 biomedical researchers from around the world, including 819 who work in clinical research, 72% of respondents agreed that the field is facing a reproducibility crisis.
What’s more, 62% of the researchers blamed a culture of “publish or perish” for the crisis, with the vast majority saying little funding is available to replicate findings and just 16% saying their institution had procedures in place to improve research reproducibility.
The results were published in PLOS Biology on Nov. 5.
The findings were “validation of what we anticipated,” Kelly Cobey, Ph.D., a metascientist studying research and publishing practices at the University of Ottawa who led the study, told Fierce Biotech in an interview.
Broadly speaking, she said, the results are in line with those of the prior Nature survey as well as studies in other fields.
The reproducibility crisis is made up of two distinct but interrelated problems, Cobey said. The first is that many scientific papers fail to communicate their methods clearly or to provide adequate access to data and code, leaving other researchers unable to tell exactly what the authors did, much less interpret or repeat the work.
The second problem, which partially stems from the first, is that replications of past studies often produce different results than the originals.
Studies to replicate past results are “not done, and when they are done it often doesn’t go well in many, many disciplines,” Cobey said.
Fixing the reproducibility crisis requires researchers to have the time, funding and incentive to change their behavior, Cobey explained. Right now, scientists are rewarded with promotions and competitive grants if they publish many papers in high-profile journals, a “publish or perish” culture.
Instead, research institutions and funders could take a quality-first approach, Cobey said, examining the impact that a scientist’s research has had on their field. Funding agencies could also implement requirements for recipients to write papers that clearly communicate how the research was done, in a way that other scientists could easily understand and replicate, she added.
For clinical trials, part of a reproducible study design is ensuring that the patient population is diverse and reflects real-world demographics. Otherwise, results found in one group of patients might not translate to others. Cobey highlighted heart health as an example, where data often comes from men—and even preclinical studies have mainly used male mice.
And even if incentives change, researchers still need to know how to design reproducible studies and communicate the results in clear, transparent language. This kind of instruction is currently lacking in many institutions, according to the survey results.
“There's not a lot of training in reproducibility,” Cobey said. “It's generally something that's learned on the job, and institutions, researchers felt, don't support reproducibility research.”
To conduct the survey, Cobey and colleagues selected 400 journals indexed in MEDLINE and extracted contact information for the corresponding authors of papers those journals published between Oct. 1, 2020, and Oct. 1, 2021. They then emailed the survey to those authors; 7% of them ultimately responded.