Having a hard time reproducing your experiments?

You are not the only researcher who has tried and failed to reproduce an experiment. Reproducibility in research has become a major talking point in scientific magazines lately, leading to the question: "Is there a reproducibility crisis?"

More than 70% of researchers have reproducibility problems

"It is unclear why our results...", "Interestingly, our results did not...", "In our hands...": recognizing any of these tell-tale sentences?

You probably do: according to a survey of 1,576 researchers carried out by Nature, more than two-thirds of researchers have tried and failed to reproduce another scientist's experiment. And yet, publishing such peer-reviewed work in journals that do not require novelty is not that difficult.

 

Although there is growing alarm about results that cannot be reproduced, the 'reproducibility crisis' is older than we think and goes back to the 1950s. Nature recently ran an online questionnaire on reproducibility among 1,576 researchers to find out how big this 'reproducibility crisis' really is. Not only did they ask whether researchers have had problems replicating another scientist's experiments (60-85% did), but also whether they have had problems replicating their own experiments (40-65% did), which factors contributed to irreproducible research (many are related to intense competition and time pressure), and whether they have tried and succeeded in publishing a reproduction attempt (13% succeeded in publishing an unsuccessful reproduction). (Monya Baker, Nature, 2016)

Should we worry?

Let's look at the facts; are they something to worry about?

Nature's survey showed striking similarities with another study, conducted by the American Society for Cell Biology.

  • In both surveys, more than half of those surveyed agree that there is a significant ‘reproducibility crisis’, and the percentage of researchers who have tried and failed to reproduce another scientist's experiment is over 60%.
  • However, only about 20% of the surveyed researchers said they had ever been contacted by another scientist attempting to reproduce their work.
  • Nature's survey even revealed that in 13% of reproduction attempts, scientists were able to publish the unsuccessful reproduction.
  • The main factor contributing to irreproducible research is attributed to the pressure to publish in high-profile journals (~40% say so).

Although the results are striking, fewer than 31% of those surveyed think that a failure to reproduce published results means the result is probably wrong, and most respondents say they still trust the published literature. How come?

Aspects of the crisis: is it all the scientists' fault?

When discussing the reproducibility crisis, it is key to take a more in-depth look at the various aspects of the crisis:

  • First: why is this crisis 'so big'?

Let's not be naive: there is little or no reward for just reproducing the work of other scientists, so there is no urge to reproduce experiments if you don't have to. That gives scientists reason to assume published results are true and to base their own experiments on them.

  • Second: where do you publish irreproducible results?

Although science would benefit from sharing the results of failed replication experiments, high-profile journals generally refuse replication attempts because the research is not innovative enough. Since your career as a scientist depends on your output, publishing your results in journals with a lower impact factor does not sound that appealing...

  • Third: the replication ecosystem, such as it is, lacks visibility, value and conventions.

There is no easy way for a researcher to learn about replication attempts (as we saw in the previous aspect), and the transparency with which researchers record their methods rarely meets the standard needed for replication. Browsing through the Method sections of scientific papers, you will find huge variability in the amount of Method detail provided, and the level of detail can be frustratingly low (even for high-impact journals like Science). So even if you wanted to replicate such a published experiment, the missing Method details would leave you without the information to do so. And more than 70% of researchers will not contact the author to obtain those missing Method details (Monya Baker, Nature, 2016).

  • Finally: personnel turnover in labs is high.

Most lab personnel stay for at most a few years, to get their PhD or complete a postdoctoral fellowship, and in most cases these are exactly the researchers publishing the results. When they leave the lab, their skills sometimes leave with them. Irreproducibility is then explained not by fraud, but by the fact that the remaining personnel don't know all the ins and outs of the experimental technique, and even detailed protocol books aren't always enough to compensate.

The Reproducibility Project & what's next?

In 2014, a collaboration between the Center for Open Science and Science Exchange was set up to independently replicate selected results from 50 high-profile papers in the field of cancer biology. The aim of the project is to provide evidence about reproducibility in cancer biology, and an opportunity to identify factors that influence reproducibility more generally. The project is primarily seen as a means to map out the problem of reproducibility.

Although individual laboratories increasingly run replication attempts to test the next stage of their research, the results are often not shared. As stated before, the basis of this problem probably lies in the fact that innovation is prioritized far above replication in science (Alberts et al., 2014; Nosek et al., 2012), and replication attempts are therefore often not published. But in that respect, things are improving! Researchers who want to tell the scientific community about their replication studies now have multiple ways to do so. Just recently, the online platform F1000 launched the dedicated Preclinical Reproducibility and Robustness channel.

"The Preclinical Reproducibility and Robustness channel is a platform for open and transparent publication of confirmatory and non-confirmatory studies in biomedical research. The channel is open to all scientists from both academia and industry and provides a centralized space for researchers to start an open dialogue, thereby helping to improve the reproducibility of studies."

In addition, scientific journals are tightening their author guidelines, requiring more detail on methods and raw data. Some journals, like Perspectives on Psychological Science, have even begun publishing alternative article types to get replication studies discussed.

Whether the above measures will get a hold on the 'reproducibility crisis' remains to be seen. The most important thing, as researchers see it, is that the word is out there and that scientists themselves are starting to think about how they perform their experiments and how to pin that down on paper.

So spread the word and get it out there!