http://www.nature.com/news/go-forth-and-replicate-1.20473
Go forth and replicate!
To make replication studies more useful, researchers must make more of them, funders must encourage them and journals must publish them.
24 August 2016
No scientist wants to be the first to try to replicate another’s promising study: much better to know what happened when others tried it. Long before replication or reproducibility became major talking points, scientists had strategies to get the word out. Gossip was one. Researchers would compare notes at conferences, and word would spread through a patchy network about whether a study was worth building on. Or a vague comment might be buried in a related publication. Tell-tale sentences would start “In our hands”, “It is unclear why our results differed …” or “Interestingly, our results did not …”.
What might seem obvious — a paper on attempts and outcomes — was almost never an option. Many journals refused to consider replication studies, and few researchers wanted to start a feud if their results did not match. So scientists not in the know might waste time exploring a blind alley, or be unduly wary of truly promising research.
Things are improving. Nowadays, researchers who want to tell the scientific community about their replication studies have multiple ways to do so. They can chronicle their attempts on a blog, post on a preprint server or publish peer-reviewed work in those journals that do not require novelty. Just this year, the online platform F1000 launched the dedicated Preclinical Reproducibility and Robustness channel for refutations, confirmations or more nuanced replication studies. Other titles, including Scientific Data and the American Journal of Gastroenterology, have openly solicited replication attempts and negative results. In 2013, after controversial work on whether bioactive RNA molecules could cross from the digestive tract to the bloodstream, Nature Biotechnology declared itself “receptive to replication”, provided that such studies illuminate crucial research questions (Nature Biotechnol. 31, 943; 2013).
The psychology community is a leader in this: Perspectives on Psychological Science has begun publishing a new type of article and pioneering a new form of collaboration. It asks psychologists to nominate an influential study for replication and to draw up a plan. The original author is invited to offer suggestions on the protocol, multiple labs volunteer to collect data, and results — whatever they may be — are published as a registered replication report (RRR). So far, three have been published, each with a perspective by the original authors.
Get it out there
Yet it would be inefficient to pursue such projects for more than a sliver of publications. Most replication attempts are not organized collaborations, but individual laboratories testing the next stage of their research. If those results were shared, science would benefit.
Why doesn’t this happen more often? Because the replication ecosystem, such as it is, lacks visibility, value and conventions.
When a researcher happens on an exciting paper, there is no easy way to learn about replication attempts. Replication studies are not automatically or consistently linked to original papers on journal websites, PubPeer or PubMed. When a replication attempt is mentioned in passing in a broader study, there is no way to capture it. Journals cannot be expected to curate all replication attempts of papers they publish, although they should support technology that aggregates and disseminates that information. And they should be open to publishing in-depth replication attempts for original papers. For example, Scientific Reports encourages critique by offering to waive its article-processing charge for a peer-reviewed refutation of an article published in the journal.
Increased visibility would raise the value of a replication attempt, but also increase the risk of retaliation against replicators. There is little reward for taking that risk. A published replication currently does little to raise the esteem of the replicator with hiring committees or grant reviewers. This creates a chicken-and-egg problem: researchers don’t want to conduct and publish rigorous replication studies because they are not valued, and replication studies are not valued because few are published. Commendably, funders such as the Laura and John Arnold Foundation in the United States and the Netherlands Organisation for Scientific Research are explicitly supporting replication studies and setting high expectations for publication. Scientists can help to ensure that such studies are valued by citing them and by discussing them on social media.
Conventions around replication studies are in their infancy — even the vocabulary is inadequate. Editors who coordinate RRRs strive to avoid loaded labels such as ‘successful’ and ‘failed’ replications. The Reproducibility Initiative, a project to help labs coordinate independent replications of their own work, also shied away from similar pronouncements after its first study. A paper is a jumble of context, experiments, results, analysis and informed speculation. Outcomes can depend on apparently trivial differences in methods, such as how vigorously reagents are mixed, as one collaboration painstakingly discovered (W. C. Hines et al. Cell Rep. 6, 779–781; 2014).
Nor are there conventions for interactions between replicators and original authors. Some original authors have refused to share data or methodological details. In other cases, replicators have broadcast their attempts without first trying to resolve inconsistencies, a practice that leaves them more open to charges of incompetence. (Thankfully, both replicators and original authors are now backing away from name-calling.) As replication becomes more mainstream, we trust that the community will establish reasonable standards of conduct.
To foster better behaviour, replication attempts must become more common. We urge researchers to open their file drawers. We urge authors to cooperate with reasonable requests for primary data, to assume good intent and to write papers — and keep records — assuming that others will want to replicate their work. We urge funders and publishers to support tools that help researchers to thread the literature together. We welcome, and will be glad to help disseminate, results that explore the validity of key publications, including our own.
Nature 536, 373 (25 August 2016) doi:10.1038/536373a