Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015

@article{Camerer2018EvaluatingTR,
  title={Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015},
  author={Colin Camerer and Anna Dreber and Felix Holzmeister and Teck-Hua Ho and J{\"u}rgen Huber and Magnus Johannesson and Michael Kirchler and Gideon Nave and Brian A. Nosek and Thomas Pfeiffer and Adam Altmejd and Nick Buttrick and Taizan Chan and Yiling Chen and Eskil Forsell and Anup Gampa and Emma Heikensten and Lily Hummer and Taisuke Imai and Siri Isaksson and Dylan A Manfredi and Julia Rose and Eric-Jan Wagenmakers and Hang Wu},
  journal={Nature Human Behaviour},
  year={2018},
  volume={2},
  pages={637--644}
}
Being able to replicate scientific findings is crucial for scientific progress [1–15]. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015 [16–36]. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies. We find a significant effect in the…
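The abstract notes that the replications were highly powered, with samples on average about five times larger than the originals. As a rough illustration of why larger samples are needed (a sketch using the standard normal-approximation power formula, not the authors' exact design, and with hypothetical effect sizes), the required per-group sample size grows quadratically as the effect size to be detected shrinks:

```python
import math
from statistics import NormalDist  # Python 3.8+

def n_per_group(d, alpha=0.05, power=0.90):
    """Approximate per-group sample size for a two-sample comparison
    to detect a standardized effect size d (normal approximation)."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# Hypothetical example: halving the effect size you aim to detect
# roughly quadruples the sample needed at 90% power.
print(n_per_group(0.50))  # → 85 per group
print(n_per_group(0.25))  # → 337 per group
```

Because replications are typically powered to detect effects smaller than the (often inflated) original estimates, this quadratic relationship is one reason replication samples end up several times larger than the originals.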
Statistical methods for replicability assessment
Large-scale replication studies like the Reproducibility Project: Psychology (RP:P) provide invaluable systematic data on scientific replicability, but most analyses and interpretations of the data
Replication is fundamental, but is it common? A call for scientific self-reflection and contemporary research practices in gambling-related research
Researchers around the world have observed that in many fields the published peer-reviewed literature reflects a widespread publication bias that favours statistically significant and novel outcomes
Predicting the replicability of social science lab experiments
TLDR
The models presented in this paper are simple tools to produce cheap, prognostic replicability metrics that could be useful in institutionalizing the process of evaluation of new findings and guiding resources to those direct replications that are likely to be most informative.
Many Labs 2: Investigating Variation in Replicability Across Samples and Settings
We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples
Improving Psychological Science through Transparency and Openness: An Overview
TLDR
An overview of recent discussions concerning replicability and best practices in mainstream psychology is provided, with an emphasis on the practical benefits to both researchers and the field as a whole.
Best practices for interpreting large-scale replications
  • J. Ackerman
  • Medicine, Psychology
    Nature Human Behaviour
  • 2018
TLDR
This comment argues that the Social Science Replication Project (SSRP) provides an illuminating look at research published in two top scientific journals, and that its prediction market and complementary replicability indicators are extremely useful for evaluating published effects.
How Do We Choose Our Giants? Perceptions of Replicability in Psychological Science
Judgments regarding replicability are vital to scientific progress. The metaphor of “standing on the shoulders of giants” encapsulates the notion that progress is made when new discoveries build on
How (not) to measure replication
The replicability crisis refers to the apparent failures to replicate both important and typical positive experimental claims in psychological science and biomedicine, failures which have gained
Reproducibility and replicability crisis: How management compares to psychology and economics – A systematic review of literature
Abstract The past decade has been marked by concerns regarding the replicability and reproducibility of published research in the social sciences. Publicized failures to replicate landmark studies,
Laypeople Can Predict Which Social-Science Studies Will Be Replicated Successfully
Large-scale collaborative projects recently demonstrated that several key findings from the social-science literature could not be replicated successfully. Here, we assess the extent to which a

References

Showing 1–10 of 99 references
Evaluating replicability of laboratory experiments in economics
TLDR
To contribute data about replicability in economics, 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014 were replicated; two-thirds of the 18 studies examined yielded replicable estimates of effect size and direction.
What Should Researchers Expect When They Replicate Studies? A Statistical View of Replicability in Psychological Science
  • Prasad Patil, R. Peng, J. Leek
  • Psychology, Medicine
    Perspectives on psychological science : a journal of the Association for Psychological Science
  • 2016
TLDR
The results of the Reproducibility Project: Psychology can be viewed as statistically consistent with what one might expect when performing a large-scale replication experiment.
Many Labs 2: Investigating Variation in Replicability Across Samples and Settings
We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples
Many Labs 2: Investigating Variation in Replicability Across Sample and Setting
We conducted preregistered replications of 28 classic and contemporary published findings with protocols that were peer reviewed in advance to examine variation in effect magnitudes across sample and
A Bayesian Perspective on the Reproducibility Project: Psychology
TLDR
The apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes due to small sample sizes and publication bias in the psychological literature.
Estimating the reproducibility of psychological science
TLDR
A large-scale assessment suggests that experimental reproducibility in psychology leaves a lot to be desired, and correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
Investigating variation in replicability: A “Many Labs” replication project
Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of 13 classic and contemporary effects across 36
A Bayesian bird's eye view of ‘Replications of important results in social psychology’
TLDR
Three Bayesian methods were applied to reanalyse the preregistered contributions to the Social Psychology special issue ‘Replications of Important Results in Social Psychology’ to find evidence of weak support for the null hypothesis against a default one-sided alternative.
Small Telescopes: Detectability and the Evaluation of Replication Results
This paper introduces a new approach for evaluating replication results. It combines effect-size estimation with hypothesis testing, assessing the extent to which the replication results are
Using prediction markets to estimate the reproducibility of scientific research
TLDR
It is argued that prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications.