Response to Comment on “Estimating the reproducibility of psychological science”
@article{Anderson2016ResponseTC,
  title={Response to Comment on “Estimating the reproducibility of psychological science”},
  author={Christopher J. Anderson and {\v{S}}t{\v{e}}p{\'a}n Bahn{\'i}k and Michael Barnett-Cowan and Frank Bosco and Jesse J. Chandler and Christopher R. Chartier and Felix Cheung and Cody Daniel Christopherson and Andreas Cordes and Edward J. Cremata and Nicol{\'a}s Della Penna and Vivien Estel and Anna Fedor and Stanka A. Fitneva and Michael C. Frank and James A. Grange and Joshua K. Hartshorne and Fred Hasselman and Felix Henninger and Marije van der Hulst and Kai J. Jonas and Calvin K. Lai and Carmel A. Levitan and Jeremy K. Miller and Katherine Sledge Moore and Johannes M. Meixner and Marcus Robert Munafo and Koen Ilja Neijenhuijs and Gustav Nilsonne and Brian A. Nosek and Franziska Plessow and Jason M. Prenoveau and Ashley A. Ricker and Kathleen Schmidt and Jeffrey R. Spies and Stefan Stieger and Nina Strohminger and Gavin Sullivan and Robbie C. M. Aert and Marcel A.L.M. van Assen and Wolf Vanpaemel and Michelangelo Vianello and Martin Voracek and Kellylynn Zuni},
  journal={Science},
  year={2016},
  volume={351},
  pages={1037}
}
Gilbert et al. conclude that evidence from the Open Science Collaboration’s Reproducibility Project: Psychology indicates high reproducibility, given the study methodology. Their very optimistic assessment is limited by statistical misconceptions and by causal inferences from selectively interpreted, correlational data. Using the Reproducibility Project: Psychology data, both optimistic and pessimistic conclusions about reproducibility are possible, and neither are yet warranted.
150 Citations
Evaluating Psychological Research Requires More Than Attention to the N
- Psychology · Psychological Science
- 2016
The article discusses research being done on the use of effect-size estimates in testing psychological theories. It references the study "Small Telescopes: Detectability and the Evaluation of…
Statistical methods for replicability assessment
- Psychology · The Annals of Applied Statistics
- 2020
Large-scale replication studies like the Reproducibility Project: Psychology (RP:P) provide invaluable systematic data on scientific replicability, but most analyses and interpretations of the data…
Limited Usefulness of Capture Procedure and Capture Percentage for Evaluating Reproducibility in Psychological Science
- Psychology · Front. Psychol.
- 2018
Simulation results show that the performances of CPro and CPer become biased, such that researchers can easily make a wrong conclusion of successful/unsuccessful replication.
Double trouble? The communication dimension of the reproducibility crisis in experimental psychology and neuroscience
- Psychology · European Journal for Philosophy of Science
- 2020
Most discussions of the reproducibility crisis focus on its epistemic aspect: the fact that the scientific community fails to follow some norms of scientific investigation, which leads to high rates…
Quantity Over Quality? Reproducible Psychological Science from a Mixed Methods Perspective
- Psychology · Collabra: Psychology
- 2020
A robust dialogue about the (un)reliability of psychological science findings has emerged in recent years. In response, metascience researchers have developed innovative tools to increase rigor,…
A Statistical Model to Investigate the Reproducibility Rate Based on Replication Experiments
- Computer Science · International Statistical Review
- 2018
A statistical model is proposed to estimate the reproducibility rate and the effect of some study characteristics on its reliability, and it is suggested that the similarity between original study and the replica is not so relevant, thus mitigating some criticism directed to replication experiments.
A statistical definition for reproducibility and replicability
- Physics · bioRxiv
- 2016
This work provides formal and informal definitions of scientific studies, reproducibility, and replicability that can be used to clarify discussions around these concepts in the scientific and popular press.
How (not) to measure replication
- Psychology
- 2021
The replicability crisis refers to the apparent failures to replicate both important and typical positive experimental claims in psychological science and biomedicine, failures which have gained…
The replication crisis, context sensitivity, and the Simpson's (Paradox)
- Psychology
- 2016
The Reproducibility Project: Psychology (OSC, 2015) was a huge effort by many different psychologists across the world to try and assess whether the effects of a selection of papers could…
An Overview of Scientific Reproducibility: Consideration of Relevant Issues for Behavior Science/Analysis
- Biology, Psychology · Perspectives on Behavior Science
- 2019
Suggestions for improving the reproducibility of studies in behavior science and analysis are described throughout.
References
Showing 1–10 of 13 references
Comment on “Estimating the reproducibility of psychological science”
- Psychology · Science
- 2016
It is shown that this article contains three statistical errors and provides no support for the conclusion that the reproducibility of psychological science is surprisingly low, and that the data are consistent with the opposite conclusion.
Estimating the reproducibility of psychological science
- Psychology · Science
- 2015
A large-scale assessment suggests that experimental reproducibility in psychology leaves a lot to be desired, and correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
An Open, Large-Scale, Collaborative Effort to Estimate the Reproducibility of Psychological Science
- Psychology · Perspectives on Psychological Science
- 2012
The Reproducibility Project is an open, large-scale, collaborative effort to systematically examine the rate and predictors of reproducibility in psychological science.
Investigating variation in replicability: A “Many Labs” replication project
- Psychology
- 2014
Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of thirteen classic and contemporary effects across…
Replication and Researchers' Understanding of Confidence Intervals and Standard Error Bars.
- Psychology
- 2004
Confidence intervals (CIs) and standard error bars give information about replication, but do researchers have an accurate appreciation of that information? Authors of journal articles in psychology,…
Registered Reports: A Method to Increase the Credibility of Published Results
- Psychology
- 2014
Ignoring replications and negative results is bad for science. This special issue presents a novel publishing format – Registered Reports – as a partial solution. Peer review occurs prior to data…
Using prediction markets to estimate the reproducibility of scientific research
- Psychology · Proceedings of the National Academy of Sciences
- 2015
It is argued that prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications.
Shall we Really do it Again? The Powerful Concept of Replication is Neglected in the Social Sciences
- Sociology
- 2009
Replication is one of the most important tools for the verification of facts within the empirical sciences. A detailed examination of the notion of replication reveals that there are many different…
Many Labs 3: Evaluating participant pool quality across the academic semester via replication
- Psychology
- 2016
Confidence intervals and replication: where will the next mean fall?
- Computer Science · Psychological Methods
- 2006
The authors present figures designed to assist understanding of what CIs say about replication, and they also extend the discussion to explain how p values give information about replication.