Estimating the reproducibility of psychological science

@article{Aarts2015EstimatingTR,
  title={Estimating the reproducibility of psychological science},
  author={Alexander A. Aarts and Joanna E. Anderson and Christopher J. Anderson and Peter Raymond Attridge and Angela S. Attwood and Jordan R. Axt and Molly Babel and {\vS}těp{\'a}n Bahn{\'i}k and Erica Baranski and Michael Barnett-Cowan and Elizabeth Bartmess and Jennifer S. Beer and Raoul Bell and Heather Bentley and Leah Beyan and Grace Binion and Denny Borsboom and Annick Bosch and Frank Bosco and Sara D. Bowman and Mark John Brandt and Erin L Braswell and Hilmar Brohmer and Benjamin Thomas Brown and Kristina Brown and Jovita Br{\"u}ning and Ann Calhoun-Sauls and Shannon P. Callahan and Elizabeth Chagnon and Jesse J. Chandler and Christopher R. Chartier and Felix Cheung and Cody Daniel Christopherson and Linda Cillessen and Russ Clay and Hayley M. D. Cleary and Mark D. Cloud and Michael Conn and Joanne McGrath Cohoon and Simon Columbus and Andreas Cordes and Giulio Costantini and Leslie D. Cramblet Alvarez and Edward J. Cremata and Jan Crusius and Jamie DeCoster and M. DeGaetano and Nicol{\'a}s Delia Penna and Bobby Den Bezemer and Marie Katharina Deserno and Olivia Devitt and Laura Dewitte and David G. Dobolyi and Geneva T. Dodson and M. Brent Donnellan and Ryan Donohue and Rebecca A. Dore and Angela R. Dorrough and Anna Dreber and Michelle Dugas and Elizabeth W. Dunn and Kayleigh E. Easey and Sylvia Eboigbe and Casey Eggleston and Jo Embley and Sacha Epskamp and Timothy M. Errington and Vivien Estel and Frank J. Farach and Jenelle Feather and Anna Fedor and Bel{\'e}n Fern{\'a}ndez-Castilla and Susann Fiedler and James G. Field and Stanka A. Fitneva and Taru Flagan and Amanda Lynn Forest and Eskil Forsell and Joshua D. Foster and Michael C. Frank and Rebecca S. Frazier and Heather M. Fuchs and Philip A. Gable and Jeff Galak and Elisa Maria Galliani and Anup Gampa and Sara Garcia and Douglas Gazarian and Elizabeth Rees Gilbert and Roger Giner-Sorolla and Andreas Gl{\"o}ckner and Lars Goellner and Jin X. Goh and Rebecca M. Goldberg and Patrick T. Goodbourn and Shauna Gordon-McKeon and Bryan H. Gorges and Jessie Gorges and Justin Goss and Jesse Graham and James A. Grange and Jeremy R. Gray and C.H.J. Hartgerink and Joshua K. Hartshorne and Fred Hasselman and Timothy Hayes and Emma Heikensten and Felix Henninger and John Hodsoll and Taylor Holubar and G. C. Hoogendoorn and Denise J. Humphries and Cathy On-Ying Hung and Nathali Immelman and Vanessa Claire Irsik and Georg Jahn and Frank J{\"a}kel and Marc Jekel and Magnus Johannesson and Larissa Gabrielle Johnson and David J. Johnson and Kate M. Johnson and William Johnston and Kai J. Jonas and Jennifer A. Joy-Gaba and Heather Barry Kappes and Kim Kelso and Mallory C. Kidwell and Seung K. Kim and Matthew W. Kirkhart and Bennett Kleinberg and Goran Kne{\vz}evi{\'c} and Franziska Maria Kolorz and Jolanda Jacqueline Kossakowski and Robert W Krause and J.M.T. Krijnen and Tim Kuhlmann and Yoram K. Kunkels and Megan M. Kyc and Calvin K. Lai and Aamir Laique and Daniel Lakens and Kristin A. Lane and Bethany Lassetter and Ljiljana B. Lazarevi{\'c} and Etienne P. Le Bel and Key Jung Lee and Minha Lee and Kristi M. Lemm and Carmel A. Levitan and Melissa Lewis and Lin Lin and Stephanie C. Lin and Matthias Lippold and Darren Loureiro and Ilse Luteijn and Sean P. Mackinnon and Heather N. Mainard and Denise C. Marigold and Daniel P. Martin and Tylar Martinez and E. J. Masicampo and Joshua J. Matacotta and Maya B. Mathur and Michael May and Nicole C Mechin and Pranjal H. Mehta and Johannes M. Meixner and Alissa Melinger and Jeremy K. 
Miller and Mallorie Miller and Katherine Sledge Moore and Marcus M{\"o}schl and Matt Motyl and Stephanie L. Muller and Marcus Robert Munafo and Koen Ilja Neijenhuijs and Taylor Nervi and Gandalf Nicolas and Gustav Nilsonne and Brian A. Nosek and Mich{\`e}le B. Nuijten and Catherine Olsson and Colleen Osborne and Lutz Ostkamp and Misha Pavel and Ian S. Penton-Voak and Olivia Kathleen Perna and Cyril R. Pernet and Marco Perugini and R. Nathan Pipitone and Michael C. Pitts and Franziska Plessow and Jason Prenoveau and Rima-Maria Rahal and Kate A. Ratliff and David A. Reinhard and Frank Renkewitz and Ashley A. Ricker and Anastasia E. Rigney and Andrew M Rivers and Mark A. Roebke and Abraham M. Rutchick and Robert S. Ryan and Onur Şahin and Anondah Saide and Gillian M. Sandstrom and David Santos and Rebecca Saxe and Ren{\'e} Schlegelmilch and Kathleen Schmidt and Sabine Scholz and Larissa Seibel and Dylan Selterman and Samuel Shaki and William Brand Simpson and H. Colleen Sinclair and Jeanine L. M. Skorinko and Agnieszka Slowik and Joel S. Snyder and Courtney K. Soderberg and Carina M. Sonnleitner and Nicholas Brant Spencer and Jeffrey R. Spies and Sara Steegen and Stefan Stieger and Nina Strohminger and Gavin Brent Sullivan and Thomas Talhelm and Megan Tapia and Anniek te Dorsthorst and Manuela Thomae and Sarah L. Thomas and Pia Tio and Frits Traets and Steve Tsang and Francis Tuerlinckx and Paul Turchan and Milan Val{\'a}{\vs}ek and Anna van 't Veer and Robbie C. M. Aert and Marcel A.L.M. van Assen and Riet Van Bork and Mathijs van de Ven and Don van den Bergh and Marije van der Hulst and Roel van Dooren and Johnny van Doorn and Daan R. van Renswoude and Hedderik van Rijn and Wolf Vanpaemel and Alejandro Echeverr{\'i}a and Melissa Vazquez and Natalia V{\'e}lez and Marieke Vermue and Mark Verschoor and Michelangelo Vianello and Martin Voracek and Gina Vuu and Eric-Jan Wagenmakers and Joanneke Weerdmeester and Ashlee Welsh and Erin C. Westgate and Joeri Wissink and Michael D. Wood and Andy Woods and Emily M. Wright and Sining Wu and Marcel Zeelenberg and Kellylynn Zuni},
  journal={Science},
  year={2015},
  volume={349}
}
Empirically analyzing empirical evidence

One of the central goals in any scientific endeavor is to understand causality. Experiments that seek to demonstrate a cause/effect relation most often manipulate the postulated causal factor. Aarts et al. describe the replication of 100 experiments reported in papers published in 2008 in three high-ranking psychology journals. Assessing whether the replication and the original experiment yielded the same result according to several criteria, they find…
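
The phrase "the same result according to several criteria" refers to complementary checks applied to each original/replication pair, such as whether the replication was statistically significant in the original direction, whether the original effect size fell inside the replication's 95% confidence interval, and how much the effect size shrank; the project also combined these with subjective assessments and meta-analytic estimates. The minimal Python sketch below illustrates checks of that kind for one hypothetical pair; it is not the project's analysis code, and every number and variable name is invented for illustration.

# Minimal sketch of replication criteria for one original/replication pair.
# All inputs (correlations r, sample sizes, replication p-value) are hypothetical.
import math

def fisher_z_ci(r, n):
    """Approximate 95% confidence interval for a correlation (Fisher z)."""
    z, se = math.atanh(r), 1.0 / math.sqrt(n - 3)
    return math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)

r_orig = 0.45                            # hypothetical original effect size
r_rep, n_rep, p_rep = 0.18, 120, 0.04    # hypothetical replication result

# Criterion 1: replication significant at p < .05 and in the original direction.
sig_same_direction = p_rep < 0.05 and r_rep * r_orig > 0
# Criterion 2: original effect size falls inside the replication's 95% CI.
lo, hi = fisher_z_ci(r_rep, n_rep)
orig_inside_rep_ci = lo <= r_orig <= hi
# Criterion 3: how much the effect size shrank from original to replication.
shrinkage = r_orig - r_rep

print(sig_same_direction, orig_inside_rep_ci, round(shrinkage, 2))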

How (not) to measure replication

The replicability crisis refers to the apparent failures to replicate both important and typical positive experimental claims in psychological science and biomedicine, failures which have gained…

Replication in Psychological Science

Brian Nosek reported direct replication attempts of 100 experiments published in prestigious psychology journals in 2008, including experiments reported in 39 articles in Psychological Science, and found that fewer than half of them yielded a statistically significant effect.

Should We Strive to Make Science Bias-Free? A Philosophical Assessment of the Reproducibility Crisis

  • R. Hudson
  • Education
    Journal for General Philosophy of Science / Zeitschrift für allgemeine Wissenschaftstheorie
  • 2021
It is argued that advocating the value-ladenness of science would deepen the reproducibility crisis, and that for the majority of scientists the crisis is due, at least in part, to a form of publication bias.

Contextual sensitivity in scientific reproducibility

It is found that the extent to which the research topic was likely to be contextually sensitive was associated with replication success, and this relationship remained a significant predictor of replication success even after adjusting for characteristics of the original and replication studies that had previously been associated with replication success.

On the Reproducibility of Psychological Science

The results of this reanalysis provide a compelling argument both for increasing the threshold required for declaring scientific discoveries and for adopting statistical summaries of evidence that account for the high proportion of tested hypotheses that are false.

A Statistical Model to Investigate the Reproducibility Rate Based on Replication Experiments

  • F. Pauli
  • Computer Science
    International Statistical Review
  • 2018
A statistical model is proposed to estimate the reproducibility rate and the effect of some study characteristics on its reliability, and it is suggested that the similarity between the original study and the replication is not especially relevant, which mitigates some of the criticism directed at replication experiments.
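
As a rough illustration of the idea (and not Pauli's actual model), one can relate a binary replication outcome to characteristics of the original studies with a logistic regression; the sketch below uses synthetic data and predictor names assumed for the example.

# Illustrative only (not Pauli's model): logistic regression of replication
# success on characteristics of the original study, using synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
orig_p = rng.uniform(0.001, 0.05, n)       # original p-value (assumed predictor)
orig_effect = rng.uniform(0.1, 0.6, n)     # original effect size (assumed predictor)
# Synthetic outcome: stronger original evidence raises the replication probability.
logit = -1.0 - 30 * orig_p + 4 * orig_effect
replicated = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([orig_p, orig_effect]))
fit = sm.Logit(replicated, X).fit(disp=False)
print(fit.params)   # intercept and per-characteristic effects on the log-odds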

Examining Psychological Science Through Systematic Meta-Method Analysis: A Call for Research

  • M. Elson
  • Psychology
    Advances in Methods and Practices in Psychological Science
  • 2019
Research synthesis is based on the assumption that when the same association between constructs is observed repeatedly in a field, the relationship is probably real, even if its exact magnitude can…

Large-Scale Replication Projects in Contemporary Psychological Research

Replication is complicated in psychological research because studies of a given psychological phenomenon can never be direct or exact replications of one another, and thus effect sizes vary…

The role of replication in psychological science

The replication or reproducibility crisis in psychological science has renewed attention to philosophical aspects of its methodology. I provide herein a new, functional account of the role of…

How Do We Choose Our Giants? Perceptions of Replicability in Psychological Science

Judgments regarding replicability are vital to scientific progress. The metaphor of “standing on the shoulders of giants” encapsulates the notion that progress is made when new discoveries build on…
...

References

SHOWING 1-10 OF 74 REFERENCES

Continuously Cumulating Meta-Analysis and Replicability

This work presents a nontechnical introduction to the CCMA (continuously cumulating meta-analysis) framework, explains how it can be used to address aspects of replicability or, more generally, to assess quantitative evidence across numerous studies, and presents examples and simulation results showing how combining evidence can yield better results than considering single studies in isolation.
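
The accumulation at the heart of CCMA is easiest to see as a pooled estimate that is updated each time a new study arrives. The sketch below is a minimal fixed-effect (inverse-variance) version with hypothetical effect sizes and standard errors, not the authors' own CCMA implementation; its point is only that each added study tightens the standard error of the pooled estimate, which is the sense in which combined evidence improves on single studies.

# Minimal sketch of a continuously cumulating (fixed-effect) meta-analysis.
# Effect sizes and standard errors below are hypothetical.
effects = [0.42, 0.15, 0.28, 0.10]   # estimates from successive studies
ses = [0.20, 0.12, 0.15, 0.08]       # their standard errors

sum_w = sum_we = 0.0
for k, (d, se) in enumerate(zip(effects, ses), start=1):
    w = 1.0 / se ** 2                # inverse-variance weight
    sum_w += w
    sum_we += w * d
    pooled, pooled_se = sum_we / sum_w, (1.0 / sum_w) ** 0.5
    print(f"after study {k}: pooled effect {pooled:.3f} (SE {pooled_se:.3f})")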

What Kind of Empirical Research Should We Publish, Fund, and Reward?: A Different Perspective

  • P. Rozin
  • Psychology
    Perspectives on Psychological Science: A Journal of the Association for Psychological Science
  • 2009
When evaluating empirical papers for publication, grant proposals, or individual contributions (e.g., awarding tenure), the basic question one should ask is how much the contribution adds to…

An Open, Large-Scale, Collaborative Effort to Estimate the Reproducibility of Psychological Science

  • Brian A. Nosek, D. Lakens
  • Psychology
    Perspectives on Psychological Science: A Journal of the Association for Psychological Science
  • 2012
The Reproducibility Project is an open, large-scale, collaborative effort to systematically examine the rate and predictors of reproducibility in psychological science.

“Positive” Results Increase Down the Hierarchy of the Sciences

These results support the scientific status of the social sciences against claims that they are completely subjective, by showing that, when they adopt a scientific approach to discovery, they differ from the natural sciences only by a matter of degree.

Tracking Replicability as a Method of Post-Publication Open Evaluation

This paper proposes tracking replications as a means of post-publication evaluation, both to help researchers identify reliable findings and to incentivize the publication of reliable results.

Investigating Variation in Replicability: A “Many Labs” Replication Project

Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of thirteen classic and contemporary effects across…

Why Most Published Research Findings Are False

Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true.
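
The arithmetic behind that claim is the positive predictive value of a significant finding: with prior odds R that a tested relationship is true, significance level α, and power 1 − β, PPV = (1 − β)R / (R − βR + α). The short check below plugs in illustrative numbers (not taken from the paper) to show how low prior odds and low power push the PPV below one half.

# Illustrative PPV calculation: probability that a significant finding is true.
def ppv(R, alpha, power):
    beta = 1.0 - power
    return (power * R) / (R - beta * R + alpha)

print(round(ppv(R=0.1, alpha=0.05, power=0.8), 2))  # about 0.62
print(round(ppv(R=0.1, alpha=0.05, power=0.2), 2))  # about 0.29: more likely false than true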

Strong Inference: Certain systematic methods of scientific thinking may produce much more rapid progress than others.

Anyone who looks at the matter closely will agree that some fields of science are moving forward very much faster than others, perhaps by an order of magnitude, if numbers could be put on such estimates.

Publication Decisions and their Possible Effects on Inferences Drawn from Tests of Significance—or Vice Versa

There is some evidence that in fields where statistical tests of significance are commonly used, research which yields nonsignificant results is not published. Such research being unknown to…
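
The mechanism is easy to demonstrate with a small simulation: when only significant positive results are "published", the published literature overstates the true effect. The sketch below uses arbitrary parameters chosen for illustration and is not drawn from the original article.

# Illustrative simulation: publishing only significant positive results
# inflates the average published effect size. Parameters are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n_per_group, n_studies = 0.2, 20, 5000
published = []
for _ in range(n_studies):
    treat = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(treat, control)
    if p < 0.05 and t > 0:           # the "file drawer": only these get published
        published.append(treat.mean() - control.mean())

print(f"true effect {true_d}, mean published effect {np.mean(published):.2f}")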

Shall We Really Do It Again? The Powerful Concept of Replication Is Neglected in the Social Sciences

Replication is one of the most important tools for the verification of facts within the empirical sciences. A detailed examination of the notion of replication reveals that there are many different…
...