What Meta-Analyses Reveal About the Replicability of Psychological Research

@article{Stanley2018WhatMR,
  title={What Meta-Analyses Reveal About the Replicability of Psychological Research},
  author={T. D. Stanley and Evan C. Carter and Hristos Doucouliagos},
  journal={Psychological Bulletin},
  year={2018},
  volume={144},
  pages={1325–1346}
}
Can recent failures to replicate psychological research be explained by typical magnitudes of statistical power, bias, or heterogeneity? A large survey of 12,065 estimated effect sizes from 200 meta-analyses and nearly 8,000 papers is used to assess these key dimensions of replicability. First, our survey finds that psychological research is, on average, afflicted with low statistical power. The median of median power across these 200 areas of research is about 36%, and only about 8% of studies…
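
To put the power figures in perspective, the sketch below computes the power of a two-sided, two-sample t-test in Python with statsmodels. The effect size (Cohen's d = 0.35) and group size (n = 50) are illustrative assumptions chosen to land near the survey's reported 36% median, not values taken from the paper.

# Minimal power illustration (assumed inputs, not the paper's data):
# power of a two-sided, two-sample t-test for a small-to-medium effect.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# d = 0.35 and n = 50 per group are illustrative assumptions.
power = analysis.power(effect_size=0.35, nobs1=50, ratio=1.0, alpha=0.05)
print(f"power = {power:.2f}")  # about 0.41, in the ballpark of the 36% median

# Per-group sample size needed for the conventional 80% power:
n_needed = analysis.solve_power(effect_size=0.35, power=0.80, alpha=0.05)
print(f"n per group for 80% power = {n_needed:.0f}")  # roughly 130
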
How to Detect Publication Bias in Psychological Research
Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size…
Average Power: A Cautionary Note
Replication is an important contemporary issue in psychological research, and there is great interest in ways of assessing replicability, in particular, retrospectively via prior studies. The average…
Heterogeneity in direct replications in psychology and its association with effect size
TLDR: The findings show little evidence of widespread heterogeneity in direct replication studies in social and cognitive psychology, suggesting that minor changes in sample population and settings are unlikely to affect research outcomes in these fields of psychology.
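
The heterogeneity these replication studies quantify is conventionally summarized with Cochran's Q and Higgins' I². A minimal sketch of those computations follows; the effect sizes and sampling variances are made up for illustration, not data from any study cited here.

# Cochran's Q and Higgins' I^2 for a set of effect-size estimates.
# Effects and variances below are made-up illustrative values.
import numpy as np

effects = np.array([0.30, 0.25, 0.45, 0.10, 0.35])    # study effect sizes (assumed)
variances = np.array([0.02, 0.03, 0.05, 0.02, 0.04])  # sampling variances (assumed)

weights = 1.0 / variances                     # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)

Q = np.sum(weights * (effects - pooled) ** 2)  # weighted squared deviations
df = len(effects) - 1

# I^2: share of total variability beyond what sampling error explains.
# With these made-up values Q < df, so I^2 truncates to 0 (no detectable heterogeneity).
I2 = max(0.0, (Q - df) / Q) * 100.0

print(f"pooled = {pooled:.3f}, Q = {Q:.2f} (df = {df}), I^2 = {I2:.1f}%")
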
Heterogeneity of Research Results: A New Perspective From Which to Assess and Promote Progress in Psychological Science
TLDR: It is argued that the reduction of heterogeneity is important for progress in psychology and its practical applications, and changes to collective research practice are suggested toward this end.
Large-Scale Replication Projects in Contemporary Psychological Research
Replication is complicated in psychological research because studies of a given psychological phenomenon can never be direct or exact replications of one another, and thus effect sizes vary…
Heterogeneity of research results: New perspectives on psychological science
Replicability of research findings is a key issue in psychology, and has been the subject of considerable discussion by researchers. Replication is crucial to scientific principles, and underpins the…
Correcting for Bias in Psychology: A Comparison of Meta-Analytic Methods
Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to…
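
One of the regression-based corrections compared in this literature is Stanley and Doucouliagos's precision-effect test (PET): regress effect sizes on their standard errors with inverse-variance weights, and read the intercept as a bias-adjusted estimate of the mean effect. The sketch below uses made-up inputs, not any paper's data or code.

# PET sketch: WLS of effect size on standard error; the intercept
# approximates the effect adjusted for small-study/publication bias.
# All inputs are made-up illustrative values.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.55, 0.40, 0.32, 0.28, 0.22, 0.20])  # assumed estimates
ses = np.array([0.30, 0.22, 0.15, 0.12, 0.08, 0.05])      # assumed standard errors

X = sm.add_constant(ses)  # columns: intercept, SE
fit = sm.WLS(effects, X, weights=1.0 / ses**2).fit()

intercept, slope = fit.params
print(f"bias-adjusted effect (intercept): {intercept:.3f}")
print(f"small-study effect (SE slope):    {slope:.3f}")
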
Effect Sizes, Power, and Biases in Intelligence Research: A Meta-Meta-Analysis
TLDR: It is concluded that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.

References

Showing 1-10 of 204 references
What Should Researchers Expect When They Replicate Studies? A Statistical View of Replicability in Psychological Science
  • Prasad Patil, R. Peng, J. Leek
  • Perspectives on Psychological Science, 2016
TLDR: The results of the Reproducibility Project: Psychology can be viewed as statistically consistent with what one might expect when performing a large-scale replication experiment.
The Crisis of Confidence in Research Findings in Psychology: Is Lack of Replication the Real Problem? Or Is It Something Else?
There have been frequent expressions of concern over the supposed failure of researchers to conduct replication studies. But the large number of meta-analyses in our literatures shows that…
Continuously Cumulating Meta-Analysis and Replicability
TLDR: This work presents a nontechnical introduction to the continuously cumulating meta-analysis (CCMA) framework, explains how it can be used to assess replicability and, more generally, quantitative evidence from numerous studies, and presents examples and simulation results showing that combining evidence across studies can yield better results than considering single studies in isolation.
Researchers’ Intuitions About Power in Psychological Research
TLDR: A survey of published research psychologists found large discrepancies between their reported preferred level of power and the actual power of their studies; the authors recommend that researchers conduct and report formal power analyses for their studies.
Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size
TLDR: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology.
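
This diagnostic is easy to demonstrate on simulated data: if only significant results get published, small studies must show larger effects to clear the significance bar, so effect size and sample size end up negatively correlated among published studies. The simulation below is purely illustrative; every number in it is an assumption.

# Simulated illustration of the effect-size vs. sample-size diagnostic.
# Selective publication of significant results induces a negative
# correlation among the published studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n_studies = 0.2, 500                 # assumed true effect and study count

ns = rng.integers(20, 200, size=n_studies)   # per-group sample sizes
se = np.sqrt(2.0 / ns)                       # approximate SE of Cohen's d
d_obs = rng.normal(true_d, se)               # observed effect sizes

published = d_obs / se > 1.96                # keep only significant positive results

r_all, _ = stats.spearmanr(ns, d_obs)
r_pub, _ = stats.spearmanr(ns[published], d_obs[published])
print(f"all studies:    r = {r_all:+.2f}")   # near zero
print(f"published only: r = {r_pub:+.2f}")   # markedly negative
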
Estimating the reproducibility of psychological science
TLDR: A large-scale assessment suggests that experimental reproducibility in psychology leaves a lot to be desired, and correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
Replications in Psychology Research
TLDR: It was found that the majority of replications in psychology journals reported similar findings to their original studies (i.e., they were successful replications); however, replications were significantly less likely to be successful when there was no overlap in authorship between the original and replicating articles.
Do Studies of Statistical Power Have an Effect on the Power of Studies?
The long-term impact of studies of statistical power is investigated using J. Cohen's (1962) pioneering work as an example. We argue that the impact is nil; the power of studies in the same journal…
The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power
TLDR: The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%, and show that there is considerable variation among journals in sample sizes and power of the studies they publish.
Statistical power of psychological research: what have we gained in 20 years?
  • J. Rossi
  • Journal of Consulting and Clinical Psychology, 1990
TLDR: The implications of these results concerning the proliferation of Type I errors in the published literature, the failure of replication studies, and the interpretation of null (negative) results are emphasized.