p-Curve and Effect Size

@article{Simonsohn2014pCurveAE,
  title={p-Curve and Effect Size},
  author={Uri Simonsohn and Leif D. Nelson and Joseph P. Simmons},
  journal={Perspectives on Psychological Science},
  year={2014},
  volume={9},
  pages={666--681}
}
Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We… 
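
The estimation idea in the abstract can be sketched in a few lines of code: pick the effect size whose implied distribution of significant p values best matches the observed one. The sketch below is an illustration under simplifying assumptions, not the authors' published implementation: it assumes equal-n two-sample t tests, directionally consistent results significant at p < .05, a single common true effect d, and it uses the Kolmogorov-Smirnov distance between the conditional "pp values" and the uniform distribution as the loss; the function names and the numbers in the example are hypothetical.

# Illustrative Python sketch of p-curve-style effect-size estimation (assumptions above).
import numpy as np
from scipy import stats, optimize

def pp_values(t_obs, df, n_per_cell, d):
    # Probability of a result at least as extreme as t_obs, conditional on p < .05,
    # under the noncentral t distribution implied by effect size d (two-sample test).
    ncp = d * np.sqrt(n_per_cell / 2.0)        # noncentrality for equal cells of size n
    t_crit = stats.t.ppf(0.975, df)            # two-tailed .05 cutoff
    power = stats.nct.sf(t_crit, df, ncp)      # P(significant | d)
    return stats.nct.sf(t_obs, df, ncp) / power

def ks_loss(d, t_obs, df, n_per_cell):
    # Distance between the observed pp values and the uniform distribution they
    # should follow if d were the true underlying effect.
    pp = pp_values(np.asarray(t_obs, float), np.asarray(df, float),
                   np.asarray(n_per_cell, float), d)
    return stats.kstest(pp, "uniform").statistic

def estimate_effect_size(t_obs, df, n_per_cell):
    # Effect size whose implied p-curve best matches the observed significant results.
    res = optimize.minimize_scalar(ks_loss, bounds=(0.0, 2.0), method="bounded",
                                   args=(t_obs, df, n_per_cell))
    return res.x

# Hypothetical example: three significant two-sample t tests, 20 participants per cell.
d_hat = estimate_effect_size(t_obs=[2.10, 2.45, 2.80], df=[38, 38, 38], n_per_cell=20)
print(f"p-curve based estimate of d: {d_hat:.2f}")

Under the true effect the pp values are uniformly distributed, so the d that minimizes the distance is the estimate: a right-skewed p-curve pulls the estimate up, while a flat or left-skewed curve pulls it toward zero.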

Citations

Supplemental Material for p-Hacking and Publication Bias Interact to Distort Meta-Analytic Effect Size Estimates
Science depends on trustworthy evidence. Thus, a biased scientific record is of questionable value because it impedes scientific progress, and the public receives advice on the basis of unreliable …
Meta-analysis using effect size distributions of only statistically significant studies.
Publication bias threatens the validity of meta-analytic results and leads to overestimation of the effect size in traditional meta-analysis. This particularly applies to meta-analyses that feature …
Conducting Meta-Analyses Based on p Values
TLDR
It is shown that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size, which may result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking.
Effect Size Estimation From t-Statistics in the Presence of Publication Bias: A Brief Review of Existing Approaches With Some Extensions
Publication bias hampers the estimation of true effect sizes. Specifically, effect sizes are systematically overestimated when studies report only significant results. In this paper we show how this …
Bayesian evaluation of effect size after replicating an original study
TLDR
A Bayesian meta-analysis method called snapshot hybrid is developed that is easy to use and understand; it quantifies the amount of evidence in favor of a zero, small, medium, and large effect, and it adjusts for publication bias by taking into account that the original study is statistically significant.
The vast majority of published results in the literature are statistically significant, which raises concerns about their reliability. The Reproducibility Project Psychology (RPP) and Experimental …
p-Hacking and publication bias interact to distort meta-analytic effect size estimates.
TLDR
A large-scale simulation study is offered to elucidate how p-hacking and publication bias distort meta-analytic effect size estimates under a broad array of circumstances reflecting the realities of a variety of research areas; the results suggest that policies need to make the prevention of publication bias a top priority.
Z-Curve: A Method for Estimating Replicability Based on Test Statistics in Original Studies
In recent years, the replicability of original findings published in psychology journals has been questioned. A key concern is that selection for significance inflates observed effect sizes and …
Statistical methods for replicability assessment
Large-scale replication studies like the Reproducibility Project: Psychology (RP:P) provide invaluable systematic data on scientific replicability, but most analyses and interpretations of the data …
How to Detect Publication Bias in Psychological Research
Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size …

References

Showing 1–10 of 44 references
Publication decisions revisited: The effect of the outcome of statistical tests on the decision to publish and vice versa
TLDR
Evidence is presented that published results of scientific investigations are not a representative sample of the results of all scientific studies, and it is indicated that practices leading to publication bias have not changed over a period of 30 years.
P-Curve: A Key to the File Drawer
TLDR
By telling us whether the authors can rule out selective reporting as the sole explanation for a set of findings, p-curve offers a solution to the age-old inferential problems caused by file-drawers of failed studies and analyses.
Estimating effect size: Bias resulting from the significance criterion in editorial decisions
Experiments that find larger differences between groups than actually exist in the population are more likely to pass stringent tests of significance and be published than experiments that find … (a small simulation of this selection effect is sketched after this reference list).
A Nonparametric “Trim and Fill” Method of Accounting for Publication Bias in Meta-Analysis
Meta-analysis collects and synthesizes results from individual studies to estimate an overall effect size. If published studies are chosen, say through a literature review, then an inherent …
Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis.
TLDR
These are simple rank-based data augmentation techniques, which formalize the use of funnel plots, which provide effective and relatively powerful tests for evaluating the existence of publication bias.
Publication Decisions and their Possible Effects on Inferences Drawn from Tests of Significance—or Vice Versa
There is some evidence that in fields where statistical tests of significance are commonly used, research which yields nonsignificant results is not published. Such research being unknown to …
Power failure: why small sample size undermines the reliability of neuroscience
TLDR
It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
Publication bias in meta-analysis: its causes and consequences.
TLDR
The design of the meta-analysis itself, and the studies included in it, are shown to be important among a number of sources of publication bias.
Do Studies of Statistical Power Have an Effect on the Power of Studies?
The long-term impact of studies of statistical power is investigated using J. Cohen's (1962) pioneering work as an example. We argue that the impact is nil; the power of studies in the same journal …
Investigating variation in replicability: A “Many Labs” replication project
Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of 13 classic and contemporary effects across 36 …
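
The Lane and Dunlap entry above, like several other abstracts in this list, describes the mechanism that motivates all of these corrections: when only significant results are published, the published effect sizes are an inflated sample of the truth. A minimal, purely illustrative simulation of that selection effect is sketched below in Python; the true effect, cell size, number of simulated studies, and the p < .05 publication rule are assumptions chosen for illustration and are not taken from any of the cited papers.

# Illustrative simulation: selecting on significance inflates published effect sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n, reps = 0.2, 20, 20_000          # assumed true effect, cell size, simulated studies
all_d, published_d = [], []
for _ in range(reps):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_d, 1.0, n)
    t, p = stats.ttest_ind(treatment, control)
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2.0)
    d = (treatment.mean() - control.mean()) / pooled_sd   # Cohen's d for this study
    all_d.append(d)
    if p < 0.05 and t > 0:                 # only significant results in the predicted direction "get published"
        published_d.append(d)

print(f"true d: {true_d}")
print(f"mean d across all simulated studies: {np.mean(all_d):.2f}")
print(f"mean d across 'published' studies:   {np.mean(published_d):.2f}")

With these assumptions the mean of the "published" studies lands well above the true effect, which is the bias that p-curve, p-uniform, trim and fill, and the other methods listed on this page attempt to undo.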