p-Curve and Effect Size

@article{Simonsohn2014pCurveAE,
  title={p-Curve and Effect Size},
  author={Uri Simonsohn and Leif D. Nelson and Joseph P. Simmons},
  journal={Perspectives on Psychological Science},
  year={2014},
  volume={9},
  pages={666--681}
}
Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We… 
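
To make the abstract's core claim concrete (the shape of the distribution of significant p values depends only on the true effect and the sample sizes), here is a minimal sketch of one way such an estimator can work, assuming two-sample t-tests with equal cell sizes and a Kolmogorov-Smirnov loss. The function names and toy inputs are hypothetical; this illustrates the idea rather than reproducing the authors' implementation.

```python
# Minimal sketch (not the authors' code) of p-curve-based effect size estimation
# for two-sample t-tests with equal cell sizes. Idea: under the true effect size d,
# significant results re-expressed as probabilities conditional on significance
# ("pp-values") are uniformly distributed, so we search for the d that makes the
# observed pp-values look most uniform (Kolmogorov-Smirnov distance).
import numpy as np
from scipy.stats import nct, t as t_dist, kstest
from scipy.optimize import minimize_scalar

def pp_values(t_obs, n_per_cell, d):
    """pp-value of each observed t, given candidate effect size d and p < .05 (two-tailed)."""
    t_obs = np.asarray(t_obs, dtype=float)
    n = np.asarray(n_per_cell, dtype=float)
    df = 2 * n - 2
    ncp = d * np.sqrt(n / 2)                # noncentrality of a two-sample t-test
    t_crit = t_dist.ppf(0.975, df)          # two-tailed .05 critical value
    power = 1 - nct.cdf(t_crit, df, ncp)    # P(significant | d, n)
    return (1 - nct.cdf(t_obs, df, ncp)) / power

def estimate_d(t_obs, n_per_cell):
    """Effect size whose implied p-curve best fits the observed significant t-tests."""
    loss = lambda d: kstest(pp_values(t_obs, n_per_cell, d), 'uniform').statistic
    return minimize_scalar(loss, bounds=(0.0, 2.0), method='bounded').x

# Hypothetical inputs: t-values and per-cell sample sizes of published, significant studies.
t_obs = [2.23, 2.58, 2.10, 3.01, 2.45]
n_per_cell = [20, 25, 30, 22, 28]
print(f"p-curve effect size estimate: d = {estimate_d(t_obs, n_per_cell):.2f}")
```

Restricting the search to a bounded range of d is a simplification; in practice one would also check that the observed p-curve is right-skewed before interpreting the point estimate.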

Supplemental Material for p-Hacking and Publication Bias Interact to Distort Meta-Analytic Effect Size Estimates

Science depends on trustworthy evidence. Thus, a biased scientific record is of questionable value because it impedes scientific progress, and the public receives advice on the basis of unreliable evidence.

Effect Size Estimation From t-Statistics in the Presence of Publication Bias: A Brief Review of Existing Approaches With Some Extensions

Publication bias hampers the estimation of true effect sizes. Specifically, effect sizes are systematically overestimated when studies report only significant results. In this paper we show how this…

Bayesian evaluation of effect size after replicating an original study

The vast majority of published results in the literature are statistically significant, which raises concerns about their reliability. The Reproducibility Project Psychology (RPP) and Experimental…

p-Hacking and publication bias interact to distort meta-analytic effect size estimates.

A large-scale simulation study elucidates how p-hacking and publication bias distort meta-analytic effect size estimates under a broad array of circumstances reflecting a variety of research areas, and suggests that policies should make the prevention of publication bias a top priority.

Z-Curve: A Method for Estimating Replicability Based on Test Statistics in Original Studies

In recent years, the replicability of original findings published in psychology journals has been questioned. A key concern is that selection for significance inflates observed effect sizes and…

How to Detect Publication Bias in Psychological Research

Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates.

Examining publication bias—a simulation-based evaluation of statistical tests on publication bias

The FAT is recommended as a test for publication bias in standard meta-analyses with no or only small effect heterogeneity; the TES is the first alternative to the FAT if two-sided publication bias is suspected, as well as under p-hacking.

A historical review of publication bias

An historical account of seminal contributions by the evidence synthesis community is offered, with an emphasis on the parallel development of graph-based and selection model approaches.

Evidence of Experimental Bias in the Life Sciences: Why We Need Blind Data Recording

Evidence that blind protocols are uncommon in the life sciences and that nonblind studies tend to report higher effect sizes and more significant p-values is found using text mining and a literature review.

Interpreting t-Statistics Under Publication Bias: Rough Rules of Thumb

A key issue is how to interpret t-statistics when publication bias is present. In this paper we propose a set of rough rules of thumb to assist readers in interpreting t-values in published…
...

References

Showing 1-10 of 45 references

Publication decisions revisited: the effect of the outcome of statistical tests on the decision to publish and vice versa

Evidence is presented that published results of scientific investigations are not a representative sample of the results of all scientific studies, and that practices leading to publication bias have not changed over a period of 30 years.

P-Curve: A Key to the File Drawer

By telling us whether selective reporting can be ruled out as the sole explanation for a set of findings, p-curve offers a solution to the age-old inferential problems caused by file drawers of failed studies and analyses.

Estimating effect size: Bias resulting from the significance criterion in editorial decisions

Experiments that find larger differences between groups than actually exist in the population are more likely to pass stringent tests of significance and be published than experiments that find smaller differences.

A Nonparametric “Trim and Fill” Method of Accounting for Publication Bias in Meta-Analysis

Meta-analysis collects and synthesizes results from individual studies to estimate an overall effect size. If published studies are chosen, say through a literature review, then an inherent…

Publication Decisions and their Possible Effects on Inferences Drawn from Tests of Significance—or Vice Versa

There is some evidence that in fields where statistical tests of significance are commonly used, research which yields nonsignificant results is not published. Such research being unknown to…

Trim and Fill: A Simple Funnel‐Plot–Based Method of Testing and Adjusting for Publication Bias in Meta‐Analysis

These simple rank-based data augmentation techniques formalize the use of funnel plots and provide effective and relatively powerful tests for evaluating the existence of publication bias.

Power failure: why small sample size undermines the reliability of neuroscience

It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.

Publication bias in meta-analysis: its causes and consequences.

Do Studies of Statistical Power Have an Effect on the Power of Studies?

The long-term impact of studies of statistical power is investigated using J. Cohen's (1962) pioneering work as an example. We argue that the impact is nil; the power of studies in the same journal…

Investigating Variation in Replicability: A “Many Labs” Replication Project

Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of thirteen classic and contemporary effects across…