Accuracy of Effect Size Estimates from Published Psychological Research

@article{BrandAccuracy,
  title={Accuracy of Effect Size Estimates from Published Psychological Research},
  author={Andrew Brand and Michael T. Bradley and Lisa A. Best and George Valentin Stoica},
  journal={Perceptual and Motor Skills},
  pages={645--649}
}
A Monte-Carlo simulation was used to model the biasing of effect sizes in published studies. The findings from the simulation indicate that, when a predominant bias to publish studies with statistically significant results is coupled with inadequate statistical power, effect sizes will be overestimated. The consequences such overestimation has for meta-analyses and power analyses are highlighted and discussed, along with measures that can be taken to reduce…
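The mechanism the abstract describes can be sketched in a few lines of Python. This is an illustrative toy, not the authors' actual simulation: the true effect, group size, and significance cutoff below are assumptions chosen to make the bias visible. Two-group studies with a modest true standardized effect are drawn repeatedly, and only those crossing a crude p < .05 threshold are treated as "published".

```python
import random
import statistics

def simulate_published_effect(true_d=0.3, n=20, n_studies=20000, seed=1):
    """Mean Cohen's d among simulated studies that reach significance.

    Draws two-group studies (n per group) with true standardized effect
    true_d, and keeps only those whose two-sample t statistic exceeds a
    crude two-tailed cutoff of |t| > 2.0, mimicking a bias to publish
    statistically significant results."""
    rng = random.Random(seed)
    published = []
    for _ in range(n_studies):
        treat = [rng.gauss(true_d, 1.0) for _ in range(n)]
        ctrl = [rng.gauss(0.0, 1.0) for _ in range(n)]
        # Pooled-SD Cohen's d; for equal groups, t = d * sqrt(n / 2)
        sp = ((statistics.variance(treat) + statistics.variance(ctrl)) / 2) ** 0.5
        d = (statistics.mean(treat) - statistics.mean(ctrl)) / sp
        t = d * (n / 2) ** 0.5
        if abs(t) > 2.0:  # crude approximation of p < .05
            published.append(d)
    return statistics.mean(published)
```

With `true_d=0.3` and 20 participants per group (power well below the conventional .80), the mean of the "published" d values comes out far above the true 0.3, which is the overestimation the simulation in the paper demonstrates.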


Accuracy of Effect Size Estimates From Published Psychological Experiments Involving Multiple Trials
Simulations showed that observed effect size averages, and the power to accept these estimates as statistically significant, increased markedly with the number of trials or items.
Multiple Trials May Yield Exaggerated Effect Size Estimates
Through a series of Monte-Carlo simulations, this article describes the effects of multiple trials or items on effect size estimates when the averages and aggregates of a dependent measure are analyzed.
Sweeping recommendations regarding effect size and sample size can miss important nuances: A comment on “A comprehensive review of reporting practices in psychological journals”
Statistical significance tests done with low statistical power levels can result in reports of exaggerated effect sizes. Funnel graphs can show these exaggerations for a given area of research by
Interpreting Effect Size Estimates through Graphic Analysis of Raw Data Distributions
This paper considers and simulates cases where graphical analyses reveal distortion in effect size estimates, and highlights the value of graphing data to interpret effect size estimates.
The essential guide to effect sizes : statistical power, meta-analysis, and the interpretation of research results
This book discusses effect sizes, meta-analysis, and the interpretation of research results, and addresses the role of sample size in power analysis.
Sample-Size Planning for More Accurate Statistical Power: A Method Adjusting Sample Effect Sizes for Publication Bias and Uncertainty
This work presents an alternative approach that adjusts sample effect sizes for bias and uncertainty, and demonstrates its effectiveness for several experimental designs.
The Precision of Effect Size Estimation From Published Psychological Research
Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time; the theoretical implications are discussed along with ways of reducing CI widths and thus improving the precision of effect size estimation.
Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs
A practical primer on how to calculate and report effect sizes for t-tests and ANOVAs so that effect sizes can be used in a priori power analyses and meta-analyses, with a detailed overview of the similarities and differences between within- and between-subjects designs.
Alpha Values as a Function of Sample Size, Effect Size, and Power: Accuracy over Inference
It was evident that sample sizes in most psychological studies are adequate for large effect sizes (defined at .8), but it is doubtful whether these ideal levels of alpha and power have generally been achieved for medium effect sizes in actual research, since 170 participants would be required.
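The 170-participant figure quoted above is consistent with the standard normal-approximation sample size formula, n per group = 2·((z_{α/2} + z_β)/d)². The α = .05, power = .90 convention used below is an assumption for illustration, since the snippet does not state which "ideal levels" were meant:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample test detecting a standardized effect size d."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Medium effect (d = .5) at alpha = .05 and power = .90:
# 85 per group, i.e. 170 participants in total.
print(2 * n_per_group(0.5, power=0.90))
```

The normal approximation slightly understates the exact t-based requirement, but it is close enough to show where round numbers like 170 come from.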


Estimating effect size: Bias resulting from the significance criterion in editorial decisions
Experiments that find larger differences between groups than actually exist in the population are more likely to pass stringent tests of significance and be published than experiments that find
Publication decisions revisited: the effect of the outcome of statistical tests on the decision to publish
Evidence is presented that published results of scientific investigations are not a representative sample of the results of all scientific studies, and it is indicated that practices leading to publication bias have not changed over a period of 30 years.
Diagnosing Estimate Distortion Due to Significance Testing in Literature on Detection of Deception
A subset of studies shows support for predicted small to medium effects on different physiological measures, individual differences, and condition manipulations, suggesting that effect sizes from published values of t, F, and z are exaggerations.
Finding the Missing Science : The Fate of Studies Submitted for Review by a Human Subjects Committee
Publication bias, including prejudice against the null hypothesis, and other biasing filters may operate on researchers as well as journal editors and reviewers. A survey asked 33 psychology
Do Studies of Statistical Power Have an Effect on the Power of Studies?
The long-term impact of studies of statistical power is investigated using J. Cohen's (1962) pioneering work as an example. We argue that the impact is nil; the power of studies in the same journal
Statistical power of psychological research: what have we gained in 20 years?
  • J. Rossi
  • Psychology
    Journal of consulting and clinical psychology
  • 1990
The implications of these results concerning the proliferation of Type I errors in the published literature, the failure of replication studies, and the interpretation of null (negative) results are emphasized.
To increase power in randomized clinical trials without increasing sample size.
  • H. Kraemer
  • Psychology
    Psychopharmacology bulletin
  • 1991
It is possible to increase power in RCTs in a variety of ways without increasing sample size, in essence by increasing effect size by decreasing within-group variance.
Publication Decisions and their Possible Effects on Inferences Drawn from Tests of Significance—or Vice Versa
Abstract There is some evidence that in fields where statistical tests of significance are commonly used, research which yields nonsignificant results is not published. Such research being unknown to
The persistence of underpowered studies in psychological research: causes, consequences, and remedies.
Underpowered studies persist in the psychological literature; their effects on efforts to create a cumulative science are examined, and the "curse of multiplicities" plays a central role.
Statistical Power Analysis for the Behavioral Sciences
Contents: Prefaces. The Concepts of Power Analysis. The t-Test for Means. The Significance of a Product Moment r_s. Differences Between Correlation Coefficients. The Test That a