Estimating Effect Size Under Publication Bias: Small Sample Properties and Robustness of a Random Effects Selection Model

@article{Hedges1996EstimatingES,
  title={Estimating Effect Size Under Publication Bias: Small Sample Properties and Robustness of a Random Effects Selection Model},
  author={Larry V. Hedges and Jack L. Vevea},
  journal={Journal of Educational Statistics},
  year={1996},
  volume={21},
  pages={299--332}
}
When there is publication bias, studies yielding large p values, and hence small effect estimates, are less likely to be published, which leads to biased estimates of effects in meta-analysis. We investigate a selection model based on one-tailed p values in the context of a random effects model. The procedure both models the selection process and corrects for the consequences of selection on estimates of the mean and variance of effect parameters. A test of the statistical significance of… 
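To make the method description above concrete, here is a minimal, illustrative sketch (not the authors' code) of a weighted-likelihood selection model for random-effects meta-analysis in the spirit of the paper. It assumes observed effects t_i ~ N(theta_i, v_i) with theta_i ~ N(mu, tau^2), and a relative probability of publication that depends on the one-tailed p-value only through which interval, defined by cutpoints, the p-value falls in. The function names, the default cutpoints, and the choice of SciPy's Nelder-Mead optimizer are assumptions made for illustration, not details taken from the paper.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm


def neg_log_likelihood(params, t, v, cutpoints):
    # params = (mu, log tau^2, log w_2, ..., log w_k); the first weight w_1 is fixed at 1.
    k = len(cutpoints)
    mu = params[0]
    tau2 = np.exp(params[1])
    w = np.concatenate(([1.0], np.exp(params[2:])))
    s = np.sqrt(v + tau2)                      # marginal SD under the random-effects model

    # One-tailed p-value of each observed study and the index of its p-value interval.
    p = 1.0 - norm.cdf(t / np.sqrt(v))
    idx = np.clip(np.searchsorted(cutpoints, p, side="left"), 0, k - 1)

    # Effect-size thresholds for the cutpoints: p <= a  <=>  t >= sqrt(v) * z_{1-a}.
    z = norm.ppf(1.0 - np.asarray(cutpoints))              # shape (k,)
    c = np.sqrt(v)[:, None] * z[None, :]                   # shape (n, k)

    # Probability of each p-value interval under t_i ~ N(mu, v_i + tau^2).
    upper = np.column_stack([np.ones(t.size),
                             norm.cdf((c[:, :-1] - mu) / s[:, None])])
    lower = norm.cdf((c - mu) / s[:, None])
    interval_prob = upper - lower                          # shape (n, k), rows sum to 1

    # Normalizing constant A_i = sum_j w_j * Pr(p in interval j).
    A = interval_prob @ w

    loglik = np.log(w[idx]) + norm.logpdf(t, loc=mu, scale=s) - np.log(A)
    return -np.sum(loglik)


def fit_selection_model(t, v, cutpoints=(0.05, 0.10, 0.50, 1.00)):
    # t: observed effect sizes; v: their within-study (conditional) variances.
    t, v = np.asarray(t, float), np.asarray(v, float)
    cutpoints = list(cutpoints)
    start = np.concatenate(([np.average(t, weights=1.0 / v), np.log(0.1)],
                            np.zeros(len(cutpoints) - 1)))
    res = minimize(neg_log_likelihood, start, args=(t, v, cutpoints),
                   method="Nelder-Mead", options={"maxiter": 20000})
    mu_hat, tau2_hat = res.x[0], np.exp(res.x[1])
    w_hat = np.concatenate(([1.0], np.exp(res.x[2:])))
    return mu_hat, tau2_hat, w_hat

In this sketch the first interval's weight is fixed at 1, so the remaining weights are relative publication probabilities and the model is identifiable; fixing all weights in advance rather than estimating them would turn the same likelihood into a sensitivity analysis for a hypothesized selection pattern.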
Effect Size Estimation From t-Statistics in the Presence of Publication Bias: A Brief Review of Existing Approaches With Some Extensions
Publication bias hampers the estimation of true effect sizes. Specifically, effect sizes are systematically overestimated when studies report only significant results. In this paper we show how this
A general linear model for estimating effect size in the presence of publication bias
When the process of publication favors studies with small p-values, and hence large effect estimates, combined estimates from many studies may be biased. This paper describes a model for estimation of
Meta-analysis using effect size distributions of only statistically significant studies.
Publication bias threatens the validity of meta-analytic results and leads to overestimation of the effect size in traditional meta-analysis. This particularly applies to meta-analyses that feature
Estimating Population Mean Power Under Conditions of Heterogeneity and Selection for Significance
In scientific fields that use significance tests, statistical power is important for successful replications of significant results because it is the long-run success rate in a series of exact
Detecting publication bias in random effects meta-analysis: An empirical comparison of statistical methods
The overall findings indicate that publication bias notably impacts the meta-analysis effect size and variance estimates.
A Bayesian “Fill-In” Method for Correcting for Publication Bias in Meta-Analysis
A Bayesian fill-in meta-analysis method is proposed for adjusting for publication bias and estimating population effect size under different assumed selection mechanisms; its performance was relatively sensitive to the assumed publication-bias mechanism.
Beyond Publication Bias
This review considers several meta-regression and graphical methods that can differentiate genuine empirical effect from publication bias. Publication selection exists when editors, reviewers, or
Sensitivity methods for publication bias in a meta-analysis
This thesis presents new methods that involve selection functions that aim to make as few strong assumptions about selection as possible, including the use of a non-parametric permutation test and the use of a step selection function.
How to Detect Publication Bias in Psychological Research
Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size
Using Monte Carlo experiments to select meta‐analytic estimators
It is shown how Monte Carlo analysis of meta‐analytic estimators can be used to select estimators for specific research situations and that the size of the meta‐analyst's sample and effect heterogeneity are important determinants of relative estimator performance.

References

Modeling publication selection effects in meta-analysis
Publication selection effects arise in meta-analysis when the effect magnitude estimates are observed in (available from) only a subset of the studies that were actually conducted and the probability
Selection Models and the File Drawer Problem
This paper uses selection models, or weighted distributions, to deal with one source of bias, namely the failure to report studies that do not yield statistically significant results, and applies selection models to two approaches that have been suggested for correcting the bias.
A general linear model for estimating effect size in the presence of publication bias
When the process of publication favors studies with small p-values, and hence large effect estimates, combined estimates from many studies may be biased. This paper describes a model for estimation of
Estimation of Effect Size under Nonrandom Sampling: The Effects of Censoring Studies Yielding Statistically Insignificant Mean Differences
Quantitative research synthesis usually involves the combination of estimates of the standardized mean difference (effect size) derived from independent research studies. In some cases, effect size
Operating characteristics of a rank correlation test for publication bias.
An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic
Estimating effect size: Bias resulting from the significance criterion in editorial decisions
Experiments that find larger differences between groups than actually exist in the population are more likely to pass stringent tests of significance and be published than experiments that find
An Approach for Assessing Publication Bias Prior to Performing a Meta-Analysis
A semi-parametric method is developed for assessing publication bias prior to performing a meta-analysis. Summary estimates for the individual studies in the meta-analysis are assumed to have known
Publication Decisions and their Possible Effects on Inferences Drawn from Tests of Significance—or Vice Versa
Abstract There is some evidence that in fields where statistical tests of significance are commonly used, research which yields nonsignificant results is not published. Such research being unknown to
Publication bias : a problem in interpreting medical data
Publication bias, the phenomenon in which studies with positive results are more likely to be published than studies with negative results, is a serious problem in the interpretation of scientific