Beyond Publication Bias

@article{Stanley2005BeyondPB,
  title={Beyond Publication Bias},
  author={T. D. Stanley},
  journal={Microeconomic Theory eJournal},
  year={2005}
}
This review considers several meta-regression and graphical methods that can differentiate genuine empirical effect from publication bias. Publication selection exists when editors, reviewers, or researchers have a preference for statistically significant results. Because all areas of empirical research are susceptible to publication selection, any average or tally of significant/insignificant studies is likely to be biased and potentially misleading. Meta-regression analysis can see through… 
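The kind of meta-regression the abstract refers to can be illustrated with an Egger-type FAT-PET regression of reported effects on their standard errors. The sketch below is only a minimal illustration on synthetic data; the variable names and simulated coefficients are assumptions, not taken from the paper.

# Minimal sketch of an Egger/FAT-PET style meta-regression (illustrative only).
# effect_i = b0 + b1 * se_i + e_i, estimated by WLS with weights 1/se_i^2:
#   FAT: test b1 = 0 (funnel asymmetry, i.e. publication selection)
#   PET: test b0 = 0 (a genuine underlying effect beyond selection)
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50
se = rng.uniform(0.05, 0.5, n)                # reported standard errors
effect = 0.2 + 1.0 * se + rng.normal(0, se)   # toy data with selection-like asymmetry

X = sm.add_constant(se)                       # columns: [1, se]
fit = sm.WLS(effect, X, weights=1.0 / se**2).fit()
print(fit.summary())                          # intercept -> PET, slope on se -> FAT
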
Meta-Regression Methods for Detecting and Estimating Empirical Effects in the Presence of Publication Selection
TLDR
This study investigates the small‐sample performance of meta‐regression methods for detecting and estimating genuine empirical effects in research literatures tainted by publication selection and finds them to be robust against publication selection.
The conditional nature of publication bias: a meta-regression analysis
Abstract Publication bias is pervasive in social and behavioral sciences because journals and scholars tend to reward and be rewarded for statistically significant findings. However, the determinants
Detecting publication selection bias through excess statistical significance
TLDR
Simulations show that these excess statistical significance tests often outperform the conventional Egger test for publication selection bias and the three-parameter selection model.
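As a rough illustration of the excess-statistical-significance idea (not the specific tests proposed in this paper), the sketch below compares the observed number of significant results with the number expected from each study's approximate power at a candidate true effect; the data, 0.05 threshold, and choice of the fixed-effect mean as the candidate effect are all assumptions for demonstration.

# Toy excess-significance check on synthetic, selectively reported studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.4, 200)
y = rng.normal(0.1, se)                            # "conducted" studies, true effect 0.1
sig = np.abs(y / se) > 1.96
keep = sig | (rng.random(se.size) < 0.4)           # nonsignificant results under-reported
y, se = y[keep], se[keep]

mu_hat = np.sum(y / se**2) / np.sum(1 / se**2)     # fixed-effect mean as candidate "true" effect
power = (1 - stats.norm.cdf(1.96 - mu_hat / se)    # approx. power of a two-sided z-test at mu_hat
         + stats.norm.cdf(-1.96 - mu_hat / se))

n_obs = se.size
observed = int(np.sum(np.abs(y / se) > 1.96))      # significant results actually reported
expected = float(np.sum(power))                    # significant results expected given power
chi2, p = stats.chisquare([observed, n_obs - observed],
                          f_exp=[expected, n_obs - expected])
print(f"observed={observed}, expected={expected:.1f}, p={p:.3f}")
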
What Fuels Publication Bias?
Summary Significance tests were originally developed to enable more objective evaluations of research results. Yet the strong orientation towards statistical significance encourages biased results, a
Meta‐regression approximations to reduce publication selection bias
TLDR
A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal.
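The quadratic approximation described in this entry can be sketched as a WLS regression of effects on squared standard errors, with the intercept read as the selection-corrected estimate. The toy version below uses synthetic data and arbitrary coefficients; it only shows the shape of the estimator, not the paper's exact procedure.

# PEESE-style sketch: effect_i = b0 + b1 * se_i^2 + e_i, weighted by 1/se_i^2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
se = rng.uniform(0.05, 0.5, 80)
effect = 0.2 + 0.8 * se**2 + rng.normal(0, se)   # toy data

X = sm.add_constant(se**2)
peese = sm.WLS(effect, X, weights=1.0 / se**2).fit()
print("PEESE-style corrected effect (intercept):", peese.params[0])
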
Identifying genuine effects in observational research by means of meta-regressions
TLDR
It is shown that the meta-regression models here proposed systematically outperform the prior gold standard of meta-regression analysis of regression coefficients.
A kinked meta‐regression model for publication bias correction
TLDR
This paper proposes the Endogenous Kink (EK) meta-regression model as a novel method of publication bias correction, with a kink at the cutoff value of the standard error below which publication selection is unlikely.
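The kinked specification can be illustrated with a piecewise regression whose standard-error slope is zero below some cutoff and linear above it. In the actual EK model the cutoff is estimated endogenously; the sketch below simply fixes an illustrative value, so treat it as a rough picture rather than the proposed estimator.

# Kinked meta-regression sketch with a fixed (not endogenous) cutoff a.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
se = rng.uniform(0.02, 0.5, 100)
effect = 0.2 + np.where(se > 0.15, 1.2 * (se - 0.15), 0.0) + rng.normal(0, se)  # toy data

a = 0.15                                    # illustrative kink point
kink_term = np.maximum(se - a, 0.0)         # zero below the cutoff, linear above it
X = sm.add_constant(kink_term)
ek = sm.WLS(effect, X, weights=1.0 / se**2).fit()
print("kink-corrected effect (intercept):", ek.params[0])
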
The Identification and Prevention of Publication Bias in the Social Sciences and Economics
Systematic research reviews have become essential in all empirical sciences. However, the validity of research syntheses is threatened by the fact that not all studies on a given topic can be
Publication Selection Bias in Minimum-Wage Research? A Meta-Regression Analysis
Card and Krueger’s (1995a) meta-analysis of the employment effects of minimum wages challenged existing theory. Unfortunately, their meta-analysis confused publication selection with the absence of a
...
...

References

Showing 1-10 of 71 references
Publication and related bias in meta-analysis: power of statistical tests and prevalence in the literature.
Modeling publication selection effects in meta-analysis
Publication selection effects arise in meta-analysis when the effect magnitude estimates are observed in (available from) only a subset of the studies that were actually conducted and the probability
Selection Models and the File Drawer Problem
TLDR
This paper uses selection models, or weighted distributions, to deal with one source of bias, namely the failure to report studies that do not yield statistically significant results, and applies selection models to two approaches that have been suggested for correcting the bias.
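A toy two-parameter weight-function model in the spirit of this selection-model literature is sketched below: effects are treated as normal around a common mean, nonsignificant results are published with some relative probability w, and (mu, w) are estimated by maximum likelihood. The specification, seed, and names are illustrative assumptions, not the paper's own model.

# Toy weight-function ("file drawer") selection model fitted by maximum likelihood.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(4)
se = rng.uniform(0.05, 0.4, 200)
y = rng.normal(0.1, se)                        # conducted studies, true mean 0.1
sig = np.abs(y / se) > 1.96
keep = sig | (rng.random(se.size) < 0.3)       # nonsignificant results under-reported
y, se = y[keep], se[keep]

def neg_loglik(params):
    mu, w = params
    z_hi, z_lo = (1.96 * se - mu) / se, (-1.96 * se - mu) / se
    p_sig = 1 - stats.norm.cdf(z_hi) + stats.norm.cdf(z_lo)  # P(|z| > 1.96 | mu)
    norm_const = p_sig + w * (1 - p_sig)                     # per-study normaliser
    weight = np.where(np.abs(y / se) > 1.96, 1.0, w)
    dens = weight * stats.norm.pdf(y, loc=mu, scale=se) / norm_const
    return -np.sum(np.log(dens))

res = optimize.minimize(neg_loglik, x0=[0.0, 0.5], bounds=[(-2, 2), (1e-6, 1.0)])
print("selection-adjusted mean, relative publication prob:", res.x)
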
Estimating Effect Size Under Publication Bias: Small Sample Properties and Robustness of a Random Effects Selection Model
When there is publication bias, studies yielding large p values, and hence small effect estimates, are less likely to be published, which leads to biased estimates of effects in meta-analysis. We
A comparison of methods to detect publication bias in meta-analysis.
TLDR
Based on the empirical type I error rates, a regression of treatment effect on sample size, weighted by the inverse of the variance of the logit of the pooled proportion (using the marginal total) is the preferred method.
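The sample-size regression described here can be sketched loosely as a weighted regression of effects on study size. The version below uses generic continuous effects and plain inverse-variance weights rather than the logit-of-pooled-proportion weighting in the original comparison, so it only conveys the general shape of the test.

# Rough sketch: regress effect on sample size, weighted by inverse variance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_i = rng.integers(20, 500, 60)                 # toy per-study sample sizes
se = 1.0 / np.sqrt(n_i)
effect = 0.1 + rng.normal(0, se)

X = sm.add_constant(n_i.astype(float))          # effect_i = b0 + b1 * n_i + e_i
fit = sm.WLS(effect, X, weights=1.0 / se**2).fit()
print("slope p-value (association of effect with study size):", fit.pvalues[1])
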
Operating characteristics of a rank correlation test for publication bias.
An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic
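In the spirit of this adjusted rank-correlation test, the sketch below correlates variance-standardized deviations from the pooled mean with the sampling variances using Kendall's tau. The standardization follows the commonly cited form of the test, but the data and names are illustrative assumptions.

# Adjusted rank-correlation sketch on synthetic effects and variances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
v = rng.uniform(0.01, 0.25, 40)                 # toy sampling variances
y = rng.normal(0.1, np.sqrt(v))                 # toy effect estimates

w = 1.0 / v
y_bar = np.sum(w * y) / np.sum(w)               # fixed-effect pooled mean
v_star = v - 1.0 / np.sum(w)                    # variance of (y_i - y_bar)
t_star = (y - y_bar) / np.sqrt(v_star)          # standardized deviates

tau, p = stats.kendalltau(t_star, v)
print(f"Kendall tau = {tau:.3f}, p = {p:.3f}")
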
Empirical assessment of effect of publication bias on meta-analyses
Abstract Objective: To assess the effect of publication bias on the results and conclusions of systematic reviews and meta-analyses. Design: Analysis of published meta-analyses by trim and fill
Publication Bias: The "File-Drawer" Problem in Scientific Inference
Publication bias arises whenever the probability that a study is published depends on the statistical significance of its results. This bias, often called the file-drawer effect since the unpublished
Publication Decisions and their Possible Effects on Inferences Drawn from Tests of Significance—or Vice Versa
Abstract There is some evidence that in fields where statistical tests of significance are commonly used, research which yields nonsignificant results is not published. Such research being unknown to
Publication bias: a problem in interpreting medical data
Publication bias, the phenomenon in which studies with positive results are more likely to be published than studies with negative results, is a serious problem in the interpretation of scientific
...
...