Estimating the Difference Between Published and Unpublished Effect Sizes

@article{Polanin2016EstimatingTD,
  title={Estimating the Difference Between Published and Unpublished Effect Sizes},
  author={Joshua R. Polanin and Emily E. Tanner-Smith and Emily Alden Hennessy},
  journal={Review of Educational Research},
  year={2016},
  volume={86},
  pages={207--236}
}
Practitioners and policymakers rely on meta-analyses to inform decision making around the allocation of resources to individuals and organizations. It is therefore paramount to consider the validity of these results. A well-documented threat to the validity of research synthesis results is the presence of publication bias, a phenomenon where studies with large and/or statistically significant effects, relative to studies with small or null effects, are more likely to be published. We… 
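The comparison the abstract sets up, the average gap between published and unpublished effect sizes, can be illustrated with a small sketch: an inverse-variance weighted meta-regression with a publication-status dummy. This is an assumption-laden illustration (made-up data, a hypothetical function name), not the authors' actual analysis.

```python
# Hypothetical sketch: inverse-variance weighted meta-regression of
# effect size on publication status. Not the paper's pipeline; the
# data and function name are illustrative.
import numpy as np

def published_vs_unpublished(effects, variances, published):
    """Return the published-minus-unpublished difference and its SE."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)        # inverse-variance weights
    X = np.column_stack([np.ones_like(y),               # intercept
                         np.asarray(published, dtype=float)])  # 1 = published
    XtWX = X.T @ (X * w[:, None])
    beta = np.linalg.solve(XtWX, X.T @ (w * y))
    se = np.sqrt(np.diag(np.linalg.inv(XtWX)))
    return beta[1], se[1]

diff, se = published_vs_unpublished(
    effects=[0.40, 0.35, 0.55, 0.10, 0.05],
    variances=[0.02, 0.03, 0.01, 0.04, 0.05],
    published=[1, 1, 1, 0, 0])
print(f"published - unpublished = {diff:.3f} (SE = {se:.3f})")
```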

Citations

Do Published Studies Yield Larger Effect Sizes than Unpublished Studies in Education and Special Education? A Meta-review

Meta-analyses are used to make educational decisions in policy and practice. Publication bias refers to the extent to which published literature is more likely to have statistically significant…

Publication Bias in Special Education Meta-Analyses

Publication bias involves the disproportionate representation of studies with large and significant effects in the published research. Among other problems, publication bias results in inflated…

Evaluation of publication bias in response interruption and redirection: A meta-analysis.

An empirical evaluation of publication bias in response interruption and redirection (RIRD), an evidence-based ABA intervention for children diagnosed with autism spectrum disorder, finds that RIRD appears to be an effective intervention for challenging behavior maintained by nonsocial consequences.

Neglect of publication bias compromises meta-analyses of educational research

The results show that meta-analyses usually neglect publication bias adjustment, and it is argued that appropriate state-of-the-art adjustment should be attempted by default, yet one needs to take into account the uncertainty inherent in any meta-analytic inference under bias.

Comparing meta-analyses and preregistered multiple-laboratory replication projects

It is found that meta-analytic effect sizes are significantly different from replication effect sizes for 12 out of the 15 meta-replication pairs, and meta-analyses overestimate effect sizes by a factor of almost three.

Transparency and Reproducibility of Meta-Analyses in Psychology: A Meta-Review

It is argued that the field of psychology and research synthesis in general should require review authors to report these elements in a transparent and reproducible manner.

Testing for funnel plot asymmetry of standardized mean differences

This study examines problems that occur in meta-analyses of the standardized mean difference, a ubiquitous effect size measure in educational and psychological research, and assesses the Type I error rates of conventional tests of funnel plot asymmetry, as well as the likelihood ratio test from a three-parameter selection model.
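One concrete mechanism behind those Type I error problems: the conventional standard error of a standardized mean difference depends on the estimate itself, so effects and standard errors are correlated even when no selective publication has occurred. The sketch below contrasts the standard large-sample formula with a sample-size-only alternative of the kind examined in this literature; the function names are mine.

```python
# Why funnel-plot tests can misfire for standardized mean differences
# (SMDs): SE(d) is a function of d, inducing an artifactual
# effect-SE correlation. Formulas are the standard large-sample ones.
import numpy as np

def smd_se(d, n1, n2):
    """Conventional large-sample SE of an SMD (depends on d)."""
    return np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))

def smd_se_modified(n1, n2):
    """Sample-size-only SE, which removes the mechanical d-SE link."""
    return np.sqrt((n1 + n2) / (n1 * n2))

for d in (0.2, 0.8):
    print(d, smd_se(d, 25, 25), smd_se_modified(25, 25))
```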

Null Effects and Publication Bias in Special Education Research

Researchers sometimes conduct a study and find that the predicted relation between variables did not exist or that the intervention did not have a positive impact on student outcomes; these are…

A study of meta-analyses reporting quality in the large and expanding literature of educational technology

As the empirical literature in educational technology continues to grow, meta-analyses are increasingly being used to synthesise research to inform practice. However, not all meta-analyses are equal.
...

References

SHOWING 1-10 OF 131 REFERENCES

Sample Sizes and Effect Sizes are Negatively Correlated in Meta-Analyses: Evidence and Implications of a Publication Bias Against Nonsignificant Findings

Meta-analysis involves cumulating effects across studies in order to quantitatively summarize existing literatures. A recent finding suggests that the effect sizes reported in meta-analyses may be…
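The diagnostic behind that finding is simple to illustrate: within a single meta-analysis, correlate study sample sizes with their effect sizes, and read a markedly negative correlation as a symptom of small-study effects such as publication bias. A toy sketch with made-up numbers:

```python
# Illustrative only: a negative sample-size/effect-size correlation
# inside one meta-analysis is a small-study-effects warning sign.
import numpy as np
from scipy.stats import spearmanr

n = np.array([20, 35, 50, 80, 120, 200])            # study sample sizes
d = np.array([0.90, 0.70, 0.55, 0.40, 0.30, 0.25])  # effect sizes shrink with n

rho, p = spearmanr(n, d)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")    # strongly negative here
```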

Publication bias in psychological science: prevalence, methods for identifying and controlling, and implications for the use of meta-analyses.

Meta-analyses that included unpublished studies were more likely to show bias than those that did not, likely due to selection bias in searches for unpublished literature. Sources of publication bias and implications for the use of meta-analysis are discussed.

Bias in meta-analysis detected by a simple, graphical test

Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases. Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
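The regression version of this test (widely attributed to Egger and colleagues) fits in a few lines: regress the standardized effect, the estimate divided by its standard error, on precision, one over the standard error, and check whether the intercept departs from zero. A minimal sketch with illustrative data:

```python
# Egger-type regression test for funnel-plot asymmetry.
# Example effects and standard errors are made up.
import numpy as np
from scipy import stats

effects = np.array([0.50, 0.42, 0.38, 0.30, 0.65, 0.20])
se = np.array([0.25, 0.20, 0.15, 0.10, 0.30, 0.08])

z = effects / se                     # standardized effects
precision = 1.0 / se
res = stats.linregress(precision, z)
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(z) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p:.3f}")
```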

An exploratory test for an excess of significant findings

A test to explore biases stemming from the pursuit of nominal statistical significance was developed; it demonstrated a clear or possible excess of significant studies in 6 of 8 large meta-analyses and in the wide domain of neuroleptic treatments.
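The test's logic can be sketched under normal-theory assumptions of mine (a chosen plausible true effect, two-sided alpha of .05): compare the observed count of significant studies, O, against the count expected from each study's power, E.

```python
# Excess-significance sketch: O significant results observed versus
# E expected from study-level power at an assumed true effect.
# The assumed effect, data, and chi-square form are illustrative.
import numpy as np
from scipy import stats

def power_two_sided(true_effect, se, alpha=0.05):
    """Normal-theory power of a two-sided test given a study's SE."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    lam = true_effect / se
    return stats.norm.sf(z_crit - lam) + stats.norm.cdf(-z_crit - lam)

se = np.array([0.25, 0.20, 0.15, 0.30, 0.10])
significant = np.array([1, 1, 1, 0, 1])   # which studies reported p < .05
true_effect = 0.30                        # assumed plausible effect

E = power_two_sided(true_effect, se).sum()
O = significant.sum()
chi2 = (O - E) ** 2 / E + (O - E) ** 2 / (len(se) - E)
p = stats.chi2.sf(chi2, df=1)
print(f"O = {O}, E = {E:.2f}, chi2 = {chi2:.2f}, p = {p:.3f}")
```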

Why Most Published Research Findings Are False

Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true.

Robust variance estimation in meta‐regression with dependent effect size estimates

This paper provides an estimator of the covariance matrix of meta-regression coefficients that is applicable when there are clusters of internally correlated estimates, and demonstrates that the meta-regression coefficients are consistent and asymptotically normally distributed and that the robust variance estimator is valid even when the covariates are random.
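The sandwich idea behind such estimators can be sketched in a few lines. This is a bare-bones CR0-style version, without the small-sample corrections and weighting refinements developed in this line of work, with clusters of effect sizes nested in studies; all data and names are illustrative.

```python
# Bare-bones cluster-robust ("sandwich") variance for weighted
# meta-regression with dependent effect sizes. Illustrative only.
import numpy as np

def rve_meta_regression(y, X, w, cluster):
    """Weighted meta-regression with cluster-robust standard errors."""
    bread = np.linalg.inv(X.T @ (X * w[:, None]))   # (X'WX)^-1
    beta = bread @ (X.T @ (w * y))
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(cluster):
        i = cluster == c
        u = X[i].T @ (w[i] * resid[i])              # cluster score X_c'W_c e_c
        meat += np.outer(u, u)
    V = bread @ meat @ bread                        # sandwich
    return beta, np.sqrt(np.diag(V))

y = np.array([0.4, 0.5, 0.3, 0.2, 0.6, 0.1])
X = np.column_stack([np.ones(6), [0, 0, 1, 1, 0, 1]])   # intercept + moderator
w = 1.0 / np.array([0.02, 0.02, 0.03, 0.03, 0.01, 0.04])
cluster = np.array([1, 1, 2, 2, 3, 3])              # effects nested in studies
beta, se = rve_meta_regression(y, X, w, cluster)
print(beta, se)
```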

Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias

There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported.

Quantifying heterogeneity in a meta‐analysis

It is concluded that H and I², which can usually be calculated for published meta-analyses, are particularly useful summaries of the impact of heterogeneity, and one or both should be presented in published meta-analyses in preference to the test for heterogeneity.
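Both summaries are simple functions of Cochran's Q: H = sqrt(Q / (k - 1)) and I² = (Q - (k - 1)) / Q, truncated at zero. A minimal sketch with illustrative data:

```python
# Heterogeneity summaries from a fixed-effect fit: Cochran's Q,
# then H and I^2 per Higgins & Thompson. Example data are made up.
import numpy as np

def heterogeneity(effects, variances):
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    k = len(y)
    pooled = np.sum(w * y) / np.sum(w)        # fixed-effect mean
    Q = np.sum(w * (y - pooled) ** 2)         # Cochran's Q
    H = np.sqrt(Q / (k - 1))
    I2 = max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
    return Q, H, I2

Q, H, I2 = heterogeneity([0.2, 0.5, 0.35, 0.6], [0.02, 0.03, 0.025, 0.04])
print(f"Q = {Q:.2f}, H = {H:.2f}, I^2 = {100 * I2:.1f}%")
```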

Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles.

The reporting of trial outcomes is not only frequently incomplete but also biased and inconsistent with protocols. Published articles, as well as reviews that incorporate them, may therefore be unreliable and overestimate the benefits of an intervention.
...