Assessing publication bias in meta‐analyses in the presence of between‐study heterogeneity

Jaime L. Peters, Alex Sutton, David R. Jones, Keith R. Abrams, Lesley Rushton, Santiago G. Moreno
Journal of the Royal Statistical Society: Series A (Statistics in Society)
Summary.  Between‐study heterogeneity and publication bias are common features of a meta‐analysis and can be present simultaneously. When both are suspected, each must be considered in the assessment of the other. We consider extended funnel plot tests for detecting publication bias, and selection modelling and trim‐and‐fill methods for adjusting for publication bias in the presence of between‐study heterogeneity. These methods are applied to two example data sets. Results indicate that…
Detecting and adjusting for small‐study effects in meta‐analysis
Simulations and applications suggest that in the presence of strong selection both the trim-and-fill method and the Copas selection model may not fully eliminate bias, while regression-based approaches seem to be a promising alternative.
Using network meta‐analysis to evaluate the existence of small‐study effects in a network of interventions
This work suggests that network meta-regression be employed to account for small-study effects in a set of related meta-analyses, and describes the methods by re-analysing two published networks.
Funnel plots may show asymmetry in the absence of publication bias with continuous outcomes dependent on baseline risk: presentation of a new publication bias test
It is demonstrated that correlation between effect estimates and standard errors produces funnel plot asymmetry in the absence of publication bias for continuous outcomes that depend on baseline risk.
Detecting and correcting for publication bias in meta-analysis – A truncated normal distribution approach
This paper formulates publication bias as a truncated-distribution problem and proposes new parametric solutions that perform consistently well in both detecting and correcting for publication bias in a variety of situations.
The Effect of Publication Bias on the Q Test and Assessment of Heterogeneity
The Q test of homogeneity and the heterogeneity measures H² and I² are generally not valid when publication bias is present. A web application, Q-sense, is introduced that can be used to determine the impact of publication bias on the assessment of heterogeneity within a given meta-analysis and to assess the robustness of the meta-analytic estimate to publication bias.
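For readers unfamiliar with the quantities this entry discusses: Cochran's Q and I² are simple functions of inverse-variance weights. The following is a minimal illustrative sketch (the function name is ours, not from the paper, and no tie to Q-sense is implied):

```python
import numpy as np

def q_and_i2(effects, ses):
    """Cochran's Q and the I^2 heterogeneity statistic for a
    fixed-effect meta-analysis with inverse-variance weights."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2
    y_bar = np.sum(w * y) / np.sum(w)      # pooled fixed-effect estimate
    q = np.sum(w * (y - y_bar) ** 2)       # Cochran's Q statistic
    df = len(y) - 1
    # I^2: proportion of total variability attributable to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

For example, three identical study effects give Q = 0 and I² = 0, whatever their standard errors.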
Avoiding Bias in Publication Bias Research: The Value of “Null” Findings
Meta-analytic reviews are an important tool for advancing science and guiding evidence-based practice. Publication bias is one of the greatest threats to meta-analytic reviews. This paper assesses
A multivariate meta‐analysis approach for reducing the impact of outcome reporting bias in systematic reviews
Results show that the 'borrowing of strength' from a multivariate meta-analysis can reduce the impact of ORB on the pooled treatment effect estimates, and the use of the Pearson correlation is examined as a novel approach for dealing with missing within-study correlations.
Publication Bias in Recent Meta-Analyses
Introduction Positive results have a greater chance of being published and outcomes that are statistically significant have a greater chance of being fully reported. One consequence of research
A historical review of publication bias
An historical account of seminal contributions by the evidence synthesis community is offered, with an emphasis on the parallel development of graph-based and selection model approaches.
A Trim and Fill Examination of the Extent of Publication Bias in Communication Research
Publication bias can occur when a study is not published in an academic journal because the study did not find a statistically significant result. It can bias meta-analytic estimates upwards because
A comparison of methods to detect publication bias in meta‐analysis
Based on the empirical type I error rates, a regression of treatment effect on sample size, weighted by the inverse of the variance of the logit of the pooled proportion (using the marginal total) is the preferred method.
Adjusting for publication bias in the presence of heterogeneity
It is found that trim and fill may spuriously adjust for non-existent bias if (i) the variability among studies causes some precisely estimated studies to have effects far from the global mean or (ii) an inverse relationship between treatment efficacy and sample size is introduced by the studies' a priori power calculations.
Performance of the trim and fill method in the presence of publication bias and between‐study heterogeneity
Using the trim and fill method as a form of sensitivity analysis as intended by the authors of the method can help to reduce the bias in pooled estimates, even though the performance of this method is not ideal.
Assessment of regression-based methods to adjust for publication bias through a comprehensive simulation study
Regression-based adjustments for publication bias and other small study effects are easy to conduct and outperformed more established methods over a wide range of simulation scenarios.
The implications of publication bias for meta‐analysis' other parameter
  D. Jackson, Statistics in Medicine, 2006
Using step functions to model the bias, it can be demonstrated that no generalizations are possible concerning how to revise estimates of the between-study variance when faced with possible publication bias.
Inflation of type I error rate in two statistical tests for the detection of publication bias in meta‐analyses with binary outcomes
An examination of two statistical tests for detecting bias in meta-analyses with sparse binary data indicates an inflation of type I error rates for both tests when the data are sparse.
Trim and Fill: A Simple Funnel‐Plot–Based Method of Testing and Adjusting for Publication Bias in Meta‐Analysis
The trim-and-fill method comprises simple rank-based data augmentation techniques that formalize the use of funnel plots and provide effective, relatively powerful tests for the existence of publication bias.
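The rank-based idea can be sketched in a few lines. This is a heavily simplified illustration of the trim-and-fill logic using the L0 estimator with fixed-effect pooling, assuming missing studies lie on one side of the funnel and ignoring tie handling; it is not a substitute for the published algorithm:

```python
import numpy as np

def trim_and_fill(effects, ses, max_iter=50):
    """Simplified trim-and-fill sketch: estimate the number k0 of
    suppressed studies via the L0 estimator, then mirror the k0 most
    extreme observed effects about the pooled centre."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(ses, dtype=float) ** 2
    n = len(y)

    def pooled(yy, vv):
        w = 1.0 / vv
        return np.sum(w * yy) / np.sum(w)

    k0 = 0
    for _ in range(max_iter):
        # Trim the k0 largest effects, re-estimate the centre mu
        keep = np.argsort(y)[: n - k0]
        mu = pooled(y[keep], v[keep])
        # Signed ranks of deviations from mu (no tie correction)
        dev = y - mu
        ranks = np.argsort(np.argsort(np.abs(dev))) + 1
        t_n = ranks[dev > 0].sum()
        l0 = (4.0 * t_n - n * (n + 1)) / (2.0 * n - 1.0)
        k0_new = max(0, int(round(l0)))
        if k0_new == k0:
            break
        k0 = k0_new
    if k0 > 0:
        # Fill: mirror the k0 most extreme effects about mu
        extreme = np.argsort(y)[n - k0:]
        y_all = np.concatenate([y, 2 * mu - y[extreme]])
        v_all = np.concatenate([v, v[extreme]])
    else:
        y_all, v_all = y, v
    return k0, pooled(y_all, v_all)
```

On a symmetric funnel the estimator finds no missing studies, so the "adjusted" estimate equals the ordinary pooled estimate.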
The appropriateness of asymmetry tests for publication bias in meta-analyses: a large survey
Background: Statistical tests for funnel-plot asymmetry are common in meta-analyses. Inappropriate application can generate misleading inferences about publication bias. We aimed to measure, in a
A modified test for small‐study effects in meta‐analyses of controlled trials with binary endpoints
A modified linear regression test for funnel plot asymmetry is developed, based on the efficient score and its variance (Fisher's information); it has a false-positive rate close to the nominal level while maintaining power similar to that of the original linear regression ('Egger') test.
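For orientation, the original Egger-style regression that this entry modifies regresses the standardized effect on precision and tests whether the intercept differs from zero. A minimal sketch using ordinary least squares (the function name is illustrative; this is the classic test, not the modified one described above):

```python
import numpy as np

def egger_regression(effects, ses):
    """Egger-style funnel plot asymmetry regression.

    Regresses the standardized effect (effect / SE) on precision
    (1 / SE); a nonzero intercept indicates small-study asymmetry.
    Returns (intercept, slope, intercept_standard_error).
    """
    y = np.asarray(effects, dtype=float)
    se = np.asarray(ses, dtype=float)
    z = y / se                     # standardized effects
    prec = 1.0 / se                # precisions
    X = np.column_stack([np.ones_like(prec), prec])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / (n - k)            # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)       # OLS covariance matrix
    return beta[0], beta[1], float(np.sqrt(cov[0, 0]))
```

Dividing the intercept by its standard error gives the usual t statistic for the asymmetry test.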