Methods to calculate uncertainty in the estimated overall effect size from a random‐effects meta‐analysis

@article{Veroniki2018MethodsTC,
  title={Methods to calculate uncertainty in the estimated overall effect size from a random‐effects meta‐analysis},
  author={Areti Angeliki Veroniki and Dan Jackson and Ralf Bender and Oliver Kuss and Dean Langan and Julian P. T. Higgins and Guido Knapp and Georgia Salanti},
  journal={Research Synthesis Methods},
  year={2018},
  volume={10},
  pages={23--43}
}
Meta‐analyses are an important tool within systematic reviews to estimate the overall effect size and its confidence interval for an outcome of interest. If heterogeneity between the results of the relevant studies is anticipated, then a random‐effects model is often preferred for analysis. In this model, a prediction interval for the true effect in a new study also provides additional useful information. However, the DerSimonian and Laird method—frequently used as the default method for meta… 
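To make the setup concrete, here is a minimal, stdlib-only Python sketch of the approach the abstract describes: an inverse-variance fixed-effect average, the DerSimonian-Laird (DL) moment estimator of the between-study variance, and the standard Wald 95% confidence interval for the random-effects summary. The study estimates and variances are invented illustrative numbers, not data from the paper.

```python
from statistics import NormalDist

def dersimonian_laird(y, v):
    """Random-effects meta-analysis with the DerSimonian-Laird (DL)
    moment estimator of the between-study variance tau^2."""
    k = len(y)
    w = [1 / vi for vi in v]                                  # inverse-variance weights
    mu_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)     # fixed-effect average
    q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                        # DL estimate, truncated at 0
    w_re = [1 / (vi + tau2) for vi in v]                      # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = (1 / sum(w_re)) ** 0.5
    z = NormalDist().inv_cdf(0.975)                           # standard Wald 95% interval
    return mu, tau2, (mu - z * se, mu + z * se)

# Invented example: four studies with heterogeneous effects
mu, tau2, ci = dersimonian_laird([0.5, 0.1, 0.8, -0.1], [0.04, 0.02, 0.05, 0.03])
```

The Wald interval above is exactly the construction the cited papers criticise when the number of studies is small, since it ignores the uncertainty in the tau^2 estimate.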

A confidence interval robust to publication bias for random‐effects meta‐analysis of few studies

A variation of the Henmi and Copas method is proposed that employs an improved estimator of the between-study heterogeneity, in particular when dealing with only a few studies, and the method is found to outperform the others in terms of coverage probabilities.

Interval estimation of the overall treatment effect in random‐effects meta‐analyses: Recommendations from a simulation study comparing frequentist, Bayesian, and bootstrap methods

The general recommendation of the Hartung-Knapp/Sidik-Jonkman (HKSJ) method is confirmed, and a Bayesian interval using a weakly informative prior for the heterogeneity may also be helpful.
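For comparison with the Wald-type interval, the HKSJ interval can be sketched in a few lines: it replaces the usual variance 1/sum(w) with the weighted spread of the residuals and uses a t quantile with k - 1 degrees of freedom. The data, the tau^2 value, and the hard-coded t quantile (t_{3, 0.975} ~= 3.182, to keep the sketch dependency-free) are illustrative assumptions, not values from the study.

```python
def hksj_interval(y, v, tau2, t_crit):
    """Hartung-Knapp/Sidik-Jonkman 95% interval for the summary effect,
    given a between-study variance estimate tau2 and t_{k-1, 0.975}."""
    k = len(y)
    w = [1 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # HKSJ variance: weighted residual spread replaces the usual 1/sum(w)
    var_hksj = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y)) / ((k - 1) * sum(w))
    half = t_crit * var_hksj ** 0.5
    return mu - half, mu + half

# Invented data; tau2 is a stand-in heterogeneity estimate and
# t_crit = t_{3, 0.975} ~= 3.182 is hard-coded for k = 4 studies.
lo, hi = hksj_interval([0.5, 0.1, 0.8, -0.1], [0.04, 0.02, 0.05, 0.03],
                       tau2=0.106, t_crit=3.182)
```

With few studies the resulting interval is typically wider than the Wald interval, which is the mechanism behind its better coverage in the simulation studies cited here.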

Selecting the best meta-analytic estimator for evidence-based practice: a simulation study.

A simulation study was conducted to compare estimator performance; it demonstrates that the IVhet and quality effects estimators, though biased, have the lowest mean squared error.

Frequentist performances of Bayesian prediction intervals for random‐effects meta‐analysis

It was found that frequentist coverage performance strongly depended on the prior distributions adopted, and that when the number of studies was smaller than 10, no prior distribution retained accurate frequentist coverage properties.

Permutation inference methods for multivariate meta‐analysis

This article provides permutation‐based inference methods that enable exact joint inferences for average outcome measures without large sample approximations and proposes effective approaches for permutation inferences using optimal weighting based on the efficient score statistic.

pimeta: an R package of prediction intervals for random-effects meta-analysis

The pimeta package is an R package that provides improved methods for calculating accurate prediction intervals in random-effects meta-analysis, together with graphical tools to illustrate the results.

Performance of several types of beta-binomial models in comparison to standard approaches for meta-analyses with very few studies

Background: Meta-analyses are used to summarise the results of several studies on a specific research question. Standard methods for meta-analyses, namely inverse variance random effects models, have…

Likelihood-based random-effects meta-analysis with few studies: empirical and simulation studies

In the presence of between-study heterogeneity, especially with unbalanced study sizes, caution is needed in applying meta-analytical methods to few studies, as either coverage probabilities might be compromised, or intervals are inconclusively wide.

A bivariate likelihood approach for estimation of a pooled continuous effect size from a heteroscedastic meta-analysis study

The DerSimonian-Laird (DL) weighted average method has been widely used for estimation of a pooled effect size from an aggregated data meta-analysis study. It is mainly criticized for its…

Meta-analysis Using Flexible Random-effects Distribution Models

This work proposes new random-effects meta-analysis methods using five flexible random-effects distribution models that can regulate skewness, kurtosis, and tail weight, and provides two examples of real-world evidence clearly showing that the normal distribution assumption is unsuitable.
...

References

Showing 1–10 of 241 references

Random effects meta‐analysis: Coverage performance of 95% confidence and prediction intervals following REML estimation

Researchers should be cautious in deriving 95% prediction intervals following a frequentist random‐effects meta‐analysis until a more reliable solution is identified, especially when there are few studies.

A comparison of statistical methods for meta‐analysis

Three methods currently used for estimation within the framework of a random-effects model are considered, and it is shown that the commonly used DerSimonian and Laird method does not adequately reflect the error associated with parameter estimation, especially when the number of studies is small.

A comparison of heterogeneity variance estimators in simulated random‐effects meta‐analyses

The estimated summary effect of the meta-analysis and its confidence interval derived from the Hartung-Knapp-Sidik-Jonkman method are more robust to changes in the heterogeneity variance estimate and show minimal deviation from the nominal coverage of 95% under most of the simulated scenarios.

Confidence intervals for the overall effect size in random-effects meta-analysis.

The performances of three alternatives to the standard CI procedure are examined under a random-effects model and eight different τ² estimators for the weights: the t-distribution CI, the weighted variance CI (with an improved variance), and the recently proposed quantile approximation method.

Methods for evidence synthesis in the case of very few studies

The aim is to summarize possible methods for performing meaningful evidence syntheses when only very few studies are available; based on the existing literature on methods for meta-analysis with very few studies and the consensus of the authors, the Knapp-Hartung method is recommended.

Comparative performance of heterogeneity variance estimators in meta‐analysis: a review of simulation studies

The Paule-Mandel method was recommended by three studies: it is simple to implement, is less biased than DerSimonian and Laird and performs well in meta-analyses with dichotomous and continuous outcomes.
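The Paule-Mandel estimator mentioned above is indeed simple to implement, because it only requires solving a one-dimensional equation: choose tau^2 so that the generalised Q statistic equals its expectation k - 1. A hedged stdlib-only sketch using bisection (the example data are invented):

```python
def paule_mandel(y, v, tol=1e-10):
    """Paule-Mandel tau^2: solve Q_gen(tau2) = k - 1 by bisection,
    where Q_gen uses weights 1/(v_i + tau2) and the corresponding mean."""
    k = len(y)
    def q_gen(tau2):
        w = [1 / (vi + tau2) for vi in v]
        mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        return sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))
    if q_gen(0.0) <= k - 1:          # no excess heterogeneity: truncate at 0
        return 0.0
    lo, hi = 0.0, 1.0
    while q_gen(hi) > k - 1:         # Q_gen is decreasing in tau2; bracket the root
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if q_gen(mid) > k - 1 else (lo, mid)
    return (lo + hi) / 2

tau2_pm = paule_mandel([0.5, 0.1, 0.8, -0.1], [0.04, 0.02, 0.05, 0.03])
```

Bisection is used here purely for transparency; iterative reweighting schemes reach the same root.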

Random-Effects Meta-analysis of Inconsistent Effects: A Time for Change

The decision to calculate a summary estimate in a meta-analysis should be based on clinical judgment, the number of studies, and the degree of variation among studies, as well as on a random-effects model that incorporates study-to-study variability beyond what would be expected by chance.

Confidence intervals for the amount of heterogeneity in meta‐analysis

A novel method for constructing confidence intervals for the amount of heterogeneity in the effect sizes is proposed that guarantees nominal coverage probabilities even in small samples when model assumptions are satisfied and yields the most accurate coverage probabilities under conditions more analogous to practice.

Confidence intervals for random effects meta‐analysis and robustness to publication bias

A new confidence interval is proposed that has better coverage than the DerSimonian-Laird method and is less sensitive to publication bias; it is centred on a fixed-effects estimate but allows for heterogeneity by including an assessment of the extra uncertainty induced by the random-effects setting.

Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

The aim is to identify known methods for estimating the between-study variance and its corresponding uncertainty and to summarise the simulation and empirical evidence comparing them; the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' are recommended.
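The Q-profile method recommended above inverts the same generalised Q statistic used by Paule-Mandel to obtain a confidence interval for tau^2: the bounds are the tau^2 values at which Q equals the 0.025 and 0.975 chi-square quantiles with k - 1 degrees of freedom. A rough sketch, with the chi-square(3) quantiles hard-coded for k = 4 studies to avoid a SciPy dependency, and invented example data:

```python
def q_profile_ci(y, v, chi2_lo=0.2158, chi2_hi=9.3484):
    """Q-profile 95% CI for tau^2. The chi-square(3) quantiles
    (0.025 and 0.975 levels) are hard-coded for k = 4 studies."""
    def q_gen(tau2):
        w = [1 / (vi + tau2) for vi in v]
        mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        return sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))
    def solve(target):                       # tau2 with Q_gen(tau2) == target
        if q_gen(0.0) <= target:
            return 0.0                       # bound truncated at zero
        lo, hi = 0.0, 1.0
        while q_gen(hi) > target:            # Q_gen decreases in tau2
            hi *= 2
        for _ in range(200):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if q_gen(mid) > target else (lo, mid)
        return (lo + hi) / 2
    # Q_gen is decreasing, so the upper chi-square quantile gives the lower bound
    return solve(chi2_hi), solve(chi2_lo)

tau2_lo, tau2_hi = q_profile_ci([0.5, 0.1, 0.8, -0.1], [0.04, 0.02, 0.05, 0.03])
```

Because the bounds come from profiling Q rather than a normal approximation, the interval keeps its nominal coverage even with a small number of studies, which is the property the review highlights.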
...