Performance of statistical methods for meta-analysis when true study effects are non-normally distributed: A simulation study

@article{Kontopantelis2012PerformanceOS,
  title={Performance of statistical methods for meta-analysis when true study effects are non-normally distributed: A simulation study},
  author={Evangelos Kontopantelis and David Reeves},
  journal={Statistical Methods in Medical Research},
  year={2012},
  volume={21},
  pages={409--426}
}
Meta-analysis (MA) is a statistical methodology that combines the results of several independent studies considered by the analyst to be ‘combinable’. The simplest approach, the fixed-effects (FE) model, assumes the true effect to be the same in all studies, while the random-effects (RE) family of models allows the true effect to vary across studies. However, all methods are correct only asymptotically, and some RE models additionally assume that the true effects are normally distributed. In practice, MA… 
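As a minimal sketch of the FE/RE contrast the abstract describes, the following Python function pools hypothetical study effects under the random-effects model using the DerSimonian-Laird moment estimator for the between-study variance (the toy data are illustrative, not from the paper):

```python
import numpy as np

def dersimonian_laird(y, v):
    """Pool study effects y with within-study variances v under the
    random-effects model y_i = theta + u_i + e_i (DerSimonian-Laird)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                               # fixed-effect (inverse-variance) weights
    theta_fe = np.sum(w * y) / np.sum(w)      # fixed-effect pooled estimate
    q = np.sum(w * (y - theta_fe) ** 2)       # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # DL between-study variance (truncated at 0)
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    theta_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, se, tau2

# hypothetical log odds ratios and their variances
est, se, tau2 = dersimonian_laird([0.2, 0.5, -0.1, 0.4], [0.04, 0.09, 0.05, 0.02])
```

When the studies are homogeneous, Q falls below its degrees of freedom, τ² is truncated to zero, and the estimate collapses to the fixed-effects result.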

Performance of statistical methods for meta-analysis when true study effects are non-normally distributed: A comparison between DerSimonian–Laird and restricted maximum likelihood

An examination of the coverage of the iterative REML approach and the non-iterative DerSimonian-Laird approach, for normally distributed effects only, found that the two methods performed similarly.

Random-effects meta-analysis: the number of studies matters

The overall recommendation is to avoid the DerSimonian and Laird method when the number of meta-analysis studies is modest and prefer a more comprehensive procedure that compares alternative inferential approaches.

Interval estimation of the overall treatment effect in random‐effects meta‐analyses: Recommendations from a simulation study comparing frequentist, Bayesian, and bootstrap methods

The general recommendation of the Hartung-Knapp/Sidik-Jonkman (HKSJ) method is confirmed, and the Bayesian interval using a weakly informative prior for the heterogeneity may help.

Estimation of an overall standardized mean difference in random‐effects meta‐analysis if the distribution of random effects departs from normal

This study examines the performance of various random-effects methods for computing an average effect size estimate and a confidence interval around it when the normality assumption is not met, with Hartung's profile likelihood methods yielding the best performance under suboptimal conditions.

A comparison of heterogeneity variance estimators in simulated random‐effects meta‐analyses

The estimated summary effect of the meta-analysis and its confidence interval derived from the Hartung-Knapp-Sidik-Jonkman method are more robust to changes in the heterogeneity variance estimate and show minimal deviation from the nominal coverage of 95% under most of the simulated scenarios.

Methods to calculate uncertainty in the estimated overall effect size from a random‐effects meta‐analysis

This paper aims to provide a comprehensive overview of available methods for calculating point estimates, confidence intervals, and prediction intervals for the overall effect size under the random-effects model, and indicates whether some methods are preferable to others by considering the results of comparative simulation and real-life data studies.

Selecting the best meta-analytic estimator for evidence-based practice: a simulation study.

A simulation study was conducted to compare estimator performance; it demonstrates that the IVhet and quality effects estimators, though biased, have the lowest mean squared error.

Robust Models for Accommodating Outliers in Random Effects Meta Analysis: A Simulation Study and Empirical Study

The simulation showed that the performance of the alternative distributions is better than the normal distribution for a number of scenarios, particularly for extreme outliers and high heterogeneity.

Estimating the Heterogeneity Variance in a Random-Effects Meta-Analysis

In a meta-analysis, differences in the design and conduct of studies may cause variation in effects beyond what is expected from chance alone. This additional variation is commonly known as heterogeneity.

A comparison of 20 heterogeneity variance estimators in statistical synthesis of results from studies: a simulation study

Heterogeneity variance estimators are identified that perform better than the commonly suggested Paule-Mandel estimator, and maximum likelihood provides the best performance for both types of outcome in the absence of heterogeneity.
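The Paule-Mandel estimator mentioned above chooses τ² so that the generalized Q statistic equals its expected value k − 1. A minimal sketch (bisection root-finding; toy data, not from the paper):

```python
import numpy as np

def paule_mandel_tau2(y, v, tol=1e-10, max_iter=200):
    """Paule-Mandel tau^2: solve Q_gen(tau2) = k - 1, where
    Q_gen(tau2) = sum w_i (y_i - theta_hat)^2 with w_i = 1/(v_i + tau2)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)

    def q_gen(tau2):
        w = 1.0 / (v + tau2)
        theta = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - theta) ** 2)

    if q_gen(0.0) <= k - 1:          # no excess heterogeneity: truncate at zero
        return 0.0
    lo, hi = 0.0, 1.0
    while q_gen(hi) > k - 1:         # Q_gen is decreasing in tau2; expand until bracketed
        hi *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if q_gen(mid) > k - 1:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

For three studies with effects 0, 1, 2 and equal within-study variance 0.1, the equation 2/(0.1 + τ²) = 2 gives τ² = 0.9, which the bisection recovers.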
...

References

Showing 1-10 of 35 references

A comparison of statistical methods for meta-analysis.

Three methods currently used for estimation within the framework of a random-effects model are considered, and it is shown that the commonly used DerSimonian and Laird method does not adequately reflect the error associated with parameter estimation, especially when the number of studies is small.

Valid Inference in Random Effects Meta‐Analysis

Permutation and ad hoc methods for testing within the random-effects model are proposed, which theoretically control the type I error rate for typical meta-analysis scenarios.

Incorporating variability in estimates of heterogeneity in the random effects model in meta-analysis.

A simple form for the variance of Cochran's homogeneity statistic Q is developed, leading to interval estimation of τ² using an approximating distribution for Q; this enables the point estimation of DerSimonian and Laird to be extended.

A likelihood approach to meta-analysis with random effects.

It is concluded that likelihood-based methods are preferred to the standard method in undertaking random-effects meta-analysis when the value of the between-study variance σB² has an important effect on the overall estimated treatment effect.

Detecting and describing heterogeneity in meta-analysis.

It is concluded that the test of heterogeneity should not be the sole determinant of model choice in meta-analysis, and inspection of relevant normal plots, as well as clinical insight, may be more relevant to both the investigation and modelling of heterogeneity.

Evaluation of old and new tests of heterogeneity in epidemiologic meta-analysis.

The results show that the asymptotic DerSimonian and Laird Q statistic and the bootstrap versions of the other tests give the correct type I error under the null hypothesis but that all of the tests considered have low statistical power, especially when the number of studies included in the meta-analysis is small.

Random-effects meta-analyses are not always conservative.

The authors give an example from a meta-analysis of water chlorination and cancer in which the random-effects summaries are less conservative in both of these alternative senses and possibly more biased than the fixed-effects summaries.

A simulation study comparing properties of heterogeneity measures in meta‐analyses

The heterogeneity test and heterogeneity measures both quantify the impact of heterogeneity on the meta-analysis result, as both depend on the variance of the individual study effects and thus on the number of patients in the studies.

A simple confidence interval for meta‐analysis

This paper discusses an alternative, simple approach for constructing the confidence interval based on the t-distribution, which has improved coverage probability and is easy to calculate; unlike some methods suggested in the statistical literature, no iterative computation is required.
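A t-based interval of this kind (in the spirit of the Hartung-Knapp/Sidik-Jonkman approach recommended elsewhere on this page) can be sketched as follows; scipy is assumed for the t quantile, and the data are hypothetical:

```python
import numpy as np
from scipy import stats

def hartung_knapp_ci(y, v, tau2, level=0.95):
    """t-based confidence interval for the pooled random-effects estimate,
    using the Hartung-Knapp variance with a given tau^2."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / (v + tau2)                      # random-effects weights
    theta = np.sum(w * y) / np.sum(w)         # pooled estimate
    k = len(y)
    # HK variance: weighted squared deviations, scaled by (k - 1) * sum(w)
    var_hk = np.sum(w * (y - theta) ** 2) / ((k - 1) * np.sum(w))
    t_crit = stats.t.ppf(0.5 + level / 2.0, k - 1)
    half = t_crit * np.sqrt(var_hk)
    return theta - half, theta + half
```

No iteration is needed: given τ², the interval is a closed-form expression around the weighted mean with k − 1 degrees of freedom.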

Quantifying heterogeneity in a meta‐analysis

It is concluded that H and I², which can usually be calculated for published meta-analyses, are particularly useful summaries of the impact of heterogeneity, and one or both should be presented in published meta-analyses in preference to the test for heterogeneity.
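Both summaries derive from Cochran's Q: H² = Q/(k − 1) and I² = (Q − (k − 1))/Q, truncated at zero. A minimal sketch with toy data:

```python
import numpy as np

def heterogeneity_measures(y, v):
    """Cochran's Q and the derived H and I^2 heterogeneity summaries."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                # inverse-variance weights
    theta_fe = np.sum(w * y) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (y - theta_fe) ** 2)        # Cochran's Q
    df = len(y) - 1
    h = np.sqrt(q / df)                        # H = sqrt(Q / (k - 1))
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0  # I^2 as a proportion in [0, 1)
    return q, h, i2
```

For two studies with effects 0 and 1 and equal variance 0.1, Q = 5 on 1 degree of freedom, giving I² = 0.8, i.e. 80% of the total variation attributed to heterogeneity.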