Bayesian model‐averaged meta‐analysis in medicine

František Bartoš, Quentin F. Gronau, Bram Timmers, Willem M. Otte, Alexander Ly, and Eric-Jan Wagenmakers. Statistics in Medicine, pp. 6743–6761.
We outline a Bayesian model-averaged (BMA) meta-analysis for standardized mean differences in order to quantify evidence for both treatment effectiveness δ and across-study heterogeneity τ. We construct four competing models by orthogonally combining two present-absent assumptions, one for the treatment effect and one for across-study heterogeneity. To inform the choice of prior distributions for the model parameters, we used 50% of the Cochrane Database of Systematic Reviews to specify rival…
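The four-model structure described in that abstract can be sketched numerically: each model's posterior probability is proportional to its prior probability times its marginal likelihood, and the model-averaged effect weights each model's estimate by that posterior probability. The marginal likelihoods and per-model estimates below are invented illustration values, not output of the paper's analysis.

```python
# Sketch: Bayesian model averaging over four meta-analytic models, obtained
# by crossing effect (delta) present/absent with heterogeneity (tau)
# present/absent. All numbers below are hypothetical.

models = ["H0: no effect, no tau", "H1: effect, no tau",
          "H2: no effect, tau", "H3: effect, tau"]
marginal_lik = [0.8, 2.4, 1.1, 3.7]      # p(data | model), made up
prior_prob   = [0.25, 0.25, 0.25, 0.25]  # equal prior model probabilities
delta_mean   = [0.0, 0.31, 0.0, 0.28]    # posterior mean of delta per model

# Posterior model probabilities: prior times marginal likelihood, normalized.
unnorm = [p * m for p, m in zip(prior_prob, marginal_lik)]
post = [u / sum(unnorm) for u in unnorm]

# Model-averaged effect: each model's estimate weighted by its posterior
# probability (the "no effect" models contribute delta = 0).
delta_bma = sum(p * d for p, d in zip(post, delta_mean))

for name, p in zip(models, post):
    print(f"{name}: posterior prob = {p:.3f}")
print(f"model-averaged delta = {delta_bma:.4f}")
```

With equal model priors the posterior probabilities reduce to normalized marginal likelihoods, so the averaging never forces an all-or-none choice between, say, fixed-effect and random-effects structures.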

Robust Bayesian meta‐analysis: Model‐averaging across complementary publication bias adjustment methods

Robust Bayesian meta-analysis is extended by model-averaging across two prominent approaches to adjusting for publication bias: selection models for p-values and models adjusting for small-study effects.

Informed Bayesian survival analysis

Background We provide an overview of Bayesian estimation, hypothesis testing, and model-averaging and illustrate how they benefit parametric survival analysis. We contrast the Bayesian framework to…

Adjusting for Publication Bias in JASP and R: Selection Models, PET-PEESE, and Robust Bayesian Meta-Analysis

This tutorial demonstrates how to conduct a publication-bias-adjusted meta-analysis in JASP and R and introduces robust Bayesian meta-analysis, a Bayesian approach that simultaneously considers both PET-PEESE and selection models.
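As a rough illustration of the PET-PEESE component of that approach (not the JASP or RoBMA implementation): PET regresses effect sizes on their standard errors, PEESE regresses them on the sampling variances, and a conditional rule decides which bias-adjusted intercept to report. The simulated data, the `wls` helper, and the critical value below are all illustrative assumptions.

```python
import numpy as np

# Sketch of conditional PET-PEESE on fabricated data (true effect 0.2, no
# publication bias injected here, so this only demonstrates the mechanics).

rng = np.random.default_rng(1)
k = 20
v = rng.uniform(0.01, 0.2, k)     # sampling variances of k studies
se = np.sqrt(v)
y = 0.2 + rng.normal(0, se)       # observed effect sizes

def wls(y, x, w):
    """Weighted least squares of y on [1, x]; returns (intercept, slope, se_intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    XtWX = X.T @ W @ X
    beta = np.linalg.solve(XtWX, X.T @ W @ y)
    resid = y - X @ beta
    s2 = (resid @ W @ resid) / (len(y) - 2)
    cov = s2 * np.linalg.inv(XtWX)
    return beta[0], beta[1], np.sqrt(cov[0, 0])

w = 1.0 / v
pet_b0, _, pet_se = wls(y, se, w)   # PET: regress on standard errors
peese_b0, _, _ = wls(y, v, w)       # PEESE: regress on variances

# Conditional rule: if the PET test rejects delta = 0, report the PEESE
# intercept; 1.33 is roughly the one-sided 10% t cutoff for df = 18.
estimate = peese_b0 if pet_b0 / pet_se > 1.33 else pet_b0
print(f"PET intercept = {pet_b0:.3f}, PEESE intercept = {peese_b0:.3f}")
print(f"bias-adjusted estimate = {estimate:.3f}")
```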

Estimating the false discovery risk of (randomized) clinical trials in medical journals based on published p-values

A new way to estimate the false positive risk is proposed and the method is applied to the results of (randomized) clinical trials in top medical journals, providing a solid empirical foundation for evaluations of the trustworthiness of medical research.
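The basic arithmetic behind any such false-positive-risk estimate can be sketched in a few lines; the assumed significance level, average power, and fraction of true nulls below are hypothetical inputs, not values estimated in the paper.

```python
# Sketch: among significant results, what fraction are false positives?
# All three inputs are hypothetical illustration values.

alpha = 0.05      # significance threshold
power = 0.80      # average power against true effects
pi_null = 0.50    # assumed fraction of tested hypotheses that are null

false_pos = alpha * pi_null         # rate of significant nulls
true_pos = power * (1 - pi_null)    # rate of significant true effects
fdr = false_pos / (false_pos + true_pos)
print(f"false discovery risk = {fdr:.3f}")
```

Even with half the hypotheses true and decent power, a nontrivial share of significant findings are false positives; lowering power or raising the null fraction pushes the risk up quickly.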

Prior knowledge elicitation: The past, present, and future

This work analyzes the state of the art of prior elicitation by identifying a range of key aspects of prior knowledge elicitation, from properties of the modelling task and the nature of the priors to the form of interaction with the expert.

Bayes factors and posterior estimation: Two sides of the very same coin

Recently, several researchers have claimed that conclusions obtained from a Bayes factor may contradict those obtained from Bayesian estimation. In this short paper, we wish to point out that no such contradiction exists.

Association of Funisitis with Short-Term Outcomes of Prematurity: A Frequentist and Bayesian Meta-Analysis

The data suggest that the presence of funisitis does not add an additional risk to preterm birth when compared to chorioamnionitis in the absence of fetal inflammatory response.

Big little lies: a compendium and simulation of p-hacking strategies

This work compiles a list of 12 p-hacking strategies based on an extensive literature review, identifies factors that control their level of severity, and demonstrates their impact on false-positive rates using simulation studies.
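One strategy from such a list, measuring several outcome variables and reporting whichever reaches significance, can be simulated directly. This numpy-only sketch draws all data under the null, so every "significant" result is a false positive; the sample sizes, number of outcomes, and fixed critical value are simulation choices, not the paper's design.

```python
import numpy as np

# Simulate p-hacking via multiple outcome variables: test 3 independent
# outcomes per "study" and call the study significant if any test rejects.

rng = np.random.default_rng(0)
n_sims, n, n_outcomes = 2000, 30, 3
t_crit = 2.0017   # two-sided .05 critical value for t with df = 58

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance (equal group sizes)."""
    sp2 = (a.var(ddof=1) + b.var(ddof=1)) / 2
    return (a.mean() - b.mean()) / np.sqrt(2 * sp2 / len(a))

hits = 0
for _ in range(n_sims):
    significant = False
    for _ in range(n_outcomes):
        a = rng.normal(size=n)   # group A, null is true
        b = rng.normal(size=n)   # group B, null is true
        if abs(pooled_t(a, b)) > t_crit:
            significant = True   # the "hacked" decision rule
    hits += significant
rate = hits / n_sims
print(f"false-positive rate with {n_outcomes} outcomes: {rate:.3f} (nominal 0.05)")
```

With three independent outcomes the expected rate is about 1 − 0.95³ ≈ 0.14, nearly triple the nominal level.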

Designing translational animal experiments by Bayesian MAP approaches

Implementing informative priors for heterogeneity in meta‐analysis using meta‐regression and pseudo data

This work presents a method for performing Bayesian meta-analysis using data augmentation, in which an informative conjugate prior for the between-study variance is represented by pseudo data and meta-regression is used for estimation; predictive inverse-gamma distributions are derived for the between-study variance expected in future meta-analyses.

An informed reference prior for between‐study heterogeneity in meta‐analyses of binary outcomes

The distribution of between-study variance in published meta-analyses is described, and some realistic, informed priors are proposed for use in meta-analyses of binary outcomes to improve the calibration of inferences from Bayesian meta-analysis.

A re-evaluation of random-effects meta-analysis

It is suggested that random-effects meta-analyses as currently conducted often fail to provide the key results, and the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods is investigated.

Avoiding zero between‐study variance estimates in random‐effects meta‐analysis

Bayes modal estimation performs well: it avoids boundary estimates, has smaller root mean squared error for the between-study standard deviation, and gives better coverage for the overall effect than the other methods when the true model has at least a small or moderate amount of unexplained heterogeneity.

A comparison of statistical methods for meta‐analysis

Three methods currently used for estimation within the framework of a random-effects model are considered, and it is shown that the commonly used DerSimonian and Laird method does not adequately reflect the error associated with parameter estimation, especially when the number of studies is small.
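For reference, the DerSimonian and Laird moment estimator discussed in that comparison can be sketched in a few lines; the example effect sizes and variances below are made up. Note the truncation at zero, which is the boundary-estimate behavior the "Avoiding zero between-study variance" entry above seeks to avoid.

```python
import numpy as np

# Minimal sketch of the DerSimonian-Laird moment estimator of the
# between-study variance tau^2 in a random-effects meta-analysis.

def dersimonian_laird(y, v):
    """y: study effect sizes; v: within-study sampling variances."""
    w = 1.0 / v                               # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)         # fixed-effect pooled mean
    Q = np.sum(w * (y - mu_fe) ** 2)          # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / c)        # truncated at zero

# Hypothetical five-study example.
y = np.array([0.10, 0.35, -0.05, 0.42, 0.20])
v = np.array([0.02, 0.05, 0.03, 0.04, 0.02])
tau2 = dersimonian_laird(y, v)
print(f"tau^2 (DL) = {tau2:.4f}")
```

Because the estimator is truncated at zero whenever Q falls below its degrees of freedom, small meta-analyses frequently report exactly zero heterogeneity even when some is present.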

Bayesian methods in meta-analysis and evidence synthesis

The Bayesian methods discussed are illustrated by means of a meta-analysis examining the evidence relating to electronic fetal heart rate monitoring and perinatal mortality in which evidence is available from a variety of sources.

Predicting the extent of heterogeneity in meta-analysis, using empirical data from the Cochrane Database of Systematic Reviews

Meta-analysis characteristics were strongly associated with the degree of between-study heterogeneity, and predictive distributions for heterogeneity differed substantially across settings, which will be very beneficial in future meta-analyses including few studies.

A Primer on Bayesian Model-Averaged Meta-Analysis

Bayesian model-averaged meta-analysis therefore avoids the need to select either a fixed-effect or random-effects model and instead takes into account model uncertainty in a principled manner.

Robust Bayesian meta-analysis: Addressing publication bias with model-averaging.

It is demonstrated that RoBMA finds evidence for the absence of publication bias in Registered Replication Reports, reliably avoids false positives, and is relatively robust to model misspecification; simulations show that it outperforms existing methods.