Robust Expected Information Gain for Optimal Bayesian Experimental Design Using Ambiguity Sets

@article{Go2022RobustEI,
  title={Robust Expected Information Gain for Optimal Bayesian Experimental Design Using Ambiguity Sets},
  author={Jinwook Go and Tobin Isaac},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.09914}
}
The ranking of experiments by expected information gain (EIG) in Bayesian experimental design is sensitive to changes in the model's prior distribution, and the EIG approximation obtained by sampling has errors comparable to the use of a perturbed prior. We define and analyze robust expected information gain (REIG), a modification of the objective in EIG maximization obtained by minimizing an affine relaxation of EIG over an ambiguity set of distributions that are close to the original prior in KL…
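The abstract describes REIG as a worst case over a KL ambiguity set around the prior; since the abstract is truncated here, the following is only a minimal sketch of that kind of objective, with the radius \varepsilon, the notation, and the relaxed estimator \widehat{\mathrm{EIG}} taken as assumptions rather than the paper's exact definitions.

% Expected information gain of design d under prior p: the expected KL divergence
% from prior to posterior, averaged over the marginal data distribution p(y | d).
\mathrm{EIG}_p(d) = \mathbb{E}_{p(y \mid d)}\left[ D_{\mathrm{KL}}\big( p(\theta \mid y, d) \,\|\, p(\theta) \big) \right]

% Sketch of a robust variant: take the worst case over priors \tilde{p} within a
% KL ball of assumed radius \varepsilon around the nominal prior p, applied to an
% affine relaxation \widehat{\mathrm{EIG}} of EIG, as the abstract indicates.
\mathrm{REIG}_{\varepsilon}(d) = \min_{\tilde{p} \,:\, D_{\mathrm{KL}}(\tilde{p} \,\|\, p) \le \varepsilon} \widehat{\mathrm{EIG}}_{\tilde{p}}(d),
\qquad d^{\star} \in \arg\max_{d} \mathrm{REIG}_{\varepsilon}(d)

Under this reading, the design is still chosen by maximization, but each candidate design is scored by its worst-case information gain over priors near the nominal one rather than by EIG under the nominal prior alone.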

References

Showing 1-10 of 20 references
Bayesian Experimental Design for Implicit Models by Mutual Information Neural Estimation
TLDR: Shows that training a neural network to maximise a lower bound on mutual information (MI) allows the optimal design and the posterior to be determined jointly, gracefully extending Bayesian experimental design for implicit models to higher design dimensions.
Bayesian Distributionally Robust Optimization
TLDR: Shows the strong exponential consistency of the Bayesian posterior distribution and, subsequently, the convergence of the objective functions and optimal solutions of Bayesian-DRO.
Robust Bayesian Experimental Designs in Normal Linear Models
We address the problem of finding a design that minimizes the Bayes risk with respect to a fixed prior, subject to being robust to misspecification of the prior. Uncertainty in the prior…
Variational Bayesian Optimal Experimental Design
TLDR: This work introduces several classes of fast EIG estimators by building on ideas from amortized variational inference, and shows theoretically and empirically that these estimators can provide significant gains in speed and accuracy over previous approaches.
A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments
We introduce a fully stochastic gradient-based approach to Bayesian optimal experimental design (BOED). Our approach utilizes variational lower bounds on the expected information gain (EIG) of an…
Accelerated Bayesian Experimental Design for Chemical Kinetic Models
TLDR: Proposes a general Bayesian framework for optimal experimental design with nonlinear simulation-based models, introducing polynomial chaos expansions to capture the dependence of observables on model parameters and on design conditions.
Data-Driven Stochastic Programming Using Phi-Divergences
TLDR: This tutorial presents two-stage models with distributional uncertainty using phi-divergences, ties them to risk-averse optimization, and examines the value of collecting additional data.
A Review of Modern Computational Algorithms for Bayesian Optimal Design
TLDR: Provides a general overview of the concepts involved in Bayesian experimental design, describing some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms used to search the design space for the Bayesian optimal design.
Estimating Expected Information Gains for Experimental Designs With Application to the Random Fatigue-Limit Model
TLDR: Discusses properties of estimators of expected information gain based on Markov chain Monte Carlo (MCMC) and Laplace approximations, and investigates issues that arise when applying these methods to experimental design in the (technically nontrivial) random fatigue-limit model of Pascual and Meeker.
On the Brittleness of Bayesian Inference
TLDR: Reports that, although Bayesian methods are robust when the number of possible outcomes is finite or when only a finite number of marginals of the data-generating distribution are unknown, they can be generically brittle when applied to continuous systems with finite information.
…