Markov Chain Generative Adversarial Neural Networks for Solving Bayesian Inverse Problems in Physics Applications

Nikolaj Takata Mücke, Benjamin Sanderse, Sander M. Bohté, Cornelis W. Oosterlee
In the context of solving inverse problems for physics applications within a Bayesian framework, we present a new approach, Markov Chain Generative Adversarial Neural Networks (MCGANs), to alleviate the computational costs associated with solving the Bayesian inference problem. GANs provide a very suitable framework for aiding the solution of Bayesian inference problems, as they are designed to generate samples from complicated high-dimensional distributions. By training a GAN to sample from a low…
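The truncated abstract describes running MCMC in a GAN's low-dimensional latent space. A minimal, hypothetical sketch of that idea, using a toy linear forward model and a fixed stand-in for the trained generator (the paper's actual network and physics model are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained GAN generator G: latent space -> parameter space.
# (Hypothetical: in MCGAN this would be a trained neural network.)
def generator(z):
    return np.tanh(z @ np.array([[1.0, 0.5], [-0.5, 1.0]]))

# Toy linear forward model and a noisy synthetic observation.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
x_true = np.array([0.3, -0.4])
sigma = 0.05
y_obs = A @ x_true + sigma * rng.standard_normal(2)

def log_post(z):
    # Standard-normal latent prior plus Gaussian likelihood on G(z).
    r = y_obs - A @ generator(z)
    return -0.5 * z @ z - 0.5 * (r @ r) / sigma**2

# Random-walk Metropolis in the low-dimensional latent space.
z = np.zeros(2)
lp = log_post(z)
samples = []
for _ in range(5000):
    z_prop = z + 0.2 * rng.standard_normal(2)
    lp_prop = log_post(z_prop)
    if np.log(rng.random()) < lp_prop - lp:
        z, lp = z_prop, lp_prop
    samples.append(generator(z))
samples = np.array(samples)
x_mean = samples[1000:].mean(axis=0)  # posterior mean estimate after burn-in
```

The point of the construction is that the chain explores the two-dimensional latent space rather than the (in practice much higher-dimensional) parameter space, with the generator mapping each latent sample to a physically structured parameter.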
2 Citations

Generative models and Bayesian inversion using Laplace approximation

This work explores an alternative approach to Bayesian inference based on probabilistic generative models, carried out in the original high-dimensional space, and shows that the derived Bayes estimates are consistent, in contrast to the approach employing the low-dimensional manifold of the generative model.

Bayesian Inference in Physics-Driven Problems with Adversarial Priors

The use of GANs as priors in physics-driven Bayesian inference problems is considered, and the weak convergence of the approximate prior to the true prior is analyzed.

References

Solving Bayesian Inverse Problems via Variational Autoencoders

This work introduces UQ-VAE: a flexible, adaptive, hybrid data/model-informed framework for training neural networks capable of rapid modelling of the posterior distribution representing the unknown parameter of interest. The framework includes an adjustable hyperparameter that selects the notion of distance between the posterior model and the target distribution.

Composing Normalizing Flows for Inverse Problems

This work proposes a framework for approximate inference that estimates the target conditional as a composition of two flow models, which leads to a stable variational inference training procedure that avoids adversarial training.

Bayesian multiscale deep generative model for the solution of high-dimensional inverse problems

Auto-Encoding Variational Bayes

This work introduces a stochastic variational inference and learning algorithm that scales to large datasets and, under mild differentiability conditions, works even in the intractable case.
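The algorithm summarized here (the variational autoencoder) rests on the reparameterization trick: sampling is rewritten as a deterministic function of the variational parameters plus independent noise, so gradients can pass through the sampling step. A minimal sketch for a univariate Gaussian variational posterior (a toy assumption, not the paper's full model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Variational parameters of q(z) = N(mu, sigma^2).
mu, sigma = 1.5, 0.5

# Reparameterize: z = mu + sigma * eps with eps ~ N(0, 1),
# so z is differentiable in mu and sigma.
eps = rng.standard_normal(100_000)
z = mu + sigma * eps

# The empirical moments of z match the target N(mu, sigma^2).
```

In a full VAE, `mu` and `sigma` would be outputs of the encoder network, and the same trick makes the Monte Carlo estimate of the evidence lower bound differentiable end to end.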

An introduction to deep generative modeling

This work introduces deep generative models (DGMs) and provides a concise mathematical framework for the three most popular approaches: normalizing flows, variational autoencoders, and generative adversarial networks, illustrating the advantages and disadvantages of these basic approaches with numerical experiments.

Variational Inference with Normalizing Flows

It is demonstrated that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provide a clear improvement in the performance and applicability of variational inference.
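The mechanism behind normalizing flows is the change-of-variables formula: pushing a base density through an invertible map and correcting with the log-determinant of the Jacobian. A sketch with a single affine map (a toy stand-in, not the paper's flow architecture):

```python
import numpy as np

# Affine flow z -> a*z + b; for a scalar map the log-Jacobian is log|a|.
a, b = 2.0, 1.0

def flow(z):
    return a * z + b

def log_q(x):
    # Change of variables: log q(x) = log N(z; 0, 1) - log|a|, z = (x - b)/a.
    z = (x - b) / a
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi) - np.log(abs(a))
```

Here the pushed-forward density is exactly N(b, a^2); real flows compose many such invertible layers, accumulating the log-Jacobian terms, to represent far richer posteriors.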

On the local Lipschitz stability of Bayesian inverse problems

In this note we consider the stability of posterior measures occurring in Bayesian inference with respect to perturbations of the prior measure and the log-likelihood function. This extends the well-posedness…

A Survey on Generative Adversarial Networks: Variants, Applications, and Training

This work surveys several training solutions proposed by different researchers to stabilize GAN training, reviews the original GAN model and its modified classical versions, and provides a detailed analysis of various GAN applications in different domains.

Approximation and Convergence Properties of Generative Adversarial Learning

It is shown that if the objective function is an adversarial divergence with some additional conditions, then using a restricted discriminator family has a moment-matching effect, thus generalizing previous results.