Quasi Black-Box Variational Inference with Natural Gradients for Bayesian Learning

@article{Magris2022QuasiBV,
  title={Quasi Black-Box Variational Inference with Natural Gradients for Bayesian Learning},
  author={Martin Magris and Mostafa Shabani and Alexandros Iosifidis},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.11568}
}
We develop an optimization algorithm suitable for Bayesian learning in complex models. Our approach relies on natural gradient updates within a general black-box framework for efficient training with limited model-specific derivations. It applies within the class of exponential-family variational posterior distributions; we discuss the Gaussian case in detail, for which the updates take a particularly simple form. Our Quasi Black-box Variational Inference (QBVI) framework is readily…
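
For orientation, the kind of update the abstract alludes to can be sketched for a Gaussian variational posterior q(θ) = N(μ, Σ). This is the standard natural-gradient variational inference recursion (as in the Khan and Nielsen reference below), not the paper's exact QBVI update:

    \Sigma_{t+1}^{-1} = \Sigma_t^{-1} - 2\rho_t\,\widehat{\nabla}_{\Sigma}\mathcal{L},
    \qquad
    \mu_{t+1} = \mu_t + \rho_t\,\Sigma_{t+1}\,\widehat{\nabla}_{\mu}\mathcal{L},

where \mathcal{L} is the evidence lower bound, \rho_t is a step size, and \widehat{\nabla}_{\mu}\mathcal{L}, \widehat{\nabla}_{\Sigma}\mathcal{L} are Monte Carlo gradient estimates obtained from samples of q.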

References

Showing 1-10 of 42 references
Black Box Variational Inference
TLDR
This paper presents a "black box" variational inference algorithm, one that can be quickly applied to many models with little additional derivation, based on a stochastic optimization of the variational objective where the noisy gradient is computed from Monte Carlo samples from the variational distribution.
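As a concrete illustration of that recipe, here is a minimal sketch of the score-function estimator on a toy Gaussian-mean model (the data, variational family, step size, and sample count are illustrative assumptions, not taken from the paper); note that it needs only log-joint evaluations and the score of q, with no model-specific derivatives:

    # Minimal sketch of the score-function ("black-box") gradient estimator.
    # The toy model, step size, and sample count are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(loc=2.0, scale=1.0, size=50)   # toy observations (assumed)

    def log_joint(theta):
        # Gaussian likelihood with unit variance and a N(0, 10^2) prior on the mean.
        return -0.5 * np.sum((x - theta) ** 2) - 0.5 * theta ** 2 / 100.0

    # Variational family: q(theta) = N(mu, exp(log_sigma)^2)
    mu, log_sigma = 0.0, 0.0
    lr, num_samples, baseline = 0.005, 64, 0.0

    for t in range(3000):
        sigma = np.exp(log_sigma)
        theta = rng.normal(mu, sigma, size=num_samples)
        logq = -0.5 * ((theta - mu) / sigma) ** 2 - log_sigma - 0.5 * np.log(2.0 * np.pi)
        # Scores of log q with respect to the variational parameters (mu, log_sigma).
        score_mu = (theta - mu) / sigma ** 2
        score_ls = ((theta - mu) ** 2) / sigma ** 2 - 1.0
        # ELBO integrand: only log-joint evaluations are needed here.
        f = np.array([log_joint(th) for th in theta]) - logq
        fc = f - baseline                       # running baseline tames the variance
        mu += lr * np.mean(score_mu * fc)
        log_sigma += lr * np.mean(score_ls * fc)
        baseline = 0.9 * baseline + 0.1 * np.mean(f)

    # For this conjugate toy model the exact posterior is roughly N(x.mean(), 1/len(x)).
    print(f"q(theta) ~ N({mu:.3f}, {np.exp(log_sigma):.3f}^2)")

The running baseline above is only a simple variance-reduction device; the paper itself controls the variance of this estimator with Rao-Blackwellization and control variates.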
Fast yet Simple Natural-Gradient Descent for Variational Inference in Complex Models
  • M. E. Khan, Didrik Nielsen
  • Computer Science
    2018 International Symposium on Information Theory and Its Applications (ISITA)
  • 2018
TLDR
It is shown how to derive fast yet simple natural-gradient updates by using a duality associated with exponential-family distributions, which can improve convergence by exploiting the information geometry of the solutions.
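The duality mentioned in the summary can be stated compactly (generic exponential-family notation, not the paper's full derivation): for a variational family q_λ in an exponential family with natural parameter λ and mean parameter m = E_{q_λ}[φ(θ)], the natural gradient of the ELBO with respect to λ equals the ordinary gradient with respect to m,

    \widetilde{\nabla}_{\lambda}\mathcal{L} \;=\; \nabla_{m}\mathcal{L},
    \qquad
    \lambda_{t+1} \;=\; \lambda_t + \rho_t\,\nabla_{m}\mathcal{L}\big|_{m=m_t},

so a natural-gradient step in λ requires no explicit Fisher-matrix inversion.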
Automatic Variational Inference in Stan
TLDR
An automatic variational inference algorithm, automatic differentiation variational inference (ADVI), is implemented in Stan, a probabilistic programming system, and can be used on any model the user writes in Stan.
Variational Inference: A Review for Statisticians
TLDR
Variational inference (VI), a method from machine learning that approximates probability densities through optimization, is reviewed and a variant that uses stochastic optimization to scale up to massive data is derived.
Variational Bayes with synthetic likelihood
TLDR
This article develops alternatives to Markov chain Monte Carlo implementations of Bayesian synthetic likelihood with reduced computational overhead, using stochastic-gradient variational inference methods for posterior approximation in the synthetic-likelihood context with unbiased estimates of the log likelihood.
Handling the Positive-Definite Constraint in the Bayesian Learning Rule
TLDR
This work makes it easier to apply the Bayesian learning rule in the presence of positive-definite constraints in parameter spaces, and proposes a principled way to derive Riemannian gradients and retractions from scratch.
Practical Deep Learning with Bayesian Principles
TLDR
This work enables practical deep learning while preserving benefits of Bayesian principles, and applies techniques such as batch normalisation, data augmentation, and distributed training to achieve similar performance in about the same number of epochs as the Adam optimiser.
On Using Control Variates with Stochastic Approximation for Variational Bayes and its Connection to Stochastic Linear Regression
TLDR
This note derives the ideal set of control variates for stochastic approximation variational Bayes under a certain set of assumptions and shows that using these control variates is closely related to the stochastic linear regression approximation technique the authors proposed earlier.
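Schematically (a generic control-variate construction in our notation, not necessarily the exact estimator derived in the note), with h(θ) = ∇_λ log q(θ | λ) and f(θ) = log p(x, θ) − log q(θ | λ), one uses

    \hat{g} \;=\; \frac{1}{S}\sum_{s=1}^{S} h(\theta_s)\,\big(f(\theta_s) - a\big),
    \qquad
    a^{\star} \;=\; \frac{\operatorname{Cov}\!\big(h(\theta)\,f(\theta),\, h(\theta)\big)}{\operatorname{Var}\!\big(h(\theta)\big)}
    \quad \text{(per component of } \lambda\text{)},

which stays unbiased because E_q[h(θ)] = 0, while a⋆ minimizes the variance of the estimator.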
Bayesian Deep Net GLM and GLMM
TLDR
This work describes flexible versions of generalized linear and generalized linear mixed models incorporating basis functions formed by a deep feedforward neural network (DFNN), and proposes natural gradient methods for the optimization, exploiting the factor structure of the variational covariance matrix in the computation of the natural gradient.
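The factor structure referred to is typically of the form (notation ours)

    \Sigma \;=\; B B^{\top} + D^{2},

with B a d × k matrix of factor loadings (k ≪ d) and D diagonal, which reduces storage from O(d²) to O(dk) and, via the Woodbury identity, keeps the natural-gradient computations tractable in high dimensions.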