• Corpus ID: 59606164

Asymptotic Consistency of $\alpha$-Rényi-Approximate Posteriors.

@article{Jaiswal2019AsymptoticCO,
  title={Asymptotic Consistency of $\alpha$-R\'enyi-Approximate Posteriors.},
  author={Prateek Jaiswal and Vinayak A. Rao and Harsha Honnappa},
  journal={arXiv: Statistics Theory},
  year={2019}
}
We study the asymptotic consistency properties of $\alpha$-Rényi approximate posteriors, a class of variational Bayesian methods that approximate an intractable Bayesian posterior with a member of a tractable family of distributions, the member being chosen to minimize the $\alpha$-Rényi divergence from the true posterior. Unique to our work is that we consider settings with $\alpha > 1$, resulting in approximations that upper-bound the log-likelihood and consequently have wider spread than… 
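For reference, and reading the abstract's "divergence from the true posterior" as $D_{\alpha}(\pi(\cdot\mid X_n)\,\|\,q)$ (the order consistent with the upper-bound claim; the notation $p(X_n,\theta)$ for the joint of data and parameter and $p(X_n)$ for the evidence is assumed here), the objective and the bound can be sketched as

\[
D_{\alpha}\big(\pi(\cdot\mid X_n)\,\|\,q\big) \;=\; \frac{1}{\alpha-1}\,\log \int \pi(\theta\mid X_n)^{\alpha}\, q(\theta)^{1-\alpha}\, d\theta,
\qquad \alpha > 1,
\]
\[
\frac{1}{\alpha}\,\log\, \mathbb{E}_{q}\!\left[\left(\frac{p(X_n,\theta)}{q(\theta)}\right)^{\!\alpha}\right]
\;=\; \log p(X_n) \;+\; \frac{\alpha-1}{\alpha}\, D_{\alpha}\big(\pi(\cdot\mid X_n)\,\|\,q\big)
\;\ge\; \log p(X_n),
\]

so minimizing the left-hand side over the variational family is equivalent to minimizing the $\alpha$-Rényi divergence, and the optimized objective upper-bounds the log evidence, as stated above.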


Frequentist Consistency of Generalized Variational Inference

This paper shows that under minimal regularity conditions, the sequence of GVI posteriors is consistent and collapses to a point mass at the population-optimal parameter value as the number of observations goes to infinity.
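As a sketch of the object being analysed (the loss $\ell$, divergence $D$, prior $\pi$, and variational family $\mathcal{Q}$ are assumed notation, not taken from the snippet above), a GVI posterior is the solution of a regularized optimization problem,

\[
q^{*}_{n} \;=\; \operatorname*{arg\,min}_{q\in\mathcal{Q}}\;\left\{\; \mathbb{E}_{\theta\sim q}\!\left[\sum_{i=1}^{n}\ell(\theta, x_i)\right] \;+\; D\big(q\,\|\,\pi\big) \;\right\},
\]

and consistency here means that $q^{*}_{n}$ collapses to a point mass at the population-optimal value $\theta^{*}=\operatorname*{arg\,min}_{\theta}\mathbb{E}[\ell(\theta,X)]$ as $n\to\infty$.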

Computational Bayes-Predictive Stochastic Programming: Finite Sample Bounds

This work studies computational approaches to decision-making based on the optimization-based methodology of variational Bayes, considering two formulations: a two-stage approach, in which a posterior approximation is constructed and then used to solve the decision problem, and a joint approach, which solves the variational and decision problems simultaneously.
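Schematically (the cost $c$, action $a$, and uncertain quantity $\xi$ are assumed notation), the two-stage approach first builds a variational approximation $q^{*}\approx\pi(\cdot\mid X_n)$ and then solves the decision problem

\[
a^{*} \;=\; \operatorname*{arg\,min}_{a}\;\; \mathbb{E}_{\theta\sim q^{*}}\Big[\,\mathbb{E}_{\xi\sim p(\cdot\mid\theta)}\big[c(a,\xi)\big]\,\Big],
\]

whereas the joint approach optimizes over the pair $(q, a)$ simultaneously rather than fixing $q^{*}$ first.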

Risk-Sensitive Variational Bayes: Formulations and Bounds

The key methodological innovation in this paper is to leverage a dual representation of the risk measure to introduce an optimization-based framework for approximately computing the posterior risk-sensitive objective, as opposed to using standard sampling-based methods such as Markov chain Monte Carlo.
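As one concrete instance of such a dual representation (the entropic risk measure used here is an assumption for illustration, not taken from the snippet above), the Donsker–Varadhan formula gives

\[
\frac{1}{\gamma}\,\log\,\mathbb{E}_{P}\big[e^{\gamma X}\big]
\;=\; \sup_{Q\ll P}\left\{\;\mathbb{E}_{Q}[X] \;-\; \frac{1}{\gamma}\,\mathrm{KL}\big(Q\,\|\,P\big)\;\right\},
\qquad \gamma>0,
\]

which recasts the risk-sensitive objective as an optimization over distributions $Q$ and thus makes it amenable to variational approximation.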

Asymptotic consistency of loss‐calibrated variational Bayes

This paper establishes the asymptotic consistency of the loss-calibrated variational Bayes (LCVB) method. LCVB is a method for approximately computing Bayesian posteriors that are calibrated to a downstream loss…

Convergence Rates of Variational Inference in Sparse Deep Learning

This paper shows that variational inference for sparse deep learning retains the same generalization properties as exact Bayesian inference and highlights the connection between estimation and approximation theories via the classical bias-variance trade-off.

A Generalization Bound for Online Variational Inference

It is shown that generalization bounds hold for some online variational inference (VI) algorithms, and theoretical justifications in favor of online algorithms relying on approximate Bayesian methods are presented.

$\alpha$-variational inference with statistical guarantees

A family of variational approximations to Bayesian posterior distributions, called $\alpha$-VB, is proposed with provable statistical guarantees, which imply that point estimates constructed from the $\alpha$-VB procedure converge at an optimal rate to the true parameter in a wide range of problems.
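If this is the $\alpha$-VB objective built on the fractional (tempered) likelihood — a reading assumed here rather than stated in the snippet above — it is typically written as

\[
q^{*}_{\alpha} \;=\; \operatorname*{arg\,max}_{q\in\mathcal{Q}}\;\left\{\; \alpha\,\mathbb{E}_{\theta\sim q}\!\left[\sum_{i=1}^{n}\log p(x_i\mid\theta)\right] \;-\; \mathrm{KL}\big(q\,\|\,\pi\big)\;\right\},
\qquad 0<\alpha<1,
\]

i.e. a KL-variational approximation to the $\alpha$-fractional posterior $\pi_{n,\alpha}(\theta)\propto\pi(\theta)\prod_{i=1}^{n}p(x_i\mid\theta)^{\alpha}$.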

Frequentist Consistency of Variational Bayes

  • Yixin Wang, D. Blei
  • Mathematics, Computer Science
    Journal of the American Statistical Association
  • 2018
It is proved that the VB posterior converges to the Kullback–Leibler (KL) minimizer of a normal distribution centered at the truth, and that the corresponding variational expectation of the parameter is consistent and asymptotically normal.
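Schematically (with $\theta_0$ the true parameter and $V$ a limiting covariance matrix; this compressed form is an assumption, not a quotation of the theorem), the result can be written as

\[
q^{\mathrm{VB}}_{n} \;\longrightarrow\; \operatorname*{arg\,min}_{q\in\mathcal{Q}}\;\mathrm{KL}\!\left(q\,\Big\|\,\mathcal{N}\!\big(\theta_0,\,\tfrac{1}{n}V\big)\right)
\quad\text{in total variation},
\]

with the variational point estimate $\mathbb{E}_{q^{\mathrm{VB}}_{n}}[\theta]$ consistent and asymptotically normal.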

Variational Inference via χ Upper Bound Minimization

CHIVI is proposed, a black-box variational inference algorithm that minimizes the $\chi$-divergence from p to q by minimizing an upper bound of the model evidence, referred to as the $\chi$ upper bound (CUBO).

Variational Inference: A Review for Statisticians

Variational inference (VI), a method from machine learning that approximates probability densities through optimization, is reviewed and a variant that uses stochastic optimization to scale up to massive data is derived.
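The central object in that review is the evidence lower bound (ELBO); with $x$ observed data and $z$ latent variables (standard notation, assumed here),

\[
\mathrm{ELBO}(q) \;=\; \mathbb{E}_{q(z)}\big[\log p(x,z)\big] \;-\; \mathbb{E}_{q(z)}\big[\log q(z)\big]
\;=\; \log p(x) \;-\; \mathrm{KL}\big(q(z)\,\|\,p(z\mid x)\big),
\]

so maximizing the ELBO over $q\in\mathcal{Q}$ is equivalent to minimizing $\mathrm{KL}(q\,\|\,p(\cdot\mid x))$; the stochastic-optimization variant mentioned above replaces these expectations with noisy estimates computed from data subsamples.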

Convergence rates of variational posterior distributions

We study convergence rates of variational posterior distributions for nonparametric and high-dimensional inference. We formulate general conditions on prior, likelihood, and variational class that…

On Bayes Procedures

In this chapter we describe some of the asymptotic properties of Bayes procedures. These are obtained by placing a finite positive measure μ on the parameter set Θ and minimizing the average risk…
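In symbols (with $R(\theta,\delta)$ the risk of a procedure $\delta$ at parameter $\theta$; notation assumed), the average risk being minimized is

\[
r(\mu,\delta) \;=\; \int_{\Theta} R(\theta,\delta)\,\mu(d\theta),
\qquad
\delta_{\mu} \;=\; \operatorname*{arg\,min}_{\delta}\, r(\mu,\delta),
\]

and $\delta_{\mu}$ is the Bayes procedure with respect to $\mu$.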

The Bernstein-Von-Mises theorem under misspecification

We prove that the posterior distribution of a parameter in misspecified LAN parametric models can be approximated by a random normal distribution. We derive from this that Bayesian credible sets are…
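Schematically (with $\theta^{*}$ the pseudo-true parameter, $\hat{\theta}_n$ a suitable estimator sequence, and $\Sigma_{\theta^{*}}$ a limiting covariance; this compressed form is an assumption, not a quotation), the approximation reads

\[
\sup_{B}\;\Big|\,\Pi\big(\theta\in B \mid X_{1:n}\big) \;-\; \mathcal{N}\!\big(\hat{\theta}_n,\,\tfrac{1}{n}\Sigma_{\theta^{*}}\big)(B)\,\Big| \;\xrightarrow{\;P\;}\; 0,
\]

and because the posterior spread $\Sigma_{\theta^{*}}$ need not match the sampling covariance of $\hat{\theta}_n$ under misspecification, credible sets need not have nominal frequentist coverage.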

Variational Inference via χ Upper Bound Minimization

CHIVI is proposed, a black-box variational inference algorithm that minimizes $D_{\chi}(p\,\|\,q)$, the χ-divergence from p to q, which leads to improved posterior uncertainty and can be used with the classical VI lower bound (ELBO) to provide a sandwich estimate of the model evidence.
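For concreteness, the χ upper bound of order $n$ is

\[
\mathrm{CUBO}_{n}(q) \;=\; \frac{1}{n}\,\log\,\mathbb{E}_{q(z)}\!\left[\left(\frac{p(x,z)}{q(z)}\right)^{\!n}\right]
\;\ge\; \log p(x) \;\ge\; \mathrm{ELBO}(q),
\qquad n>1\ (\text{typically }n=2),
\]

which is why it can be paired with the ELBO to sandwich the model evidence, as noted above.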

The semiparametric Bernstein-von Mises theorem

In a smooth semiparametric estimation problem, the marginal posterior for the parameter of interest is expected to be asymptotically normal and satisfy frequentist criteria of optimality if the model…

Rényi Divergence Variational Inference

The variational Rényi bound (VR) is introduced, which extends traditional variational inference to Rényi's α-divergences, and a novel variational inference method is proposed as a special case of the framework.
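The variational Rényi bound referred to above is usually stated, with $x$ observed and $z$ latent (notation assumed), as

\[
\mathcal{L}_{\alpha}(q;x) \;=\; \frac{1}{1-\alpha}\,\log\,\mathbb{E}_{q(z)}\!\left[\left(\frac{p(x,z)}{q(z)}\right)^{\!1-\alpha}\right],
\]

which recovers the standard ELBO in the limit $\alpha\to 1$ and the exact log evidence at $\alpha=0$; varying $\alpha$ trades mass-covering against mode-seeking behaviour in the resulting approximation.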