• Corpus ID: 233236726

Meta-Learning Divergences for Variational Inference

Ruqi Zhang, Yingzhen Li, Christopher De Sa, Sam Devlin, Cheng Zhang
Variational inference (VI) plays an essential role in approximate Bayesian inference due to its computational efficiency and broad applicability. Crucial to the performance of VI is the choice of the associated divergence measure, as VI approximates the intractable distribution by minimizing this divergence. In this paper, we propose a meta-learning algorithm to learn the divergence metric suited to the task of interest, automating the design of VI methods. In addition, we learn the… 
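The abstract's central point — VI approximates an intractable distribution by minimizing a chosen divergence — can be illustrated with a minimal sketch. The setup below (a standard-normal target and a Gaussian approximation, both our illustrative choices, not from the paper) runs gradient descent on the closed-form KL(q‖p) until q collapses onto p:

```python
# Minimal VI sketch: fit q(z) = N(mu, sigma^2) to p(z) = N(0, 1) by
# minimizing KL(q || p) = -log(sigma) + (sigma^2 + mu^2)/2 - 1/2,
# which has a closed form for two Gaussians.
import numpy as np

mu, log_sigma = 2.0, 1.0   # variational parameters (arbitrary init)
lr = 0.05

for _ in range(2000):
    sigma = np.exp(log_sigma)
    grad_mu = mu                       # d KL / d mu
    grad_log_sigma = sigma**2 - 1.0    # d KL / d log_sigma
    mu -= lr * grad_mu
    log_sigma -= lr * grad_log_sigma

# q converges onto p: mu -> 0, sigma -> 1
```

Meta-learning the divergence, as the paper proposes, amounts to treating the divergence itself (rather than just mu and sigma here) as something tuned across tasks.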
Robust PACm: Training Ensemble Models Under Model Misspecification and Outliers
This work presents a novel robust free energy criterion that combines the generalized logarithm score function with PACm ensemble bounds, producing predictive distributions that concurrently counteract the detrimental effects of model misspecification and outliers.
Mixture weights optimisation for Alpha-Divergence Variational Inference
The link between Power Descent and Entropic Mirror Descent is investigated, and first-order approximations allow the authors to introduce the Rényi Descent, a novel algorithm for which they prove an O(1/N) convergence rate.
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems.
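MAML's inner-adapt / outer-update structure can be sketched on a toy task family. The setup below is entirely our illustration (a scalar model, squared-error tasks, and a first-order approximation to the meta-gradient), not the paper's code:

```python
# First-order MAML sketch: each task t asks a scalar model theta to match a
# target c_t drawn from a task distribution centered at 0, with per-task loss
# L_t(theta) = (theta - c_t)^2. One inner gradient step adapts theta to the
# task; the outer update follows the gradient at the adapted parameters.
import numpy as np

rng = np.random.default_rng(1)
theta = 5.0                # meta-parameter (arbitrary init)
alpha, beta = 0.1, 0.05    # inner / outer learning rates

for _ in range(1000):
    c = rng.normal(0.0, 1.0)                    # sample a task
    grad_inner = 2 * (theta - c)
    theta_adapted = theta - alpha * grad_inner  # one inner adaptation step
    # first-order MAML: outer gradient evaluated at the adapted parameters
    grad_outer = 2 * (theta_adapted - c)
    theta -= beta * grad_outer

# theta drifts toward the center of the task distribution (0.0),
# i.e. an initialization from which one gradient step adapts well
```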
Rényi Divergence Variational Inference
The variational Rényi bound (VR) is introduced, which extends traditional variational inference to Rényi's alpha-divergences, and a novel variational inference method is proposed as a special case of the framework.
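The VR bound described above takes the form L_alpha = (1/(1-alpha)) log E_{z~q}[(p(z)/q(z))^(1-alpha)], which can be estimated by Monte Carlo. A quick numerical sketch with Gaussian p and q (our illustrative setup, not code from the paper):

```python
# Monte Carlo estimate of the variational Renyi bound
#   L_alpha = 1/(1-alpha) * log E_{z~q}[(p(z)/q(z))^(1-alpha)],  alpha != 1,
# for target p = N(0, 1) (so log Z = 0) and Gaussian q.
import numpy as np

def log_normal(z, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (z - mu)**2 / (2 * sigma**2)

def vr_bound(alpha, mu_q, sigma_q, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.normal(mu_q, sigma_q, n)
    log_w = log_normal(z, 0.0, 1.0) - log_normal(z, mu_q, sigma_q)  # log p - log q
    a = (1 - alpha) * log_w
    # numerically stabilized log-mean-exp
    return (a.max() + np.log(np.mean(np.exp(a - a.max())))) / (1 - alpha)

# When q = p, the bound is exact: L_alpha = log Z = 0 for any alpha != 1.
# When q != p, the bound for 0 < alpha < 1 falls strictly below log Z.
```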
Meta-Learning MCMC Proposals
A meta-learning approach to building effective and generalizable MCMC proposals that generalize to occurrences of common structural motifs across different models, allowing for the construction of a library of learned inference primitives that can accelerate inference on unseen models with no model-specific training required.
Variational Inference with Tail-adaptive f-Divergence
A new class of tail-adaptive f-divergences is proposed that adaptively changes the convex function f with the tail of the importance weights, in a way that theoretically guarantees finite moments while simultaneously achieving mass-covering properties.
Auto-Encoding Variational Bayes
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
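The key device behind this stochastic algorithm is the reparameterization trick: writing z = mu + sigma * eps with eps ~ N(0, 1) makes a Monte Carlo expectation differentiable with respect to (mu, sigma). A toy check (our illustration, not the paper's code), estimating d/dmu E_{z~N(mu, sigma^2)}[z^2], whose exact value is 2*mu:

```python
# Reparameterization trick: sample z through a deterministic, differentiable
# transform of parameter-free noise, then differentiate through the samples.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.7
eps = rng.standard_normal(1_000_000)
z = mu + sigma * eps           # z ~ N(mu, sigma^2), differentiable in mu
# f(z) = z^2  =>  df/dmu = (df/dz) * (dz/dmu) = 2*z * 1
grad_estimate = np.mean(2 * z)
# grad_estimate is close to the exact gradient 2*mu = 3.0
```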
Stochastic Expectation Propagation
Stochastic expectation propagation (SEP) is presented, which maintains a global posterior approximation but updates it in a local way (like EP), and is ideally suited to performing approximate Bayesian learning in the large-model, large-dataset setting.
Bayesian Model-Agnostic Meta-Learning
The proposed method combines scalable gradient-based meta-learning with nonparametric variational inference in a principled probabilistic framework and is capable of learning complex uncertainty structure beyond a point estimate or a simple Gaussian approximation during fast adaptation.
Possible generalization of Boltzmann-Gibbs statistics
With the use of a quantity normally scaled in multifractals, a generalized form is postulated for entropy, namely S_q ≡ k[1 − ∑_{i=1}^{W} p_i^q]/(q − 1), where q ∈ ℝ characterizes the generalization and the p_i are the probabilities.
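The generalized entropy S_q above recovers the Boltzmann–Gibbs/Shannon entropy −k ∑_i p_i ln p_i in the limit q → 1, which motivates its use as a one-parameter family of divergence-defining entropies. A quick numerical check (our sketch, with k = 1):

```python
# Tsallis entropy S_q = k * (1 - sum_i p_i^q) / (q - 1) and its q -> 1 limit.
import numpy as np

def tsallis_entropy(p, q, k=1.0):
    p = np.asarray(p, dtype=float)
    return k * (1.0 - np.sum(p**q)) / (q - 1.0)

p = np.array([0.5, 0.3, 0.2])
shannon = -np.sum(p * np.log(p))          # Boltzmann-Gibbs / Shannon entropy
approx = tsallis_entropy(p, 1.0 + 1e-6)   # S_q for q just above 1
# approx agrees with shannon to high precision
```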
Differential-geometrical methods in statistics, volume 28. Springer Science & Business Media, 2012.