Gibbs sampler and coordinate ascent variational inference: A set-theoretical review

@article{Lee2020GibbsSA,
  title={Gibbs sampler and coordinate ascent variational inference: A set-theoretical review},
  author={Se Yoon Lee},
  journal={arXiv: Statistics Theory},
  year={2020}
}
  • Se Yoon Lee
  • Published 2020
  • Mathematics
  • arXiv: Statistics Theory
A central task in Bayesian machine learning is the approximation of the posterior distribution. The Gibbs sampler and coordinate ascent variational inference (CAVI) are widely used approximation techniques, relying on stochastic and deterministic approximation, respectively. This article clarifies that the two schemes can be explained more generally from a set-theoretical point of view; the alternative views are consequences of a duality formula for variational inference.
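For orientation, here is a minimal sketch of the two update schemes contrasted in the abstract, written in standard textbook notation rather than the paper's set-theoretical formulation. For a joint model p(x, z) with latent variables z = (z_1, ..., z_m), the Gibbs sampler cycles through stochastic draws from the full conditionals, whereas CAVI cycles through deterministic updates of a factorized approximation q(z) = \prod_j q_j(z_j):

  z_j \sim p(z_j \mid z_{-j}, x)   (Gibbs update),
  q_j(z_j) \propto \exp\big( \mathbb{E}_{q_{-j}}[ \log p(z, x) ] \big)   (CAVI update).

Both hinge on the standard evidence decomposition, one common form of the variational duality referred to above:

  \log p(x) = \mathbb{E}_{q}[ \log p(x, z) - \log q(z) ] + \mathrm{KL}\big( q(z) \,\|\, p(z \mid x) \big).
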
1 Citation

Improving MC-Dropout Uncertainty Estimates with Calibration Error-based Optimization
This study proposes two new loss functions that combine cross entropy with Expected Calibration Error (ECE) and Predictive Entropy (PE), and shows that the proposed losses yield a calibrated MC-Dropout method.

References

Showing 1-10 of 71 references
Concentration inequalities and model selection
  • 2007
Probability and Measure
Probability. Measure. Integration. Random Variables and Expected Values. Convergence of Distributions. Derivatives and Conditional Probability. Stochastic Processes. Appendix. Notes on the Problems.
Advances in Variational Inference
An overview of recent trends in variational inference is given and a summary of promising future research directions is provided.
Yes, but Did It Work?: Evaluating Variational Inference
Two diagnostic algorithms are proposed that give a goodness-of-fit measurement for joint distributions while simultaneously improving the error in the estimate.
Frequentist Consistency of Variational Bayes
It is proved that the VB posterior converges to the Kullback–Leibler (KL) minimizer of a normal distribution centered at the truth, and that the corresponding variational expectation of the parameter is consistent and asymptotically normal.
Theoretical and Computational Guarantees of Mean Field Variational Inference for Community Detection
The mean field method for community detection under the Stochastic Block Model is shown to have a linear convergence rate and to converge to the minimax rate within $\log n$ iterations; similar optimality results are obtained for Gibbs sampling and for an iterative maximum likelihood procedure, which may be of independent interest.
  • 2016
An overview of gradient descent optimization algorithms
This article looks at different variants of gradient descent, summarizes challenges, introduces the most common optimization algorithms, reviews architectures in a parallel and distributed setting, and investigates additional strategies for optimizing gradient descent.
Uncertainty in Deep Learning
This work develops tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation, and develops the theory for such tools.
Variational Inference: A Review for Statisticians
Variational inference (VI), a method from machine learning that approximates probability densities through optimization, is reviewed, and a variant that uses stochastic optimization to scale up to massive data is derived.