Corpus ID: 226975633

Recursive Inference for Variational Autoencoders

@article{Kim2020RecursiveIF,
  title={Recursive Inference for Variational Autoencoders},
  author={Minyoung Kim and V. Pavlovic},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.08544}
}
Inference networks of traditional Variational Autoencoders (VAEs) are typically amortized, resulting in relatively inaccurate posterior approximation compared to instance-wise variational optimization. Recent semi-amortized approaches were proposed to address this drawback; however, their iterative gradient update procedures can be computationally demanding. To address these issues, in this paper we introduce an accurate amortized inference algorithm. We propose a novel recursive mixture…
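To make the terms "amortized" and "instance-wise" concrete, the following block (not taken from the paper) restates the standard inference-gap decomposition from the VAE literature, where $q_\phi(\cdot\mid x)$ is the amortized encoder output and $q^\ast$ is the best distribution in the variational family for a given $x$:

```latex
\log p_\theta(x) \;\ge\; \mathcal{L}(x; q)
  = \mathbb{E}_{q(z)}\big[\log p_\theta(x, z) - \log q(z)\big]
  \quad \text{(the ELBO)},
```

```latex
\underbrace{\log p_\theta(x) - \mathcal{L}\big(x; q_\phi(\cdot\mid x)\big)}_{\text{inference gap}}
  = \underbrace{\log p_\theta(x) - \mathcal{L}(x; q^\ast)}_{\text{approximation gap}}
  + \underbrace{\mathcal{L}(x; q^\ast) - \mathcal{L}\big(x; q_\phi(\cdot\mid x)\big)}_{\text{amortization gap}}.
```

Instance-wise variational optimization drives the amortization gap toward zero at the cost of per-sample optimization; semi-amortized methods trade off between the two.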
Reducing the Amortization Gap in Variational Autoencoders: A Bayesian Random Function Approach
TLDR: This paper models the mean and variance functions of the variational posterior as random Gaussian processes (GPs), so that the deviation of the VAE's amortized posterior from the true posterior can be treated as random noise, which makes it possible to account for the uncertainty in posterior approximation in a principled manner.
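A minimal notational sketch of the idea summarized above; the symbols $\mu_\phi$, $\sigma_\phi$, $k_\mu$, $k_\sigma$ are introduced here for illustration rather than taken from the paper. The amortized encoder supplies the GP means, while the GP covariances absorb the random deviation from the true posterior:

```latex
q(z \mid x) = \mathcal{N}\!\big(z \mid \mu(x),\, \mathrm{diag}(\sigma^2(x))\big),
\qquad
\mu \sim \mathcal{GP}\big(\mu_\phi,\, k_\mu\big), \quad
\sigma \sim \mathcal{GP}\big(\sigma_\phi,\, k_\sigma\big).
```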

References

Showing 1–10 of 51 references
Variational Laplace Autoencoders
TLDR: Proposes Variational Laplace Autoencoders (VLAEs), a general framework for training deep generative models based on the Laplace approximation of the latent variable posterior; VLAEs enhance the expressiveness of the posterior while reducing the amortization error.
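For reference, the Laplace approximation of the latent posterior mentioned above takes the following standard form (notation introduced here, not the paper's):

```latex
z^\ast = \arg\max_{z}\, \log p_\theta(x, z),
\qquad
q(z \mid x) = \mathcal{N}\!\big(z \mid z^\ast,\, \Lambda^{-1}\big),
\quad
\Lambda = -\nabla_z^2 \log p_\theta(x, z)\big|_{z = z^\ast}.
```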
Semi-Amortized Variational Autoencoders
TLDR: This work proposes a hybrid approach that uses amortized variational inference (AVI) to initialize the variational parameters and stochastic variational inference (SVI) to refine them, enabling rich generative models to be trained without the posterior-collapse phenomenon common when training VAEs for problems like text generation.
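A minimal PyTorch-style sketch of the semi-amortized recipe summarized above: the encoder's output initializes per-instance variational parameters, which are then refined by a few gradient steps on the ELBO. The interfaces `encoder(x) -> (mu, log_var)` and `decoder(z) -> logits`, and all hyperparameters, are placeholders introduced here for illustration; this is not the authors' implementation.

```python
import torch

def gaussian_kl_to_std_normal(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims.
    return 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1.0).sum(dim=-1)

def semi_amortized_inference(x, encoder, decoder, n_svi_steps=10, lr=1e-2):
    """AVI initialization followed by per-instance SVI refinement (sketch)."""
    # Step 1: amortized initialization from the inference network.
    mu, log_var = encoder(x)
    mu = mu.detach().clone().requires_grad_(True)
    log_var = log_var.detach().clone().requires_grad_(True)

    opt = torch.optim.SGD([mu, log_var], lr=lr)
    for _ in range(n_svi_steps):
        opt.zero_grad()
        # Reparameterized sample from q(z|x) = N(mu, diag(exp(log_var))).
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()
        logits = decoder(z)
        recon = -torch.nn.functional.binary_cross_entropy_with_logits(
            logits, x, reduction="none").sum(dim=-1)
        elbo = recon - gaussian_kl_to_std_normal(mu, log_var)
        # Step 2: per-instance gradient refinement, maximizing the ELBO.
        (-elbo.mean()).backward()
        opt.step()
    return mu, log_var
```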
Inference Suboptimality in Variational Autoencoders
TLDR: Finds that divergence from the true posterior is often due to imperfect recognition networks rather than to the limited complexity of the approximating distribution, and that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference.
Iterative Amortized Inference
TLDR: This work proposes iterative inference models, which learn to perform inference optimization by repeatedly encoding gradients, and demonstrates that they outperform standard inference models on several benchmark data sets of images and text.
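The learned update described above can be written schematically as follows, with $\lambda$ the variational parameters, $\mathcal{L}$ the ELBO, and $f_\phi$ the iterative inference model (notation introduced here for illustration):

```latex
\lambda_{t+1} = f_\phi\!\big(\lambda_t,\; \nabla_{\lambda}\,\mathcal{L}(x; \lambda_t)\big),
\qquad t = 0, 1, \dots, T-1.
```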
Importance Weighted Autoencoders
TLDR: The importance weighted autoencoder (IWAE) is a generative model with the same architecture as the VAE, but it uses a strictly tighter log-likelihood lower bound derived from importance weighting; empirically, IWAEs learn richer latent-space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
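For reference, the importance-weighted bound with $K$ samples is

```latex
\mathcal{L}_K(x) = \mathbb{E}_{z_1,\dots,z_K \sim q_\phi(z \mid x)}
  \left[\log \frac{1}{K}\sum_{k=1}^{K} \frac{p_\theta(x, z_k)}{q_\phi(z_k \mid x)}\right]
  \;\le\; \log p_\theta(x),
\qquad
\mathcal{L}_K \le \mathcal{L}_{K+1},
```

which recovers the standard ELBO at $K = 1$ and tightens monotonically as $K$ grows.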
Boosting Variational Inference
TLDR: Develops boosting variational inference, an algorithm that iteratively improves the current approximation by mixing it with a new component from the base distribution family, thereby yielding progressively more accurate posterior approximations as more computing time is spent.
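Schematically, each boosting iteration updates the approximation as a convex combination of the previous mixture and a new component $s_t$ from the base family (notation introduced here for illustration):

```latex
q_t(z) = (1 - \gamma_t)\, q_{t-1}(z) + \gamma_t\, s_t(z),
\qquad \gamma_t \in [0, 1],
```

where $s_t$ is chosen to most reduce the divergence to the target posterior and $\gamma_t$ is set, e.g., by line search or a fixed schedule.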
Boosting Variational Inference: an Optimization Perspective
TLDR: This work studies the convergence properties of boosting variational inference from a modern optimization viewpoint by establishing connections to the classic Frank-Wolfe algorithm, yielding novel theoretical insights into sufficient conditions for convergence, explicit rates, and algorithmic simplifications.
VAE with a VampPrior
TLDR: This paper extends the variational autoencoder (VAE) framework with a new type of prior called the "Variational Mixture of Posteriors" prior, or VampPrior for short, which is a mixture distribution with components given by variational posteriors conditioned on learnable pseudo-inputs.
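The VampPrior replaces the standard normal prior with a mixture of the encoder's variational posteriors evaluated at $K$ learnable pseudo-inputs $u_1, \dots, u_K$:

```latex
p_\lambda(z) = \frac{1}{K}\sum_{k=1}^{K} q_\phi\big(z \mid u_k\big).
```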
Boosting Black Box Variational Inference
TLDR: This work shows that boosting VI satisfies a relaxed smoothness assumption that is sufficient for convergence of the functional Frank-Wolfe (FW) algorithm, and proposes maximizing the Residual ELBO (RELBO), which replaces the standard ELBO optimization in VI.
Universal Boosting Variational Inference
Boosting variational inference (BVI) approximates an intractable probability density by iteratively building up a mixture of simple component distributions one at a time, using techniques from sparse…