Latent Variable Modelling Using Variational Autoencoders: A survey

@article{Kalingeri2022LatentVM,
  title={Latent Variable Modelling Using Variational Autoencoders: A survey},
  author={Vasanth Kalingeri},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.09891}
}
Learning the probability distribution of a dataset allows practitioners to uncover its hidden structure and to build models that solve supervised learning problems from limited data. The focus of this report is on variational autoencoders, a method for learning the probability distribution of large, complex datasets. The report provides a theoretical understanding of variational autoencoders and consolidates current research in the field. The report is divided into multiple chapters; the first chapter introduces…
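
As a minimal, hedged anchor for the discussion that follows (the notation below is standard, not quoted from the report): a variational autoencoder trains an encoder q_\phi(z|x) and a decoder p_\theta(x|z) jointly by maximizing the evidence lower bound (ELBO) on the data log-likelihood,

  \log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right] \;-\; \mathrm{KL}\!\left(q_\phi(z\mid x)\,\|\,p(z)\right),

where the first term rewards accurate reconstruction and the KL term keeps the approximate posterior close to the prior p(z), typically a standard Gaussian.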

References

Showing 1-10 of 50 references

InfoVAE: Information Maximizing Variational Autoencoders

TLDR
It is shown that this model can significantly improve the quality of the variational posterior and can make effective use of the latent features regardless of the flexibility of the decoding distribution. It is also demonstrated that these models outperform competing approaches on multiple performance metrics.
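
As a hedged sketch of the objective behind this reference (notation mine, not the survey's): InfoVAE-style models add to the usual ELBO a scaled divergence between the aggregated posterior q_\phi(z) = \mathbb{E}_{x \sim p_{\text{data}}}[q_\phi(z\mid x)] and the prior, for example the maximum mean discrepancy in the MMD-VAE variant,

  \mathcal{L} \;=\; \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right] \;-\; \mathrm{KL}\!\left(q_\phi(z\mid x)\,\|\,p(z)\right) \;-\; \lambda\, D\!\left(q_\phi(z)\,\|\,p(z)\right),

which encourages the latent code to stay informative even when the decoder is very flexible. (The full InfoVAE objective also reweights the KL term; the form above is a simplified special case.)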

Variational Inference for Monte Carlo Objectives

TLDR
The first unbiased gradient estimator designed for importance-sampled objectives is developed, which is both simpler and more effective than the NVIL estimator proposed for the single-sample variational objective, and is competitive with the currently used biased estimators.

Importance Weighted Autoencoders

TLDR
The importance weighted autoencoder (IWAE) is introduced: a generative model with the same architecture as the VAE but trained with a strictly tighter log-likelihood lower bound derived from importance weighting. Empirically, IWAEs learn richer latent-space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
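
For concreteness (a standard statement of the bound, not quoted from the survey), the IWAE objective with K importance samples is

  \mathcal{L}_K(x) \;=\; \mathbb{E}_{z_1,\dots,z_K \sim q_\phi(z\mid x)}\!\left[\log \frac{1}{K}\sum_{k=1}^{K} \frac{p_\theta(x, z_k)}{q_\phi(z_k\mid x)}\right] \;\le\; \log p_\theta(x),

which equals the ordinary ELBO for K = 1 and becomes a tighter lower bound as K increases.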

Ladder Variational Autoencoders

TLDR
A new inference model, the Ladder Variational Autoencoder, is proposed that recursively corrects the generative distribution with a data-dependent approximate likelihood, in a process resembling the recently proposed Ladder Network.
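
A hedged sketch of that correction step, using the precision-weighted combination I recall from the Ladder VAE paper (symbols are mine): at each stochastic layer the bottom-up, data-dependent Gaussian estimate (\hat{\mu}, \hat{\sigma}^2) is merged with the top-down generative estimate (\mu_p, \sigma_p^2) as

  \sigma_q^2 \;=\; \frac{1}{\hat{\sigma}^{-2} + \sigma_p^{-2}}, \qquad \mu_q \;=\; \sigma_q^2\left(\hat{\mu}\,\hat{\sigma}^{-2} + \mu_p\,\sigma_p^{-2}\right),

so the approximate posterior N(\mu_q, \sigma_q^2) corrects the generative distribution with data-dependent evidence, layer by layer.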

Variational Lossy Autoencoder

TLDR
This paper presents a simple but principled method to learn global representations by combining the Variational Autoencoder (VAE) with neural autoregressive models such as RNNs, MADE, and PixelRNN/CNN, which greatly improves the generative modeling performance of VAEs.

Generalized Denoising Auto-Encoders as Generative Models

TLDR
A different attack on the problem is proposed that handles arbitrary (but sufficiently noisy) corruption, arbitrary reconstruction losses, and both discrete and continuous-valued variables, and that removes the bias due to non-infinitesimal corruption noise.
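
A minimal sketch of how such a denoising autoencoder can be used generatively, assuming Gaussian corruption and a squared reconstruction loss; the PyTorch module, hyperparameters, and noise scales below are illustrative choices, not taken from the paper.

  import torch
  import torch.nn as nn

  class DAE(nn.Module):
      """Tiny MLP denoising autoencoder: reconstruct clean x from a corrupted input."""
      def __init__(self, dim=32, hidden=128):
          super().__init__()
          self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

      def forward(self, x_corrupted):
          return self.net(x_corrupted)

  dim, sigma = 32, 0.5                      # data dimension and corruption noise scale
  model = DAE(dim)
  opt = torch.optim.Adam(model.parameters(), lr=1e-3)
  data = torch.randn(1024, dim)             # stand-in dataset

  # Training: corrupt each minibatch and minimize the reconstruction error.
  for step in range(200):
      x = data[torch.randint(0, len(data), (64,))]
      x_tilde = x + sigma * torch.randn_like(x)
      loss = ((model(x_tilde) - x) ** 2).mean()
      opt.zero_grad(); loss.backward(); opt.step()

  # Sampling: alternate corruption and stochastic reconstruction; under the paper's
  # conditions this Markov chain approximately samples from the learned data distribution.
  x = torch.randn(1, dim)
  with torch.no_grad():
      for _ in range(100):
          x_tilde = x + sigma * torch.randn_like(x)
          x = model(x_tilde) + 0.1 * torch.randn_like(x_tilde)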

Neural Discrete Representation Learning

TLDR
Pairing these representations with an autoregressive prior, the model can generate high-quality images, videos, and speech, as well as perform high-quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.
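
A hedged sketch of the vector-quantization step at the core of this model, including the straight-through gradient trick; the codebook size, dimensions, and names below are illustrative.

  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class VectorQuantizer(nn.Module):
      """Replace each encoder output vector with its nearest codebook entry."""
      def __init__(self, num_codes=512, code_dim=64, beta=0.25):
          super().__init__()
          self.codebook = nn.Embedding(num_codes, code_dim)
          self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
          self.beta = beta                                  # commitment cost

      def forward(self, z_e):                               # z_e: (batch, code_dim)
          d = torch.cdist(z_e, self.codebook.weight)        # distances to all codes
          idx = d.argmin(dim=1)                             # nearest-code indices
          z_q = self.codebook(idx)                          # quantized latents
          # Codebook term pulls codes toward encoder outputs; commitment term does the reverse.
          vq_loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
          z_q = z_e + (z_q - z_e).detach()                  # straight-through gradient copy
          return z_q, idx, vq_loss

The discrete indices idx are what an autoregressive prior is later fit over.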

The Variational Gaussian Process

TLDR
The variational Gaussian process (VGP) is constructed, a Bayesian nonparametric model that adapts its shape to match complex posterior distributions, and a universal approximation theorem is proved for the VGP, demonstrating its representational power for learning any model.

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…
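
For reference (the standard formulation, not part of the truncated snippet above), the two networks play the minimax game

  \min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right] \;+\; \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right],

where the discriminator D learns to tell data from samples and the generator G learns to fool it.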

Improving Variational Autoencoders with Inverse Autoregressive Flow

TLDR
In experiments with natural images, it is demonstrated that inverse autoregressive flow leads to significant performance gains and is well suited to models with high-dimensional latent spaces, such as convolutional generative models.
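
A brief sketch of a single inverse autoregressive flow step as described in the paper (notation mine): an autoregressive network maps the previous iterate to shifts \mu_t and scales \sigma_t, giving an invertible transformation with a triangular Jacobian and hence a cheap log-determinant,

  z_t \;=\; \sigma_t \odot z_{t-1} \;+\; \mu_t, \qquad \log\left|\det\frac{\partial z_t}{\partial z_{t-1}}\right| \;=\; \sum_i \log \sigma_{t,i},

and stacking several such steps turns a diagonal-Gaussian posterior into a much more flexible one at little extra cost.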