Variational Autoencoder with Implicit Optimal Priors

@inproceedings{Takahashi2018VariationalAW,
  title={Variational Autoencoder with Implicit Optimal Priors},
  author={Hiroshi Takahashi and Tomoharu Iwata and Yuki Yamanaka and Masanori Yamada and Satoshi Yagi},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2018}
}
The variational autoencoder (VAE) is a powerful generative model that can estimate the probability of a data point by using latent variables. In the VAE, the posterior of the latent variable given the data point is regularized by the prior of the latent variable using the Kullback–Leibler (KL) divergence. Although the standard Gaussian distribution is usually used for the prior, this simple prior incurs over-regularization. As a sophisticated prior, the aggregated posterior has been introduced…
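For context, the objective the abstract refers to can be written in standard VAE notation as follows (a textbook sketch, not an excerpt from the paper): the encoder q_\phi(z|x) is pulled toward the prior p(z) by the KL term of the evidence lower bound (ELBO), and maximizing the data-averaged ELBO with respect to the prior shows that the optimal choice is the aggregated posterior.

\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big)

p^*(z) = \operatorname*{arg\,max}_{p(z)} \; \mathbb{E}_{p_{\mathrm{data}}(x)}\big[\mathcal{L}(\theta, \phi; x)\big] = \mathbb{E}_{p_{\mathrm{data}}(x)}\big[q_\phi(z \mid x)\big]

A fixed standard Gaussian generally cannot equal this aggregated posterior, which is one way to read the over-regularization issue mentioned above.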

Citations

A Contrastive Learning Approach for Training Variational Autoencoder Priors

Noise contrastive priors are proposed that improve the generative performance of state-of-the-art VAEs by a large margin on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ 256 datasets.

Learning Optimal Priors for Task-Invariant Representations in Variational Autoencoders

This study theoretically investigates why the CVAE cannot sufficiently reduce the task-dependency, shows that the simple standard Gaussian prior is one of the causes, and proposes a theoretically optimal prior for reducing the task-dependency.

On the Necessity and Effectiveness of Learning the Prior of Variational Auto-Encoder

This paper proves the necessity and effectiveness of learning the prior when the aggregated posterior does not match the unit Gaussian prior, analyzes why this situation may happen, and proposes a hypothesis that learning the prior may improve reconstruction loss, all of which are supported by extensive experimental results.

NCP-VAE: Variational Autoencoders with Noise Contrastive Priors

Noise contrastive priors are proposed that improve the generative performance of state-of-the-art VAEs by a large margin on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ 256 datasets.

Approximate Inference in Variational Autoencoders

A problem arising from the mismatch between the posteriors of each modality is identified and it is demonstrated how the problem can be largely addressed by modelling the aggregate of the image posteriors.

Data-Dependent Conditional Priors for Unsupervised Learning of Multimodal Data †

This paper proposes a novel formulation of variational autoencoders, the conditional prior VAE (CP-VAE), with a two-level generative process for the observed data, in which a continuous variable z and a discrete variable c are introduced in addition to the observed variables x.

Prior latent distribution comparison for the RNN Variational Autoencoder in low-resource language modeling

The experiments show that there is a statistical difference between the different priors in the encoder-decoder architecture, and that the prior distribution family is an important hyperparameter in the low-resource language modeling task that should be considered during model training.

VAEPP: Variational Autoencoder with a Pull-Back Prior

A novel learnable prior for VAEs is proposed that adjusts the density of the prior through a discriminator capable of assessing data quality, borrowing the discriminator from the theory of GANs to enrich the prior in VAEs.

To Regularize or Not To Regularize? The Bias Variance Trade-off in Regularized AEs

It is shown that, given a Gaussian decoder, there is no single fixed prior that is optimal for all data distributions, and that prior imposition comes with a bias-variance trade-off; accordingly, a generalized ELBO objective with an additional state space over the latent prior is optimized.

Encoded Prior Sliced Wasserstein AutoEncoder for learning latent manifold representations

This work introduces an Encoded Prior Sliced Wasserstein AutoEncoder (EPSWAE), wherein an additional prior-encoder network learns an unconstrained prior to match the encoded data manifold, and applies it to 3D-spiral, MNIST, and CelebA datasets, showing that its latent representations and interpolations are comparable to the state of the art on equivalent architectures.

References

Showing 1-10 of 44 references

Importance Weighted Autoencoders

The importance weighted autoencoder (IWAE) is a generative model with the same architecture as the VAE but uses a strictly tighter log-likelihood lower bound derived from importance weighting; it is shown empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
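For reference, the importance-weighted bound mentioned in this summary has the following standard form (sketched in the usual notation, not quoted from the paper); it is at least as tight as the ELBO, non-decreasing in the number of samples K, and recovers the ordinary ELBO at K = 1.

\mathcal{L}_K(x) = \mathbb{E}_{z_1, \dots, z_K \sim q_\phi(z \mid x)}\!\left[\log \frac{1}{K} \sum_{k=1}^{K} \frac{p_\theta(x, z_k)}{q_\phi(z_k \mid x)}\right] \le \log p_\theta(x)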

Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks

Adversarial Variational Bayes (AVB) is a technique for training Variational Autoencoders with arbitrarily expressive inference models: an auxiliary discriminative network allows the maximum-likelihood problem to be rephrased as a two-player game, establishing a principled connection between VAEs and Generative Adversarial Networks (GANs).
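The density-ratio identity behind this construction, in standard notation (a sketch, not quoted from the paper): a discriminator T(x, z) trained with logistic loss to distinguish pairs sampled from q_\phi(z \mid x) p_\mathcal{D}(x) from pairs sampled from p(z) p_\mathcal{D}(x) recovers, at its optimum, exactly the log-ratio needed for the otherwise intractable KL term.

T^*(x, z) = \log q_\phi(z \mid x) - \log p(z), \qquad D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big) = \mathbb{E}_{q_\phi(z \mid x)}\big[T^*(x, z)\big]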

VAE with a VampPrior

This paper proposes to extend the variational auto-encoder (VAE) framework with a new type of prior called "Variational Mixture of Posteriors" prior, or VampPrior for short, which consists of a mixture distribution with components given by variational posteriors conditioned on learnable pseudo-inputs.
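Concretely, in standard notation (a sketch rather than a quotation), the VampPrior replaces the fixed prior with a mixture of the variational posterior evaluated at K learnable pseudo-inputs u_1, \dots, u_K, trained jointly with the encoder and decoder:

p_\lambda(z) = \frac{1}{K} \sum_{k=1}^{K} q_\phi(z \mid u_k)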

Hyperspherical Variational Auto-Encoders

This work proposes using a von Mises-Fisher distribution instead of a Gaussian distribution for both the prior and posterior of the Variational Auto-Encoder, leading to a hyperspherical latent space.
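For completeness, the von Mises-Fisher density on the unit hypersphere S^{d-1} that replaces the Gaussian here has the standard form below (textbook definition, not quoted from the paper), with mean direction \mu, concentration \kappa, and a normalizing constant involving the modified Bessel function of the first kind I_\nu:

q(z \mid \mu, \kappa) = \mathcal{C}_d(\kappa) \exp\big(\kappa\, \mu^{\top} z\big), \qquad \mathcal{C}_d(\kappa) = \frac{\kappa^{d/2 - 1}}{(2\pi)^{d/2}\, I_{d/2 - 1}(\kappa)}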

Auto-Encoding Variational Bayes

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.

Improving Variational Auto-Encoders using Householder Flow

This paper proposes a volume-preserving VAE that uses a series of Householder transformations and shows empirically on the MNIST dataset and histopathology data that the proposed flow yields a more flexible variational posterior and highly competitive results compared to other normalizing flows.

Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders

It is shown that a heuristic called the minimum information constraint, previously shown to mitigate over-regularization in VAEs, can also be applied to improve unsupervised clustering performance with this variant of the variational autoencoder, which uses a Gaussian mixture as the prior distribution.

Adversarial Autoencoders

This paper shows how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction, and data visualization, and reports experiments on the MNIST, Street View House Numbers, and Toronto Face datasets.

Distribution Matching in Variational Inference

It is concluded that at present, VAE-GAN hybrids have limited applicability: they are harder to scale, evaluate, and use for inference compared to VAEs; and they do not improve over the generation quality of GANs.

Neural Discrete Representation Learning

Pairing the learned discrete representations with an autoregressive prior, the model can generate high-quality images, videos, and speech, perform high-quality speaker conversion, and learn phonemes without supervision, providing further evidence of the utility of the learnt representations.