Variational Autoencoder with Implicit Optimal Priors

@inproceedings{Takahashi2018VariationalAW,
  title={Variational Autoencoder with Implicit Optimal Priors},
  author={Hiroshi Takahashi and Tomoharu Iwata and Yuki Yamanaka and Masanori Yamada and Satoshi Yagi},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2018}
}
The variational autoencoder (VAE) is a powerful generative model that can estimate the probability of a data point by using latent variables. In the VAE, the posterior of the latent variable given the data point is regularized by the prior of the latent variable using the Kullback-Leibler (KL) divergence. Although the standard Gaussian distribution is usually used for the prior, this simple prior incurs over-regularization. As a sophisticated prior, the aggregated posterior has been introduced…
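
For reference, the VAE objective and the aggregated posterior mentioned above can be summarized as follows (the notation here is a standard sketch, not necessarily the paper's own):

    \mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\left(q_\phi(z \mid x) \,\|\, p(z)\right)

    q_\phi^{\mathrm{agg}}(z) = \frac{1}{N} \sum_{n=1}^{N} q_\phi(z \mid x_n)

Averaging the objective over the training set and maximizing it with respect to p(z) alone gives p^*(z) = q_\phi^{\mathrm{agg}}(z), which is why the aggregated posterior is regarded as the ELBO-optimal prior; a fixed standard Gaussian p(z) = \mathcal{N}(0, I) instead pulls every q_\phi(z \mid x) toward the same distribution, which is the over-regularization noted above.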

Citations

A Contrastive Learning Approach for Training Variational Autoencoder Priors

Noise contrastive priors are proposed that improve the generative performance of state-of-the-art VAEs by a large margin on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ 256 datasets.

Learning Optimal Priors for Task-Invariant Representations in Variational Autoencoders

This study theoretically investigates why the CVAE cannot sufficiently reduce task-dependency, shows that the simple standard Gaussian prior is one of the causes, and proposes a theoretically optimal prior for reducing the task-dependency.

On the Necessity and Effectiveness of Learning the Prior of Variational Auto-Encoder

This paper proves the necessity and effectiveness of learning the prior when the aggregated posterior does not match the unit Gaussian prior, analyzes why this situation may happen, and proposes the hypothesis that learning the prior may improve reconstruction loss, all of which are supported by extensive experimental results.

Approximate Inference in Variational Autoencoders

A problem arising from the mismatch between the posteriors of each modality is identified and it is demonstrated how the problem can be largely addressed by modelling the aggregate of the image posteriors.

On the Importance of Learning Aggregate Posteriors in Multimodal Variational Autoencoders

The importance of learning aggregate posteriors when faced with these types of distribution mismatches is highlighted, which is demonstrated on modified versions of the CLEVR and CelebA datasets.

Prior latent distribution comparison for the RNN Variational Autoencoder in low-resource language modeling

The experiments show that there is a statistical difference between the different priors in the encoder-decoder architecture, and that the choice of prior distribution family is an important hyperparameter in the low-resource language modeling task that should be considered when training the model.

VAEPP: Variational Autoencoder with a Pull-Back Prior

A novel learnable prior is proposed for VAEs that adjusts the density of the prior through a discriminator able to assess the quality of data, borrowing the discriminator from the theory of GANs to enrich the prior in VAEs.

To Regularize or Not To Regularize? The Bias Variance Trade-off in Regularized AEs

It is shown that, given a Gaussian decoder, there is no single fixed prior that is optimal for all data distributions, and that prior imposition comes with a bias-variance trade-off, so a generalized ELBO objective is optimized with an additional state space over the latent prior.

Encoded Prior Sliced Wasserstein AutoEncoder for learning latent manifold representations

This work introduces an Encoded Prior Sliced Wasserstein AutoEncoder (EPSWAE), wherein an additional prior-encoder network learns an unconstrained prior to match the encoded data manifold, and applies it to 3D-spiral, MNIST, and CelebA datasets, showing that its latent representations and interpolations are comparable to the state of the art on equivalent architectures.
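
As a brief sketch of the underlying distance (our summary, not the paper's statement): the sliced Wasserstein distance compares two distributions \mu, \nu on \mathbb{R}^d by averaging one-dimensional Wasserstein distances between their projections onto directions \theta on the unit sphere,

    \mathrm{SW}_p^p(\mu, \nu) = \int_{S^{d-1}} W_p^p\left(\theta_\sharp \mu,\ \theta_\sharp \nu\right) d\sigma(\theta),

where \theta_\sharp \mu denotes the pushforward of \mu under x \mapsto \langle \theta, x \rangle; in practice the integral is approximated with a finite number of random projections, and each one-dimensional W_p is computed by sorting.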

References

SHOWING 1-10 OF 44 REFERENCES

Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks

Adversarial Variational Bayes (AVB) is a technique for training variational autoencoders with arbitrarily expressive inference models by introducing an auxiliary discriminative network that allows the maximum-likelihood problem to be rephrased as a two-player game, thereby establishing a principled connection between VAEs and Generative Adversarial Networks (GANs).
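
As a brief sketch of the density-ratio idea behind AVB (our paraphrase): a discriminator T(x, z) is trained to distinguish pairs sampled from q_\phi(z \mid x) from pairs sampled from the prior p(z), and at its optimum it recovers exactly the log-density ratio that appears in the KL term of the ELBO,

    T^*(x, z) = \log q_\phi(z \mid x) - \log p(z),
    \qquad
    \mathrm{KL}\left(q_\phi(z \mid x) \,\|\, p(z)\right) = \mathbb{E}_{q_\phi(z \mid x)}\left[T^*(x, z)\right],

so the KL divergence can be estimated even when q_\phi(z \mid x) has no tractable density.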

VAE with a VampPrior

This paper proposes to extend the variational auto-encoder (VAE) framework with a new type of prior called "Variational Mixture of Posteriors" prior, or VampPrior for short, which consists of a mixture distribution with components given by variational posteriors conditioned on learnable pseudo-inputs.
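
Concretely, the VampPrior replaces the standard Gaussian with a mixture of the encoder evaluated at K learnable pseudo-inputs u_1, \dots, u_K (our notation):

    p_\lambda(z) = \frac{1}{K} \sum_{k=1}^{K} q_\phi(z \mid u_k),

so the prior is tied to the variational posterior family and is learned jointly with the rest of the model.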

Hyperspherical Variational Auto-Encoders

This work proposes using a von Mises-Fisher distribution instead of a Gaussian distribution for both the prior and posterior of the Variational Auto-Encoder, leading to a hyperspherical latent space.
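
The von Mises-Fisher distribution referred to here places its mass on the unit hypersphere S^{d-1}; for a mean direction \mu (with \|\mu\| = 1) and concentration \kappa \ge 0 its density is

    q(z \mid \mu, \kappa) = C_d(\kappa) \exp\left(\kappa \, \mu^{\top} z\right), \qquad z \in S^{d-1},

where C_d(\kappa) is a normalizing constant involving modified Bessel functions; with \kappa = 0 it reduces to the uniform distribution on the sphere.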

Improving Variational Auto-Encoders using Householder Flow

This paper proposes a volume-preserving VAE that uses a series of Householder transformations and shows empirically, on the MNIST dataset and histopathology data, that the proposed flow yields a more flexible variational posterior and highly competitive results compared to other normalizing flows.
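
For reference, a single Householder transformation and its key property are (our notation):

    H_t = I - 2 \frac{v_t v_t^{\top}}{\|v_t\|^2}, \qquad z_t = H_t z_{t-1},

where v_t is a learned vector; H_t is orthogonal, so |\det H_t| = 1 and the flow is volume-preserving, meaning no Jacobian correction term is needed in the ELBO.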

Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders

It is shown that a heuristic called the minimum information constraint, which has been shown to mitigate the over-regularization effect in VAEs, can also be applied to improve unsupervised clustering performance with this variant of the variational autoencoder, which uses a Gaussian mixture as its prior distribution.
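
As a rough sketch (the exact parameterization in the cited paper differs, since it conditions the mixture parameters on further latent variables), the prior is a Gaussian mixture, and one common formulation of a minimum information constraint floors the KL penalty so the encoder can use at least \lambda nats for free:

    p(z) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}\left(z;\ \mu_k,\ \mathrm{diag}(\sigma_k^2)\right),
    \qquad
    \mathcal{L} = \mathbb{E}_{q(z \mid x)}\left[\log p(x \mid z)\right] - \max\left(\lambda,\ \mathrm{KL}\left(q(z \mid x) \,\|\, p(z)\right)\right).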

Distribution Matching in Variational Inference

It is concluded that at present, VAE-GAN hybrids have limited applicability: they are harder to scale, evaluate, and use for inference compared to VAEs; and they do not improve over the generation quality of GANs.

Variational Inference with Normalizing Flows

It is demonstrated that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.
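
The core construction is a change of variables: a simple initial posterior sample is passed through a chain of invertible maps f_1, \dots, f_K, and the log-density is tracked through the Jacobians,

    z_K = f_K \circ \cdots \circ f_1(z_0), \qquad z_0 \sim q_0(z_0 \mid x),

    \log q_K(z_K \mid x) = \log q_0(z_0 \mid x) - \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k}{\partial z_{k-1}} \right|,

which lets a much more flexible approximate posterior be used inside the ELBO while keeping its density computable.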

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
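
For reference, the Adam update for parameters \theta with gradient g_t, step size \alpha, decay rates \beta_1, \beta_2, and stability constant \epsilon is

    m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2,

    \hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \qquad \theta_t = \theta_{t-1} - \alpha \, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}.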

Wasserstein Auto-Encoders

The Wasserstein Auto-Encoder (WAE) is proposed---a new algorithm for building a generative model of the data distribution that shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality, as measured by the FID score.
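
In outline, the WAE objective penalizes a divergence between the aggregated (encoded) latent distribution Q_Z and the prior P_Z, rather than a per-sample KL term (our notation):

    \min_{Q(Z \mid X)}\ \mathbb{E}_{P_X} \mathbb{E}_{Q(Z \mid X)}\left[c\left(X, G(Z)\right)\right] + \lambda \, \mathcal{D}_Z\left(Q_Z, P_Z\right),

where c is a reconstruction cost, G is the decoder, and \mathcal{D}_Z is instantiated as either a GAN-based or an MMD penalty.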

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
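
The two-player game in this framework is the minimax objective

    \min_G \max_D\ \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right],

in which D is trained to tell real data from samples G(z) while G is trained to fool it.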