Corpus ID: 76666188

Diagnosing and Enhancing VAE Models

@article{Dai2019DiagnosingAE,
  title={Diagnosing and Enhancing VAE Models},
  author={Bin Dai and David P. Wipf},
  journal={ArXiv},
  year={2019},
  volume={abs/1903.05789}
}
Although variational autoencoders (VAEs) represent a widely influential deep generative model, many aspects of the underlying energy function remain poorly understood. [...] We then leverage the corresponding insights to develop a simple VAE enhancement that requires no additional hyperparameters or sensitive tuning. Quantitatively, this proposal produces crisp samples and stable FID scores that are actually competitive with a variety of GAN models, all while retaining desirable attributes of the…
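For orientation, the sketch below is a minimal Gaussian-VAE baseline of the kind the paper diagnoses: it implements only the standard ELBO objective, not the proposed enhancement, and the layer sizes (`x_dim`, `z_dim`, `h_dim`) as well as the choice of PyTorch are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GaussianVAE(nn.Module):
    """Minimal fully connected Gaussian VAE (illustrative baseline only)."""
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def elbo(self, x):
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_hat = self.dec(z)
        # Gaussian reconstruction term with unit decoder variance
        # (up to an additive constant)
        rec = -0.5 * ((x - x_hat) ** 2).sum(dim=1)
        # Analytic KL divergence between N(mu, diag(sigma^2)) and N(0, I)
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)
        return (rec - kl).mean()
```

Maximizing `elbo` over a dataset gives the vanilla training loop that the diagnosis, and the citing works listed below, take as their starting point.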
Citations
Variational Autoencoders Pursue PCA Directions (by Accident)
TLDR
The diagonal approximation in the encoder, together with the inherent stochasticity, forces local orthogonality of the decoder; this statement is justified with a full theoretical analysis as well as with experiments.
Neighbor Embedding Variational Autoencoder
TLDR
NE-VAE prevents posterior collapse to a much greater extent than its predecessors and can be easily plugged into any autoencoder framework without introducing additional model components or complex training routines.
From Variational to Deterministic Autoencoders
TLDR
It is shown, in a rigorous empirical study, that the proposed regularized deterministic autoencoders are able to generate samples that are comparable to, or better than, those of VAEs and more powerful alternatives when applied to images as well as to structured data such as molecules.
NCP-VAE: Variational Autoencoders with Noise Contrastive Priors
TLDR
Noise contrastive priors are proposed that improve the generative performance of state-of-the-art VAEs by a large margin on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ 256 datasets.
VAE Approximation Error: ELBO and Exponential Families
The importance of Variational Autoencoders reaches far beyond standalone generative models: the approach is also used for learning latent representations and can be generalized to semi-supervised…
Jigsaw-VAE: Towards Balancing Features in Variational Autoencoders
TLDR
A regularization scheme for VAEs is proposed that substantially addresses the feature imbalance problem, together with a simple metric to measure the balance of features in generated images.
Regularized Autoencoders via Relaxed Injective Probability Flow
TLDR
A generative model based on probability flows is proposed that does away with the bijectivity requirement and only assumes injectivity; this provides another perspective on regularized autoencoders (RAEs), with the final objectives resembling RAEs equipped with specific regularizers derived by lower bounding the probability flow objective.
Preventing Posterior Collapse Induced by Oversmoothing in Gaussian VAE
TLDR
This work proposes AR-ELBO (Adaptively Regularized Evidence Lower BOund), which controls the smoothness of the model by adapting the decoder variance parameter, and extends the VAE with alternative parameterizations of the variance to handle non-uniform or conditional data variance.
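As background for the role of the variance parameter, the ELBO of a Gaussian-decoder VAE can be written with the decoder variance $\gamma$ made explicit; this is a standard identity in the usual VAE notation, not the AR-ELBO objective itself:

$$\mathcal{L}(x) = -\frac{1}{2\gamma}\,\mathbb{E}_{q_\phi(z\mid x)}\!\left[\lVert x - \mu_\theta(z)\rVert^2\right] - \frac{d}{2}\log(2\pi\gamma) - \mathrm{KL}\!\left(q_\phi(z\mid x)\,\Vert\,p(z)\right)$$

A large $\gamma$ (oversmoothing) downweights the reconstruction term relative to the KL term, which is the regime in which the posterior tends to collapse; adapting $\gamma$ rebalances the two terms.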
Hidden Talents of the Variational Autoencoder.
TLDR
It is demonstrated that the VAE can be viewed as the natural evolution of recent robust PCA models, capable of learning nonlinear manifolds of unknown dimension obscured by gross corruptions.

References

SHOWING 1-10 OF 42 REFERENCES
VAE with a VampPrior
TLDR
This paper proposes to extend the variational auto-encoder (VAE) framework with a new type of prior called "Variational Mixture of Posteriors" prior, or VampPrior for short, which consists of a mixture distribution with components given by variational posteriors conditioned on learnable pseudo-inputs.
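The mixture-of-posteriors prior can be stated compactly, using the standard VampPrior notation with $K$ learnable pseudo-inputs $u_k$ in data space and encoder $q_\phi$:

$$p_\lambda(z) = \frac{1}{K}\sum_{k=1}^{K} q_\phi\!\left(z \mid u_k\right)$$

The pseudo-inputs are trained jointly with the encoder and decoder, so the prior adapts to the aggregate posterior rather than being fixed to $\mathcal{N}(0, I)$.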
Connections with Robust PCA and the Role of Emergent Sparsity in Variational Autoencoder Models
TLDR
It is demonstrated that the VAE can be viewed as the natural evolution of recent robust PCA models, capable of learning nonlinear manifolds of unknown dimension obscured by gross corruptions.
Importance Weighted Autoencoders
TLDR
The importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting, shows empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
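In standard VAE notation, with $K$ latent samples drawn from the encoder, the importance weighted bound reads:

$$\mathcal{L}_K(x) = \mathbb{E}_{z_1,\dots,z_K \sim q_\phi(z\mid x)}\!\left[\log \frac{1}{K}\sum_{k=1}^{K}\frac{p_\theta(x, z_k)}{q_\phi(z_k \mid x)}\right] \le \log p_\theta(x)$$

Setting $K = 1$ recovers the usual ELBO, and the bound tightens monotonically as $K$ grows.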
Variational Lossy Autoencoder
TLDR
This paper presents a simple but principled method to learn global representations by combining the Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE, and PixelRNN/CNN, which greatly improves the generative modeling performance of VAEs.
Improved Training of Wasserstein GANs
TLDR
This work proposes an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
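A minimal sketch of the gradient penalty described above, written in PyTorch; `critic`, `real`, and `fake` are assumed to be a callable critic network and matching batches of real and generated samples, and `lam=10.0` is the commonly used coefficient rather than anything prescribed here.

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """Penalize deviations of the critic's input-gradient norm from 1."""
    # Random interpolation points between real and generated samples
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    d_hat = critic(x_hat)
    # Gradient of the critic output with respect to its input
    grads = torch.autograd.grad(outputs=d_hat.sum(), inputs=x_hat,
                                create_graph=True)[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()
```

The penalty is simply added to the critic loss; unlike weight clipping, it does not constrain the critic's parameters directly.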
Are GANs Created Equal? A Large-Scale Study
TLDR
A neutral, multi-faceted large-scale empirical study on state-of-the-art models and evaluation measures finds that most models can reach similar scores with enough hyperparameter optimization and random restarts, suggesting that improvements can arise from a higher computational budget and tuning more than from fundamental algorithmic changes.
Adversarially Regularized Autoencoders
TLDR
This work proposes a flexible method for training deep latent variable models of discrete structures based on the recently proposed Wasserstein autoencoder (WAE), and shows that the latent representation can be trained to perform unaligned textual style transfer, giving improvements in both automatic and human evaluation compared to existing methods.
Hyperspherical Variational Auto-Encoders
TLDR
This work proposes using a von Mises-Fisher distribution instead of a Gaussian distribution for both the prior and posterior of the Variational Auto-Encoder, leading to a hyperspherical latent space.
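For reference, the von Mises-Fisher density placed on the unit sphere $S^{d-1}$ has the form

$$q(z \mid \mu, \kappa) = \mathcal{C}_d(\kappa)\,\exp\!\left(\kappa\,\mu^\top z\right), \qquad \lVert\mu\rVert = 1,\ \kappa \ge 0,$$

where $\mathcal{C}_d(\kappa)$ is a normalizing constant involving a modified Bessel function of the first kind; the uniform distribution on the sphere corresponds to $\kappa = 0$.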
Isolating Sources of Disentanglement in Variational Autoencoders
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our $\beta$-TCVAE (Total Correlation Variational Autoencoder)…
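The decomposition referred to above can be written, with $q(z) = \mathbb{E}_{p(x)}[q_\phi(z\mid x)]$ denoting the aggregate posterior, as

$$\mathbb{E}_{p(x)}\!\left[\mathrm{KL}\!\left(q_\phi(z\mid x)\,\Vert\,p(z)\right)\right] = I_q(x; z) + \mathrm{KL}\!\Big(q(z)\,\Big\Vert\,\prod_j q(z_j)\Big) + \sum_j \mathrm{KL}\!\left(q(z_j)\,\Vert\,p(z_j)\right)$$

where the middle term is the total correlation that $\beta$-TCVAE upweights.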
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence…
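Through a Lagrange multiplier, the constrained framework reduces to the familiar $\beta$-weighted objective in standard VAE notation, with $\beta > 1$ tightening the constraint on the latent channel:

$$\mathcal{L}(\theta, \phi; x, \beta) = \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x \mid z)\right] - \beta\,\mathrm{KL}\!\left(q_\phi(z \mid x)\,\Vert\,p(z)\right)$$

Setting $\beta = 1$ recovers the standard VAE.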