Corpus ID: 238856835

The Neglected Sibling: Isotropic Gaussian Posterior for VAE

Lan Zhang, Wray L. Buntine, Ehsan Shareghi
Deep generative models have been widely used in several areas of NLP, and various techniques have been proposed to augment them or address their training challenges. In this paper, we propose a simple modification to Variational Autoencoders (VAEs) by using an Isotropic Gaussian Posterior (IGP) that allows for better utilisation of their latent representation space. This model avoids the sub-optimal behavior of VAEs related to inactive dimensions in the representation space. We provide both…
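As an illustrative comparison (function names and parameterisation are ours, not necessarily the paper's), an isotropic Gaussian posterior N(μ, σ²I) shares a single variance scalar across all latent dimensions, while the standard diagonal Gaussian posterior learns one variance per dimension. The KL term of the ELBO against a standard normal prior then simplifies accordingly:

```python
import numpy as np

def kl_diagonal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), per-dimension variances."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def kl_isotropic(mu, log_var_scalar):
    """KL( N(mu, exp(log_var_scalar) * I) || N(0, I) ), one shared variance."""
    d = mu.shape[0]
    var = np.exp(log_var_scalar)
    return 0.5 * (d * var + np.sum(mu**2) - d - d * log_var_scalar)

mu = np.array([0.5, -0.3, 0.1])
# A shared log-variance gives the same KL as a diagonal posterior whose
# log-variance vector is constant across dimensions.
assert np.isclose(kl_isotropic(mu, -0.2), kl_diagonal(mu, np.full(3, -0.2)))
```

The isotropic form removes the per-dimension variance degrees of freedom, which is one way a posterior can avoid switching individual dimensions off.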


A Surprisingly Effective Fix for Deep Latent Variable Modeling of Text
A simple fix for posterior collapse is investigated which yields surprisingly effective results and is used to argue that the typical surrogate objective for VAEs may not be sufficient or necessarily appropriate for balancing the goals of representation learning and data distribution modeling.
Fixing a Broken ELBO
This framework derives variational lower and upper bounds on the mutual information between the input and the latent variable, and uses these bounds to derive a rate-distortion curve that characterizes the tradeoff between compression and reconstruction accuracy.
On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation
This work imposes an explicit constraint on the Kullback-Leibler (KL) divergence term inside the VAE objective function, explores different properties of the estimated posterior distribution, and highlights the trade-off between the amount of information encoded in a latent code during training and the generative capacity of the model.
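One common way to realise such an explicit KL constraint in practice (a sketch under our own assumptions, not necessarily the exact scheme of the paper) is a hinge on a target rate: only KL mass above the target is penalised, so the optimiser has no incentive to collapse the KL to zero:

```python
def constrained_kl_objective(recon_loss, kl, target_rate):
    # Penalize only the portion of the KL above the target rate; below the
    # target, the KL term exerts no pressure on the encoder.
    return recon_loss + max(kl - target_rate, 0.0)

# Below the target rate the KL term contributes nothing...
assert constrained_kl_objective(10.0, 2.0, 3.0) == 10.0
# ...above it, only the excess is penalized.
assert constrained_kl_objective(10.0, 5.0, 3.0) == 12.0
```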
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that learns and reasons the way humans do.
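The β-VAE objective itself is a one-line change to the ELBO: the KL term is scaled by a factor β, and β > 1 pushes the posterior toward the factorised prior (a minimal sketch; loss values here are toy numbers):

```python
def beta_vae_loss(recon_loss, kl, beta):
    # beta = 1 recovers the standard ELBO; beta > 1 up-weights the KL term,
    # trading reconstruction accuracy for a more factorised posterior.
    return recon_loss + beta * kl

assert beta_vae_loss(10.0, 2.0, 1.0) == 12.0  # standard ELBO
assert beta_vae_loss(10.0, 2.0, 4.0) == 18.0  # stronger prior pressure
```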
Importance Weighted Autoencoders
The importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting, shows empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
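The tighter bound comes from averaging k importance weights before taking the log, L_k = E[log (1/k) Σ_i w_i], instead of averaging the log-weights as the single-sample ELBO does. A toy numerical check (log-weights here are arbitrary illustrative values):

```python
import numpy as np

def elbo_estimate(log_w):
    # The ELBO averages the log-weights directly (Jensen's inequality applied).
    return np.mean(log_w)

def iwae_bound(log_w):
    # IWAE: log of the mean weight, computed stably in log space.
    m = np.max(log_w)
    return m + np.log(np.mean(np.exp(log_w - m)))

log_w = np.array([-2.0, -1.0, -3.0])  # toy importance log-weights
assert iwae_bound(log_w) >= elbo_estimate(log_w)  # IWAE bound is tighter
```

When all weights are equal the two estimates coincide; any spread in the weights makes the IWAE bound strictly larger.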
Effective Estimation of Deep Generative Language Models
This work offers a sober view of the problem, surveys techniques to address it, proposes novel techniques and model extensions, and, through a systematic comparison using Bayesian optimisation, finds that many techniques perform reasonably similarly given enough resources.
Adversarially Regularized Autoencoders
This work proposes a flexible method for training deep latent variable models of discrete structures based on the recently-proposed Wasserstein autoencoder (WAE), and shows that the latent representation can be trained to perform unaligned textual style transfer, giving improvements both in automatic/human evaluation compared to existing methods.
Preventing Posterior Collapse with delta-VAEs
This paper proposes an alternative that utilizes the most powerful generative models as decoders while optimising the variational lower bound and ensuring that the latent variables preserve and encode useful information.
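The core δ-VAE idea is to choose posterior and prior families that cannot match exactly, so the KL is bounded below by a committed rate δ > 0. A simplified one-dimensional sketch (the paper's construction is richer, e.g. autoregressive priors): if the posterior standard deviation is fixed at σ ≠ 1, no choice of mean can drive the KL to zero:

```python
import numpy as np

def kl_gauss_1d(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, 1) ) for one latent dimension.
    return 0.5 * (sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

def delta_floor(sigma):
    # With posterior std fixed at sigma != 1, the KL is minimised at mu = 0
    # and still strictly positive: a committed rate delta.
    return kl_gauss_1d(0.0, sigma)

sigma = 0.5
for mu in np.linspace(-2.0, 2.0, 9):
    assert kl_gauss_1d(mu, sigma) >= delta_floor(sigma)
assert delta_floor(sigma) > 0.0  # full posterior collapse is impossible
```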
Understanding disentangling in β-VAE
A modification to the training regime of β-VAE is proposed, that progressively increases the information capacity of the latent code during training, to facilitate the robust learning of disentangled representations in β-VAE, without the previous trade-off in reconstruction accuracy.
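The capacity-annealed objective is commonly written as recon + γ·|KL − C|, where the target capacity C is increased over training so the latent code is gradually allowed to carry more information (a minimal sketch; the numbers are illustrative):

```python
def capacity_beta_vae_loss(recon_loss, kl, gamma, capacity):
    # Penalize deviation of the KL from a target capacity C, which is
    # increased gradually over the course of training.
    return recon_loss + gamma * abs(kl - capacity)

# When the KL sits exactly at the current capacity, no penalty applies.
assert capacity_beta_vae_loss(10.0, 2.0, 5.0, 2.0) == 10.0
# Overshooting (or undershooting) the capacity is penalized symmetrically.
assert capacity_beta_vae_loss(10.0, 5.0, 2.0, 2.0) == 16.0
```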
Improved Variational Autoencoders for Text Modeling using Dilated Convolutions
It is shown that with the right decoder, VAEs can outperform LSTM language models, and perplexity gains are demonstrated on two datasets, representing the first positive experimental result on the use of VAEs for generative modeling of text.