Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness

@article{Shen2021RegularizingVA,
  title={Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness},
  author={Dazhong Shen and Chuan Qin and Chao Wang and Hengshu Zhu and Enhong Chen and Hui Xiong},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.12381}
}
As one of the most popular generative models, Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference. However, when the decoder network is sufficiently expressive, VAE may suffer from posterior collapse; that is, uninformative latent representations may be learned. To this end, in this paper, we propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space, and thus the representation can be learned…
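For context on the posterior-collapse problem the abstract refers to, below is a minimal PyTorch sketch of the diagonal-Gaussian KL term in the standard VAE objective; collapse manifests as this term going to zero for (almost) every input, so the latent code carries no information about x. Tensor shapes and the function name are illustrative, not the paper's code.

```python
# Minimal sketch (PyTorch, illustrative names): the diagonal-Gaussian KL term of a VAE.
# Posterior collapse shows up here: KL(q(z|x) || p(z)) -> 0, i.e. mu -> 0 and logvar -> 0
# for (almost) every input, so z tells the decoder nothing about x.
import torch

def gaussian_kl(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=-1)

# Example: a "collapsed" posterior is indistinguishable from the prior.
mu = torch.zeros(4, 32)         # batch of 4, 32 latent dimensions
logvar = torch.zeros(4, 32)
print(gaussian_kl(mu, logvar))  # tensor([0., 0., 0., 0.]) -- no information in z
```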

Topic Modeling Revisited: A Document Graph-based Neural Network Perspective

This paper revisits the task of topic modeling by transforming each document into a directed graph with word dependencies as edges between word nodes, and develops a novel approach, namely the Graph Neural Topic Model (GNTM).
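As a rough illustration of the document-graph construction described above, here is a hedged sketch that builds a directed word graph from dependency parses; the spaCy model, the use of networkx, and the edge convention are assumptions of this sketch, not the GNTM pipeline.

```python
# Hedged sketch: one plausible way to turn a document into a directed word graph with
# dependency arcs as edges, in the spirit of the GNTM description above.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def document_to_graph(text: str) -> nx.DiGraph:
    graph = nx.DiGraph()
    doc = nlp(text)
    for token in doc:
        graph.add_node(token.i, word=token.text)
        if token.head.i != token.i:             # skip the root's self-loop
            graph.add_edge(token.head.i, token.i, dep=token.dep_)
    return graph

g = document_to_graph("Topic models summarize large document collections.")
print(g.number_of_nodes(), g.number_of_edges())
```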

References

Showing 1-10 of 42 references

A Batch Normalized Inference Network Keeps the KL Vanishing Away

Batch Normalized VAE (BN-VAE) is a simple but effective approach that sets a lower bound on the expected KL by regularizing the distribution of the approximate posterior's parameters; it surpasses strong autoregressive baselines on language modeling, text classification, and dialogue generation, and rivals more complex approaches while keeping almost the same training time as a plain VAE.
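A minimal sketch of the batch-normalization idea as summarized above: the posterior means are passed through a BatchNorm layer whose scale is fixed, so their batch statistics cannot shrink toward zero and the expected KL stays bounded away from zero. The latent dimension and the value of gamma are illustrative assumptions.

```python
# Minimal sketch: batch-normalize the posterior means with a fixed scale gamma so that
# across a batch E[mu^2] stays near gamma^2 and the expected KL cannot vanish.
import torch
import torch.nn as nn

latent_dim, gamma = 32, 0.6

mu_bn = nn.BatchNorm1d(latent_dim, affine=True)
mu_bn.weight.data.fill_(gamma)        # fix the scale ...
mu_bn.weight.requires_grad_(False)    # ... so training cannot undo the constraint

def regularized_posterior_params(mu, logvar):
    return mu_bn(mu), logvar          # only the means are batch-normalized

mu, logvar = torch.randn(16, latent_dim), torch.randn(16, latent_dim)
mu_hat, _ = regularized_posterior_params(mu, logvar)
print(mu_hat.pow(2).mean().item())    # roughly gamma^2 on average
```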

Lagging Inference Networks and Posterior Collapse in Variational Autoencoders

This paper investigates posterior collapse from the perspective of training dynamics and proposes an extremely simple modification to VAE training to reduce inference lag: depending on the model's current mutual information between latent variable and observation, the inference network is optimized before performing each model update.
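A hedged sketch of that "aggressive" training loop: the encoder alone is updated until a mutual-information estimate stops improving, then one ordinary joint update follows. All names (model, estimate_mi, the optimizers) are placeholders, not the authors' implementation.

```python
# Hedged sketch of aggressive inference-network training to reduce inference lag.
def aggressive_step(model, batch, enc_opt, joint_opt, estimate_mi, max_inner=30):
    best_mi = -float("inf")
    for _ in range(max_inner):                 # inner loop: encoder-only updates
        enc_opt.zero_grad()
        loss = model.neg_elbo(batch)           # placeholder: negative ELBO of the VAE
        loss.backward()
        enc_opt.step()
        mi = estimate_mi(model, batch)         # placeholder MI(z; x) estimator
        if mi <= best_mi:                      # inference network has caught up
            break
        best_mi = mi
    joint_opt.zero_grad()                      # then one ordinary joint update
    model.neg_elbo(batch).backward()
    joint_opt.step()
```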

InfoVAE: Balancing Learning and Inference in Variational Autoencoders

It is shown that the proposed InfoVAE model can significantly improve the quality of the variational posterior and can make effective use of the latent features regardless of the flexibility of the decoding distribution.
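One divergence commonly used to instantiate InfoVAE-style objectives is a sample-based MMD between latent codes and prior samples; the sketch below computes an RBF-kernel MMD. The kernel bandwidth and sample sizes are illustrative assumptions.

```python
# Hedged sketch: RBF-kernel MMD between aggregated-posterior samples and prior samples,
# the kind of term added to the objective to match q(z) to p(z).
import torch

def rbf_kernel(a, b, bandwidth=1.0):
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2 * bandwidth ** 2))

def mmd(z_q, z_p, bandwidth=1.0):
    k_qq = rbf_kernel(z_q, z_q, bandwidth).mean()
    k_pp = rbf_kernel(z_p, z_p, bandwidth).mean()
    k_qp = rbf_kernel(z_q, z_p, bandwidth).mean()
    return k_qq + k_pp - 2 * k_qp

z_q = torch.randn(128, 16) + 0.5       # pretend posterior samples (shifted)
z_p = torch.randn(128, 16)             # prior samples
print(mmd(z_q, z_p).item())            # > 0 when q(z) and p(z) differ
```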

Variational Lossy Autoencoder

This paper presents a simple but principled method to learn global representations by combining Variational Autoencoder (VAE) with neural autoregressive models such as RNN, MADE, and PixelRNN/CNN, which greatly improves the generative modeling performance of VAEs.

Preventing Posterior Collapse with delta-VAEs

This paper proposes an alternative that utilizes the most powerful generative models as decoders while optimizing the variational lower bound and ensuring that the latent variables preserve and encode useful information.

beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence.
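The beta-VAE objective itself is just the ELBO with the KL term up-weighted by a factor beta > 1; a minimal sketch follows, with the reconstruction term left as a placeholder (any likelihood works).

```python
# Minimal sketch of the beta-VAE loss: reconstruction NLL plus beta-weighted Gaussian KL.
import torch

def beta_vae_loss(recon_nll: torch.Tensor, mu: torch.Tensor,
                  logvar: torch.Tensor, beta: float = 4.0) -> torch.Tensor:
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=-1)
    return (recon_nll + beta * kl).mean()      # beta = 1 recovers the plain VAE
```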

Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing

A cyclical annealing schedule is proposed, which simply repeats the process of increasing β multiple times and allows the model to learn more meaningful latent codes progressively by leveraging the results of previous learning cycles as warm restarts.
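A hedged sketch of such a cyclical schedule: within each cycle, β ramps linearly from 0 to 1 and then stays at 1. The cycle length and ramp proportion are illustrative hyperparameters, not the paper's settings.

```python
# Hedged sketch of a cyclical KL-annealing schedule for the KL weight beta.
def cyclical_beta(step: int, cycle_len: int = 10_000, ramp_ratio: float = 0.5) -> float:
    pos = (step % cycle_len) / cycle_len       # position within the current cycle
    return min(1.0, pos / ramp_ratio)          # linear ramp, then flat at 1.0

betas = [cyclical_beta(t, cycle_len=8) for t in range(16)]
print(betas)   # two identical cycles: 0.0, 0.25, 0.5, 0.75, 1.0, 1.0, ..., then repeats
```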

MADE: Masked Autoencoder for Distribution Estimation

This work introduces a simple modification for autoencoder neural networks that yields powerful generative models and proves that this approach is competitive with state-of-the-art tractable distribution estimators.
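The core of MADE is a set of binary masks that make an ordinary autoencoder autoregressive: output d may depend only on inputs 1..d-1. Below is a minimal single-hidden-layer sketch of the mask construction; keeping the connectivity degrees fixed rather than resampled is a simplifying assumption of this sketch.

```python
# Hedged sketch of MADE's mask construction for one hidden layer.
import numpy as np

def made_masks(n_in: int, n_hidden: int, rng=np.random.default_rng(0)):
    deg_in = np.arange(1, n_in + 1)                          # input d has degree d
    deg_hid = rng.integers(1, n_in, size=n_hidden)           # hidden degrees in 1..n_in-1
    mask_in_to_hid = (deg_hid[:, None] >= deg_in[None, :])   # shape (n_hidden, n_in)
    mask_hid_to_out = (deg_in[:, None] > deg_hid[None, :])   # shape (n_in, n_hidden)
    return mask_in_to_hid.astype(float), mask_hid_to_out.astype(float)

m1, m2 = made_masks(n_in=4, n_hidden=8)
# The composed connectivity shows that output d can only "see" inputs with index < d.
print((m2 @ m1 > 0).astype(int))
```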

Implicit Deep Latent Variable Models for Text Generation

A latent variable model (LVM) that directly matches the aggregated posterior to the prior is developed, which can be viewed as a natural extension of VAEs with a regularization that maximizes mutual information, mitigating the "posterior collapse" issue.

A Hybrid Convolutional Variational Autoencoder for Text Generation

A novel hybrid architecture that blends fully feed-forward convolutional and deconvolutional components with a recurrent language model is proposed, which helps to avoid the issue of the VAE collapsing to a deterministic model.