• Corpus ID: 11663659

Discrete Variational Autoencoders

@article{Rolfe2017DiscreteVA,
  title={Discrete Variational Autoencoders},
  author={Jason Tyler Rolfe},
  journal={ArXiv},
  year={2017},
  volume={abs/1609.02200}
}
  • J. Rolfe
  • Published 7 September 2016
  • Computer Science
  • ArXiv
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises… 
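As a concrete illustration of the idea in the abstract, the sketch below backpropagates through a binary latent unit by sampling a smoothed continuous proxy rather than the discrete variable itself. It assumes a spike-and-exponential smoothing in the spirit of the transformation the paper proposes: with probability 1-q the smoothed value is exactly 0 (the spike), and with probability q it follows a truncated exponential exp(βζ) on [0, 1]. The function name, the PyTorch framing, and the choice of β are illustrative assumptions, not the paper's reference implementation.

import math
import torch

def spike_and_exp_sample(q, beta=5.0):
    # Illustrative sketch, not the paper's reference implementation.
    # Inverse-CDF sample of the smoothed latent zeta given the Bernoulli
    # probability q = p(z = 1). Under the assumed mixture, the marginal CDF is
    #   F(zeta) = (1 - q) + q * (exp(beta * zeta) - 1) / (exp(beta) - 1),  zeta in [0, 1],
    # so applying F^{-1} to rho ~ Uniform(0, 1) yields a sample that is a
    # deterministic, almost-everywhere differentiable function of q.
    rho = torch.rand_like(q)
    scale = math.exp(beta) - 1.0
    arg = (rho + q - 1.0) / q * scale + 1.0
    # Draws with rho < 1 - q fall in the spike: the clamp keeps zeta = 0 there.
    zeta = torch.log(torch.clamp(arg, min=1.0)) / beta
    return zeta

# Minimal usage: gradients flow from a downstream loss back into the
# logits that define q, which is the point of the smoothing transformation.
logits = torch.zeros(4, requires_grad=True)
q = torch.sigmoid(logits)
zeta = spike_and_exp_sample(q)
zeta.sum().backward()
print(logits.grad)

Because the sample is produced by an inverse CDF applied to a uniform draw, any downstream reconstruction loss can push gradients back into the encoder that produces the logits, while samples that land in the spike contribute zero gradient.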
DVAE++: Discrete Variational Autoencoders with Overlapping Transformations
TLDR
DVAE++ is developed, a generative model with a global discrete prior and a hierarchy of convolutional continuous variables, and a new variational bound to efficiently train with Boltzmann machine priors is derived.
Continuous Relaxation Training of Discrete Latent Variable Image Models
TLDR
This work evaluates several approaches to training large-scale image models on CIFAR-10 using a probabilistic variant of the recently proposed Vector Quantized VAE architecture and finds that biased estimators such as continuous relaxations provide reliable methods for training these models while unbiased score-function-based estimators like VIMCO struggle in high-dimensional discrete spaces.
A RAD approach to deep mixture models
TLDR
This Real and Discrete (RAD) approach retains the desirable normalizing flow properties of exact sampling, exact inference, and analytically computable probabilities, while at the same time allowing simultaneous modeling of both continuous and discrete structure in a data distribution.
Variational Sparse Coding
TLDR
A model based on variational auto-encoders is proposed in which interpretation is induced through latent-space sparsity, using a mixture of spike and slab distributions as the prior; it provides unique capabilities, such as recovering feature exploitation, synthesising samples that share attributes with a given input object, and controlling both discrete and continuous features upon generation.
Direct Evolutionary Optimization of Variational Autoencoders With Binary Latents
TLDR
The studied approach shows that training of VAEs is indeed possible without sampling-based approximation and reparameterization, and makes VAEs competitive where they have previously been outperformed by non-generative approaches.
Piecewise Latent Variables for Neural Variational Text Processing
TLDR
This work proposes the simple, but highly flexible, piecewise constant distribution, which has the capacity to represent an exponential number of modes of a latent target distribution, while remaining mathematically tractable.
BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling
TLDR
This paper introduces the Bidirectional-Inference Variational Autoencoder (BIVA), characterized by a skip-connected generative model and an inference network formed by a bidirectional stochastic inference path, and shows that BIVA reaches state-of-the-art test likelihoods, generates sharp and coherent natural images, and uses the hierarchy of latent variables to capture different aspects of the data distribution.
Expanding variational autoencoders for learning and exploiting latent representations in search distributions
TLDR
It is shown that VAEs can capture dependencies between decision variables and objectives, which is proven to improve the sampling capacity of model-based EAs and represents a promising direction for the application of generative models within EDAs.
Self-Reflective Variational Autoencoder
TLDR
This work redesigns the hierarchical structure of existing VAE architectures so that self-reflection ensures the stochastic flow preserves the factorization of the exact posterior, sequentially updating the latent codes in a recurrent manner consistent with the generative model.
PixelVAE: A Latent Variable Model for Natural Images
Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty
...

References

SHOWING 1-10 OF 59 REFERENCES
A Structured Variational Auto-encoder for Learning Deep Hierarchies of Sparse Features
TLDR
A generative model of natural images is proposed, consisting of a deep hierarchy of layers of latent random variables, each of which follows a new type of distribution that the authors call the rectified Gaussian; this distribution allows spike-and-slab type sparsity while retaining the differentiability necessary for efficient stochastic gradient variational inference.
Ladder Variational Autoencoders
TLDR
A new inference model is proposed, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data dependent approximate likelihood in a process resembling the recently proposed Ladder Network.
Variational Gaussian Process
TLDR
The variational Gaussian process, a Bayesian nonparametric model which adapts its shape to match complex posterior distributions, is constructed, and a universal approximation theorem is proved for the VGP, demonstrating its representative power for learning any model.
Auto-Encoding Variational Bayes
TLDR
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
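For contrast with the discrete case above, the Gaussian reparameterization trick underlying this paper's algorithm can be sketched in a few lines; the tensor names below are illustrative assumptions.

import torch

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I); because the sample is a
    # differentiable function of mu and log_var, gradients of the ELBO
    # estimate reach the encoder parameters (illustrative sketch).
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps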
Deep AutoRegressive Networks
TLDR
An efficient approximate parameter estimation method based on the minimum description length (MDL) principle is derived, which can be seen as maximising a variational lower bound on the log-likelihood, with a feedforward neural network implementing approximate inference.
Stochastic Backpropagation through Mixture Density Distributions
TLDR
Together with the reparameterization trick applied to the individual mixture components, this estimator makes it straightforward to train variational autoencoders with mixture-distributed latent variables, or to perform stochastic variational inference with a mixture density variational posterior.
Neural Variational Inference and Learning in Belief Networks
TLDR
This work proposes a fast non-iterative approximate inference method that uses a feedforward network to implement efficient exact sampling from the variational posterior and shows that it outperforms the wake-sleep algorithm on MNIST and achieves state-of-the-art results on the Reuters RCV1 document dataset.
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a
Learning Deep Generative Models with Doubly Stochastic MCMC
TLDR
This work presents doubly stochastic gradient MCMC, a simple and generic method for (approximate) Bayesian inference of deep generative models (DGMs) in a collapsed continuous parameter space, which outperforms many state-of-the-art competitors.
...