Corpus ID: 21331622

Inverting Variational Autoencoders for Improved Generative Accuracy

@article{Gemp2016InvertingVA,
  title={Inverting Variational Autoencoders for Improved Generative Accuracy},
  author={Ian M. Gemp and Ishan Durugkar and Mario Parente and Melinda Darby Dyar and Sridhar Mahadevan},
  journal={arXiv: Learning},
  year={2016}
}
Recent advances in semi-supervised learning with deep generative models have shown promise in generalizing from small labeled datasets ($\mathbf{x},\mathbf{y}$) to large unlabeled ones ($\mathbf{x}$). In the case where the codomain has known structure, a large unfeatured dataset ($\mathbf{y}$) is potentially available. We develop a parameter-efficient, deep semi-supervised generative model for the purpose of exploiting this untapped data source. Empirical results show improved performance in… 
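
The abstract does not spell out the architecture, so as a point of reference, here is a minimal sketch of the kind of semi-supervised deep generative model it builds on (in the style of Kingma et al., 2014); this is an illustrative assumption, not the authors' code, and all names are hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedVAE(nn.Module):
    """Conditional VAE over (x, y): inference net q(z|x,y), generator p(x|y,z)."""
    def __init__(self, x_dim=784, y_dim=10, z_dim=32, h_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim + y_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.decoder = nn.Sequential(nn.Linear(y_dim + z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

    def elbo(self, x, y):
        # Per-example variational lower bound for a labeled pair (x, y)
        h = self.encoder(torch.cat([x, y], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        x_logits = self.decoder(torch.cat([y, z], dim=-1))
        recon = -F.binary_cross_entropy_with_logits(
            x_logits, x, reduction='none').sum(-1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return recon - kl
```

Training would maximize this bound on labeled pairs and a marginalized variant on unlabeled $\mathbf{x}$; the paper's stated contribution is to additionally exploit unpaired $\mathbf{y}$ samples, which this sketch does not attempt to cover.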

Citations

CVA2E: A Conditional Variational Autoencoder With an Adversarial Training Process for Hyperspectral Imagery Classification

TLDR
A novel generative model named the conditional variational autoencoder with an adversarial training process (CVA2E) is proposed for hyperspectral imagery classification, combining variational inference with an adversarial training process for spectral sample generation.

Generative Adversarial Networks for Realistic Synthesis of Hyperspectral Samples

TLDR
This work addresses the scarcity of annotated hyperspectral data required to train deep neural networks by training generative adversarial networks on public datasets and shows that these models are not only able to capture the underlying distribution, but also to generate genuine-looking and physically plausible spectra.

Classification de données massives de télédétection [Classification of Massive Remote Sensing Data]

The multiplication of data sources and the availability of high-resolution imaging systems have brought Earth observation into the world of big data. This has enabled…

References

Showing 1-10 of 31 references

Semi-supervised Learning with Deep Generative Models

TLDR
It is shown that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.
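
For context, a sketch of that paper's "M2" objective: labeled pairs are trained with the bound

$$\log p_\theta(\mathbf{x}, y) \geq \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x},y)}\big[\log p_\theta(\mathbf{x}|y,\mathbf{z}) + \log p_\theta(y) + \log p(\mathbf{z}) - \log q_\phi(\mathbf{z}|\mathbf{x},y)\big] = -\mathcal{L}(\mathbf{x}, y),$$

while unlabeled points marginalize the class under the inference network:

$$\log p_\theta(\mathbf{x}) \geq \sum_{y} q_\phi(y|\mathbf{x})\,\big(-\mathcal{L}(\mathbf{x}, y)\big) + \mathcal{H}\big(q_\phi(y|\mathbf{x})\big) = -\mathcal{U}(\mathbf{x}).$$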

Auxiliary Deep Generative Models

TLDR
This work extends deep generative models with auxiliary variables, which improve the variational approximation, and proposes a model with two stochastic layers and skip connections that shows state-of-the-art performance in semi-supervised learning on the MNIST, SVHN, and NORB datasets.
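
The auxiliary-variable idea, in brief: introducing $\mathbf{a}$ lets the marginal posterior approximation

$$q_\phi(\mathbf{z}|\mathbf{x}) = \int q_\phi(\mathbf{z}|\mathbf{a},\mathbf{x})\, q_\phi(\mathbf{a}|\mathbf{x})\, d\mathbf{a}$$

be non-Gaussian (and hence more expressive) even when both factors are Gaussian.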

Bottleneck Conditional Density Estimation

TLDR
It is shown that the hybrid training procedure enables models to achieve competitive results in the MNIST quadrant prediction task in the fully-supervised setting, and sets new benchmarks in the semi-supervised regime for MNIST, SVHN, and CelebA.

beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial…
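
The constrained framework amounts to reweighting the KL term of the usual VAE objective:

$$\mathcal{L}(\theta, \phi; \mathbf{x}) = \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}\big[\log p_\theta(\mathbf{x}|\mathbf{z})\big] - \beta\, D_{\mathrm{KL}}\big(q_\phi(\mathbf{z}|\mathbf{x}) \,\|\, p(\mathbf{z})\big),$$

where $\beta = 1$ recovers the standard VAE and $\beta > 1$ pressures the representation toward disentanglement.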

PixelVAE: A Latent Variable Model for Natural Images

Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty…

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…
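
The adversarial process is the two-player minimax game

$$\min_G \max_D\; \mathbb{E}_{\mathbf{x}\sim p_{\mathrm{data}}}\big[\log D(\mathbf{x})\big] + \mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}}\big[\log\big(1 - D(G(\mathbf{z}))\big)\big],$$

where D is trained to distinguish data from samples and G is trained to fool it.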

Auto-Encoding Variational Bayes

TLDR
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
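
The bound being optimized is the familiar ELBO,

$$\log p_\theta(\mathbf{x}) \geq \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}\big[\log p_\theta(\mathbf{x}|\mathbf{z})\big] - D_{\mathrm{KL}}\big(q_\phi(\mathbf{z}|\mathbf{x}) \,\|\, p(\mathbf{z})\big),$$

made differentiable in $\phi$ via the reparameterization $\mathbf{z} = \boldsymbol{\mu} + \boldsymbol{\sigma} \odot \boldsymbol{\epsilon}$ with $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$.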

Neural Variational Inference for Text Processing

TLDR
This paper introduces a generic variational inference framework for generative and conditional models of text, and constructs an inference network conditioned on the discrete text input to provide the variational distribution.

Categorical Reparameterization with Gumbel-Softmax

TLDR
It is shown that the Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.
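
For illustration, a from-scratch sketch of the sampling step (PyTorch also ships this as torch.nn.functional.gumbel_softmax):

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0):
    # Gumbel(0, 1) noise: g = -log(-log(u)) with u ~ Uniform(0, 1)
    u = torch.rand_like(logits)
    g = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    # Relaxed categorical sample; lower tau -> closer to one-hot
    return F.softmax((logits + g) / tau, dim=-1)
```

Because the sample is a smooth function of the logits, gradients flow through it, which is what enables the reported speedups over score-function estimators.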

Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation

TLDR
Expectation-Maximization (EM) methods are developed for training semantic image segmentation models under weakly supervised and semi-supervised settings; extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark while requiring significantly less annotation effort.