Corpus ID: 6104263

Adversarially Learned Inference

@article{Dumoulin2016AdversariallyLI,
  title={Adversarially Learned Inference},
  author={Vincent Dumoulin and Ishmael Belghazi and Ben Poole and Alex Lamb and Mart{\'i}n Arjovsky and Olivier Mastropietro and Aaron C. Courville},
  journal={ArXiv},
  year={2016},
  volume={abs/1606.00704}
}
We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. An adversarial game is cast between these two networks, and a discriminative network is trained to distinguish between joint latent/data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through the…
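The adversarial game described above can be sketched with toy one-dimensional "networks"; the linear maps and discriminator weights below are arbitrary illustrations, not the paper's architectures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-ins for the three networks (illustrative only).
def generator(z):          # generation network: latent -> data space
    return 2.0 * z + 1.0

def encoder(x):            # inference network: data -> latent space
    return (x - 1.0) / 2.0

def discriminator(x, z):   # scores joint (x, z) pairs; hypothetical weights
    return 1.0 / (1.0 + np.exp(-(0.8 * x - 1.5 * z)))

# Joint sample from the generation side: z ~ p(z), x = G(z).
z_p = rng.standard_normal(256)
pair_gen = (generator(z_p), z_p)

# Joint sample from the inference side: x ~ q(x), z = E(x).
x_q = rng.standard_normal(256) * 2.0 + 1.0
pair_inf = (x_q, encoder(x_q))

# Binary cross-entropy of the discriminator over the two joint distributions;
# ALI trains D and (G, E) adversarially on an objective of this shape.
d_gen = discriminator(*pair_gen)
d_inf = discriminator(*pair_inf)
loss_d = -np.mean(np.log(d_inf + 1e-8)) - np.mean(np.log(1.0 - d_gen + 1e-8))
print(float(loss_d))
```

The key point the sketch shows is that the discriminator never sees x or z in isolation: it always scores a joint (x, z) pair, which is what couples the generation and inference networks.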

Adversarial Feature Learning

Bidirectional Generative Adversarial Networks are proposed as a means of learning the inverse mapping of GANs, and it is demonstrated that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.

Decomposed Adversarial Learned Inference

A novel approach, Decomposed Adversarial Learned Inference (DALI), which explicitly matches prior and conditional distributions in both data and code spaces, and puts a direct constraint on the dependency structure of the generative model.

Inferential Wasserstein generative adversarial networks

A novel inferential Wasserstein GAN (iWGAN) model is introduced, which is a principled framework to fuse autoencoders and WGANs and has many advantages over other autoencoder GANs.

Iterative Adversarial Inference with Re-Inference Chain for Deep Graphical Models

It is shown empirically that RGNet surpasses GibbsNet in the quality of inferred latent variables and achieves comparable performance on image generation and inpainting tasks.

IGAN: Inferent and Generative Adversarial Networks

IGAN (Inferent Generative Adversarial Networks) is a neural architecture that learns both a generative and an inference model on a complex high-dimensional data distribution, i.e., a bidirectional mapping between data samples and a simpler low-dimensional latent space, and brings measurable stability and convergence to the classical GAN scheme.

Inverting the Generator of a Generative Adversarial Network

This paper introduces a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN, and demonstrates how the proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets.
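As a sketch of the idea, inversion can be posed as minimizing a reconstruction error over the latent code; the linear "generator", step size, and iteration count below are assumptions for illustration, not the paper's setup:

```python
import numpy as np

# Hypothetical inversion sketch: given a target x and a fixed differentiable
# "generator" G(z) = W z (a stand-in for a pretrained GAN generator), recover
# z* = argmin_z ||G(z) - x||^2 by plain gradient descent.
rng = np.random.default_rng(1)
W = rng.standard_normal((8, 3))      # stand-in generator weights
z_true = rng.standard_normal(3)
x = W @ z_true                       # target data sample

z = np.zeros(3)                      # initial latent guess
for _ in range(5000):
    grad = 2.0 * W.T @ (W @ z - x)   # gradient of ||W z - x||^2 w.r.t. z
    z -= 0.01 * grad                 # small, conservative step size

print(np.allclose(z, z_true, atol=1e-2))
```

With a real GAN the generator is nonlinear and the gradient comes from automatic differentiation, but the projection-by-optimization structure is the same.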

Graphical Generative Adversarial Networks

This work introduces a structured recognition model to infer the posterior distribution of latent variables given observations and generalizes the Expectation Propagation (EP) algorithm to learn the generative model and recognition model jointly.

Coupled Learning for Image Generation and Latent Representation Inference Using MMD

The proposed unsupervised learning framework imposes the structural regularization that the two networks be inverses of each other, so that the learning of the two distributions can be coupled, and is competitive with representative approaches on image generation and latent representation inference.

IVE-GAN: Invariant Encoding Generative Adversarial Networks

Invariant Encoding Generative Adversarial Networks (IVE-GANs) are proposed, a novel GAN framework that introduces an encoding from individual data samples to the latent space by utilizing features in the data which are invariant to certain transformations.

Improving Multi-Agent Generative Adversarial Nets with Variational Latent Representation

A new model design is introduced, called the encoded multi-agent generative adversarial network (E-MGAN), which tackles the mode collapse problem by introducing variational latent representations learned from a variational auto-encoder (VAE) to a multi-agent GAN.
...

References

Showing 1–10 of 45 references

Adversarial Feature Learning

Bidirectional Generative Adversarial Networks are proposed as a means of learning the inverse mapping of GANs, and it is demonstrated that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…
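For reference, the minimax objective introduced in that paper, with generator G and discriminator D, is:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)]
  + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
```

The discriminator is trained to assign high probability to data samples and low probability to generated samples, while the generator is trained to fool it.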

Adversarial Autoencoders

This paper shows how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction and data visualization, and performed experiments on MNIST, Street View House Numbers and Toronto Face datasets.

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.

Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks

In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades off mutual information…

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.

Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.

Auxiliary Deep Generative Models

This work extends deep generative models with auxiliary variables which improves the variational approximation and proposes a model with two stochastic layers and skip connections which shows state-of-the-art performance within semi-supervised learning on MNIST, SVHN and NORB datasets.

Deep Generative Stochastic Networks Trainable by Backprop

Theorems that generalize recent work on the probabilistic interpretation of denoising autoencoders are provided, obtaining along the way an interesting justification for dependency networks and generalized pseudolikelihood.

Auto-Encoding Variational Bayes

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
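The core of that algorithm is the reparameterization trick plus a closed-form KL regularizer. A minimal sketch, assuming a diagonal-Gaussian encoder q(z|x) = N(mu, diag(sigma^2)) and a standard-normal prior (the mu/log_var values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative encoder outputs for a single input x (assumed values).
mu = np.array([0.5, -0.2])
log_var = np.array([-1.0, 0.3])

# Reparameterization: z = mu + sigma * eps with eps ~ N(0, I), so the
# sampling step is differentiable w.r.t. the encoder parameters.
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Closed-form KL( N(mu, sigma^2) || N(0, I) ), the regularizer in the ELBO.
kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
print(round(float(kl), 4))  # → 0.3539
```

The full training objective adds an expected reconstruction log-likelihood term to this KL penalty and maximizes the resulting evidence lower bound by stochastic gradient ascent.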