Corpus ID: 14570343

Learning in Implicit Generative Models

@article{Mohamed2016LearningII,
  title={Learning in Implicit Generative Models},
  author={Shakir Mohamed and Balaji Lakshminarayanan},
  journal={ArXiv},
  year={2016},
  volume={abs/1610.03483}
}
Generative adversarial networks (GANs) provide an algorithmic framework for constructing generative models with several appealing properties: they do not require a likelihood function to be specified, only a generating procedure; they provide samples that are sharp and compelling; and they allow us to harness our knowledge of building highly accurate neural network classifiers. Here, we develop our understanding of GANs with the aim of forming a rich view of this growing area of machine learning…
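
The abstract's central point, that implicit models are trained through a generating procedure and a classifier rather than an explicit likelihood, can be made concrete with a short sketch. The following is an illustrative example only, assuming PyTorch, a toy one-dimensional data distribution, and arbitrary hyperparameters, none of which come from the paper; it shows the standard GAN setup in which the discriminator is a binary classifier whose logits implicitly estimate a density ratio between data and model samples.

```python
# Minimal sketch (assumes PyTorch): the discriminator is a binary classifier whose
# logits estimate the log density ratio between data and model samples; the generator
# is trained only through samples, never through an explicit likelihood.
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_data(n):
    # Toy target distribution N(4, 0.5); only samples are available, never a density.
    return 4.0 + 0.5 * torch.randn(n, 1)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()  # classifier loss; its optimum recovers a density ratio

for step in range(2000):
    real = sample_data(64)
    fake = generator(torch.randn(64, 8))  # push noise through the generating procedure

    # Discriminator step: classify real vs. generated samples.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step (non-saturating loss): make generated samples look "real".
    g_loss = bce(discriminator(generator(torch.randn(64, 8))), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```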

Citations

Flow-GAN: Bridging implicit and prescribed learning in generative models
TLDR: This work proposes Flow-GANs, generative adversarial networks whose generator is specified as a normalizing flow model and can therefore perform exact likelihood evaluation; experiments on the MNIST and CIFAR-10 datasets show that Flow-GANs learn generative models that attain low generalization error, as measured by log-likelihood, while generating high-quality samples (a sketch of the underlying change-of-variables computation follows this list).
Good Task Losses for Generative Modeling
Generative modeling of high-dimensional data such as images is a notoriously difficult and ill-defined problem. In particular, how to evaluate a learned generative model is unclear. In this paper, we…
Learning Generative Models using Transformations
TLDR: This thesis shows an example of incorporating a simple yet fairly representative renderer developed in computer graphics into IGM transformations for generating realistic and highly structured body data, which paves a new path for learning IGMs, and proposes a new generic algorithm that can be built on top of many existing approaches and brings performance improvements over the state of the art.
Variational Inference using Implicit Distributions
TLDR: This paper provides a unifying review of existing algorithms, establishing connections between variational autoencoders, adversarially learned inference, operator VI, GAN-based image reconstruction, and more, and provides a framework for building new algorithms.
Variational Approaches for Auto-Encoding Generative Adversarial Networks
TLDR: This paper develops a principle upon which auto-encoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model, and describes a unified objective for optimization.
Adversarial Message Passing For Graphical Models
TLDR: This work treats GANs as a basis for likelihood-free inference in generative models and generalizes them to Bayesian posterior inference over factor graphs, finding that Bayesian inference on structured models can be performed only with sampling and discrimination when using nonparametric variational families, without access to explicit distributions.
Generative models for natural images
TLDR: This thesis discusses modern generative modelling of natural images based on neural networks, and finds that VAEs are the most promising, although their overall performance leaves a lot of room for improvement.
Learning Implicit Generative Models by Teaching Explicit Ones
TLDR: This paper presents a learning-by-teaching (LBT) approach to learning implicit models, which intrinsically avoids the mode collapse problem by optimizing a KL divergence rather than the JS divergence used in GANs.
Adversarial Divergences are Good Task Losses for Generative Modeling
TLDR: It is argued that adversarial learning, pioneered with generative adversarial networks (GANs), provides an interesting framework to implicitly define more meaningful task losses for generative modeling tasks, such as generating "visually realistic" images.
Learning the Base Distribution in Implicit Generative Models
TLDR: This paper argues that learning a complicated distribution over the latent space of an auto-encoder enables more accurate modeling of complicated data distributions, and proposes a two-stage optimization procedure which maximizes an approximate implicit density model.
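
The Flow-GAN entry above rests on the change-of-variables formula: if the generator is invertible with a tractable Jacobian, the model density can be evaluated exactly even though sampling still looks like an implicit model. Below is a minimal sketch of that mechanism with a single affine flow layer, assuming PyTorch; it is not the Flow-GAN architecture or training objective, only an illustration of the exact likelihood computation that a flow-based generator makes possible.

```python
# Minimal sketch (assumes PyTorch): one invertible affine "flow" layer, showing why a
# flow-based generator permits exact likelihood evaluation via change of variables.
# Purely illustrative; Flow-GAN itself uses deeper flows and an adversarial loss.
import math
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, z):
        # Generator direction: latent z -> sample x.
        return z * torch.exp(self.log_scale) + self.shift

    def log_prob(self, x):
        # Exact log-likelihood: log p(x) = log p_z(f^{-1}(x)) + log |det d f^{-1}/dx|.
        z = (x - self.shift) * torch.exp(-self.log_scale)            # invert the transform
        log_pz = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(dim=1)  # standard normal prior
        log_det = -self.log_scale.sum()                               # log |det dz/dx|
        return log_pz + log_det

flow = AffineFlow(dim=2)
x = flow(torch.randn(5, 2))   # sample exactly like an implicit model
print(flow.log_prob(x))       # ...yet the likelihood is still exactly computable
```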

References

Showing 1-10 of 77 references
Adversarial Message Passing For Graphical Models
TLDR: This work treats GANs as a basis for likelihood-free inference in generative models and generalizes them to Bayesian posterior inference over factor graphs, finding that Bayesian inference on structured models can be performed only with sampling and discrimination when using nonparametric variational families, without access to explicit distributions.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
TLDR: It is shown that any f-divergence can be used for training generative neural samplers, and the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models are discussed.
Generative Moment Matching Networks
TLDR: This work formulates a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks, using MMD to learn to generate codes that can then be decoded to produce samples (a minimal sketch of the MMD statistic follows this reference list).
Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
TLDR: Adversarial Variational Bayes (AVB) is a technique for training variational autoencoders with arbitrarily expressive inference models; by introducing an auxiliary discriminative network, it rephrases the maximum-likelihood problem as a two-player game, establishing a principled connection between VAEs and generative adversarial networks (GANs).
On the Quantitative Analysis of Decoder-Based Generative Models
TLDR: This work proposes to use Annealed Importance Sampling for evaluating log-likelihoods of decoder-based models and validates its accuracy using bidirectional Monte Carlo, then analyzes the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.
Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy
TLDR: This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GANs), in which a model attempts to generate realistic samples and a discriminator attempts to tell these apart from data samples.
Training generative neural networks via Maximum Mean Discrepancy optimization
TLDR: This work considers training a deep neural network to generate samples from an unknown distribution given i.i.d. data, framing learning as an optimization problem that minimizes a two-sample test statistic, and proves bounds on the generalization error incurred by optimizing the empirical MMD.
Improved Techniques for Training GANs
TLDR: This work focuses on two applications of GANs, semi-supervised learning and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
A note on the evaluation of generative models
TLDR: This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models, with a focus on image models, and shows that three of the currently most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional.
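
Since MMD-based training appears in several of the references above (generative moment matching networks and the optimized-MMD work), a small sketch of the underlying two-sample statistic may help. It is a minimal illustration assuming NumPy, using the biased V-statistic form of squared MMD with a fixed RBF bandwidth; the referenced papers use minibatch estimators inside a training loop and, in one case, optimize the kernel itself.

```python
# Minimal sketch (assumes NumPy): biased V-statistic estimator of squared MMD with an
# RBF kernel, the two-sample statistic that MMD-based generative training minimizes
# between data samples and model samples.
import numpy as np

def rbf_kernel(a, b, bandwidth=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))
    sq_dists = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # MMD^2(P, Q) ~= mean k(x, x') + mean k(y, y') - 2 mean k(x, y)
    k_xx = rbf_kernel(x, x, bandwidth)
    k_yy = rbf_kernel(y, y, bandwidth)
    k_xy = rbf_kernel(x, y, bandwidth)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=(200, 2))    # "real" samples
model = rng.normal(loc=0.5, scale=1.0, size=(200, 2))   # "generated" samples
print(mmd2(data, model))  # larger values indicate the two sample sets differ more
```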