• Corpus ID: 1033682

Generative Adversarial Nets

@inproceedings{Goodfellow2014GenerativeAN,
  title={Generative Adversarial Nets},
  author={Ian J. Goodfellow and Jean Pouget-Abadie and Mehdi Mirza and Bing Xu and David Warde-Farley and Sherjil Ozair and Aaron C. Courville and Yoshua Bengio},
  booktitle={NIPS},
  year={2014}
}
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. […] Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of…
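Concretely, the adversarial process is a two-player minimax game on the value function

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

where p_z is a prior over the generator's input noise. A minimal training-loop sketch, assuming PyTorch, is below; the toy data, network sizes, and optimizer settings are illustrative placeholders rather than the configurations used in the paper, and the generator update uses the non-saturating loss (maximize log D(G(z))) that the paper suggests in practice instead of minimizing log(1 - D(G(z))).

# Minimal GAN training sketch (PyTorch assumed; toy 2-D data, illustrative
# network sizes and hyperparameters -- not the models from the paper).
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def sample_real(n):                        # stand-in for the training data
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(1000):
    # Discriminator step: push D(x) toward 1 on data, toward 0 on G(z).
    x, z = sample_real(64), torch.randn(64, latent_dim)
    fake = G(z).detach()
    loss_d = bce(D(x), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: "maximize the probability of D making a mistake".
    z = torch.randn(64, latent_dim)
    loss_g = bce(D(G(z)), torch.ones(64, 1))   # non-saturating form
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()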

Citations

Probabilistic Generative Adversarial Networks

The central idea is to integrate a probabilistic model (a Gaussian Mixture Model) into the GAN framework; this supports a new kind of loss function (based on likelihood rather than classification loss) and at the same time gives a meaningful measure of the quality of the outputs generated by the network.
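As a rough illustration of the likelihood idea (not the paper's exact construction), one can fit a Gaussian Mixture Model to features of real samples and score generated samples by their log-likelihood under it; the same quantity can serve both as a quality measure and as a likelihood-based training signal. The feature matrices and component count below are hypothetical.

import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical feature matrices: one row per sample, e.g. from a fixed feature extractor.
real_feats = np.random.randn(1000, 16)
gen_feats = np.random.randn(1000, 16) + 0.3

gmm = GaussianMixture(n_components=5).fit(real_feats)
# Mean log-likelihood of generated features under the real-data mixture:
# higher values mean the generated samples look more like the data.
quality = gmm.score_samples(gen_feats).mean()
print(quality)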

Adaptive Density Estimation for Generative Models

This work shows that its model significantly improves over existing hybrid models, offering GAN-like samples, IS and FID scores competitive with fully adversarial models, and improved likelihood scores.

Analyzing and Improving Adversarial Training for Generative Modeling

This AT generative model achieves image generation performance competitive with state-of-the-art EBMs, while being stable to train, offering better sampling efficiency, and being well suited to image translation and worst-case out-of-distribution detection.

Generative Adversarial Nets: Can we generate a new dataset based on only one training set?

This work aims to generate a new dataset whose distribution differs from that of the training set, and finds that the Jensen-Shannon divergence between the distributions of the generated and training datasets can be controlled by a target δ ∈ [0, 1].
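For reference, the Jensen-Shannon divergence between distributions p and q is JSD(p, q) = \tfrac{1}{2} KL(p \,\|\, m) + \tfrac{1}{2} KL(q \,\|\, m) with m = \tfrac{1}{2}(p + q); with base-2 logarithms it lies in [0, 1], which is the scale the target δ above refers to. A small sketch for discrete distributions (assuming Python/NumPy; not code from the paper):

import numpy as np

def js_divergence(p, q, base=2.0):
    # p, q: discrete probability vectors over the same support.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask])) / np.log(base)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)   # in [0, 1] with base-2 logs

print(js_divergence([0.5, 0.5, 0.0], [0.5, 0.5, 0.0]))   # 0.0: identical distributions
print(js_divergence([1.0, 0.0], [0.0, 1.0]))             # 1.0: disjoint supports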

Hierarchical Mixtures of Generators for Adversarial Learning

This work proposes the hierarchical mixture of generators, inspired by the hierarchical mixture of experts model: it learns a tree structure that implements a hierarchical clustering, with soft splits at the decision nodes and local generators in the leaves, trained just like the original GAN model.
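A highly simplified sketch of the idea is below: a single soft decision node gates between two leaf generators, and the whole tree is used in place of an ordinary generator. The depth, layer sizes, and gating here are illustrative placeholders, not the architecture or training objective from the paper.

import torch
import torch.nn as nn

class SoftTreeGenerator(nn.Module):
    # Depth-1 "mixture of generators": a soft gate over the latent code
    # mixes the outputs of two leaf generators (hypothetical sizes).
    def __init__(self, latent_dim=16, data_dim=64):
        super().__init__()
        self.gate = nn.Linear(latent_dim, 1)              # soft split at the root
        self.leaf_a = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                    nn.Linear(128, data_dim))
        self.leaf_b = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                    nn.Linear(128, data_dim))

    def forward(self, z):
        w = torch.sigmoid(self.gate(z))                   # responsibility of leaf_a
        return w * self.leaf_a(z) + (1 - w) * self.leaf_b(z)

g = SoftTreeGenerator()
samples = g(torch.randn(4, 16))    # used like any generator in adversarial training
print(samples.shape)               # torch.Size([4, 64])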

A Framework of Composite Functional Gradient Methods for Generative Adversarial Models

  • Rie Johnson, T. Zhang
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2021
The theory shows that, with a strong discriminator, a good generator can be obtained by composite functional gradient learning, so that several distance measures between the probability distributions of real and generated data are simultaneously improved after each functional gradient step until they converge to zero.
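Loosely, a functional gradient step moves generated points in data space along the gradient of the discriminator's score, and the generator is then trained to reproduce the moved points; the sketch below is only a caricature of that intuition, not the authors' algorithm, and the discriminator, step size, and shapes are hypothetical.

import torch
import torch.nn as nn

def functional_gradient_step(x, discriminator, step_size=0.1):
    # Move samples x in the direction that increases the discriminator's
    # "looks real" score; repeating such steps is the rough intuition
    # behind composite functional gradient learning (illustrative only).
    x = x.detach().requires_grad_(True)
    score = discriminator(x).sum()
    grad, = torch.autograd.grad(score, x)
    return (x + step_size * grad).detach()

D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
x_next = functional_gradient_step(torch.randn(8, 2), D)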

Imitating Generative Adversarial Networks with Humans

Experiments demonstrate that humans can converge in performance with a small set of queries and show potential for systems in which one or both components of a GAN can be replaced by humans.

AdaGAN: Boosting Generative Models

An iterative procedure, called AdaGAN, is proposed: at every step the authors add a new component to a mixture model by running a GAN algorithm on a re-weighted sample, inspired by boosting algorithms.
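In rough pseudocode, the boosting-style outer loop looks like the following; the re-weighting rule, mixture weights, and the train_gan/reweight callables are placeholders rather than the updates specified in the paper.

import numpy as np

def adagan(data, train_gan, reweight, num_components=5):
    # Illustrative AdaGAN-style loop: each round trains a GAN on a
    # re-weighted sample and adds it as a new component of the mixture.
    n = len(data)
    weights = np.full(n, 1.0 / n)                      # start from uniform weights
    components, mix = [], []
    for t in range(num_components):
        idx = np.random.choice(n, size=n, p=weights)   # draw a re-weighted sample
        components.append(train_gan(data[idx]))        # ordinary GAN training, as a black box
        beta = 1.0 / (t + 1)                           # placeholder mixture weight
        mix = [m * (1 - beta) for m in mix] + [beta]
        weights = reweight(data, components, mix)      # emphasize poorly covered points
        weights = weights / weights.sum()
    return components, mix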

Inverting the Generator of a Generative Adversarial Network

This paper introduces a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN, and demonstrates how the proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets.
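The basic recipe behind such inversion is gradient descent on the latent code so that the generator's output reconstructs a target image; a minimal sketch under that assumption is below (pretrained generator and target supplied by the caller, loss and optimizer choices illustrative).

import torch

def invert(generator, target, latent_dim=100, steps=500, lr=0.05):
    # Find a latent vector z such that generator(z) is close to the target;
    # the optimized z is the target's projection into the latent space.
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = torch.mean((generator(z) - target) ** 2)   # reconstruction error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()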

Partially Conditioned Generative Adversarial Networks

This work argues that standard Conditional GANs are not suitable for synthesis from partial conditioning information, proposes a new adversarial network architecture and training strategy to deal with the ensuing problems, and demonstrates the value of the proposed approach in digit and face image synthesis under partial conditioning.
...

References

Showing 1-10 of 35 references

Deep Generative Stochastic Networks Trainable by Backprop

Theorems that generalize recent work on the probabilistic interpretation of denoising auto-encoders are provided, and along the way an interesting justification is obtained for dependency networks and generalized pseudolikelihood.

A Generative Process for sampling Contractive Auto-Encoders

A procedure is proposed for generating samples that are consistent with the local structure captured by a contractive auto-encoder; experimentally it appears to converge quickly and to mix well between modes, compared to Restricted Boltzmann Machines and Deep Belief Networks.

Learning Generative Models via Discriminative Approaches

  • Z. Tu
  • Computer Science
    2007 IEEE Conference on Computer Vision and Pattern Recognition
  • 2007
A new learning framework is proposed that progressively learns a target generative distribution through discriminative approaches, improving both the modeling capability of discriminative models and their robustness.

Deep AutoRegressive Networks

An efficient approximate parameter estimation method based on the minimum description length (MDL) principle is derived, which can be seen as maximising a variational lower bound on the log-likelihood, with a feedforward neural network implementing approximate inference.
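The bound referred to here is the standard variational lower bound on the log-likelihood, with the approximate posterior q produced by the feedforward inference network:

\log p(x) \;\ge\; \mathbb{E}_{h \sim q(h \mid x)}\big[\log p(x, h) - \log q(h \mid x)\big]

The negative of the right-hand side is an expected description length for the latent variables and the data, which is what gives the bound its minimum description length (MDL) reading.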

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

A Fast Learning Algorithm for Deep Belief Nets

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.

Neural Variational Inference and Learning in Belief Networks

This work proposes a fast non-iterative approximate inference method that uses a feedforward network to implement efficient exact sampling from the variational posterior and shows that it outperforms the wake-sleep algorithm on MNIST and achieves state-of-the-art results on the Reuters RCV1 document dataset.

Auto-Encoding Variational Bayes

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
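The part of AEVB that lets gradients flow through the sampling step is the reparameterization of the posterior sample; a minimal sketch of a reparameterized Gaussian encoder/decoder step is below, assuming PyTorch and hypothetical layer sizes (not the models from the paper).

import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Linear(784, 2 * 20)     # illustrative encoder: outputs mean and log-variance
dec = nn.Linear(20, 784)         # illustrative decoder

x = torch.rand(32, 784)
mu, logvar = enc(x).chunk(2, dim=1)
eps = torch.randn_like(mu)
z = mu + torch.exp(0.5 * logvar) * eps     # reparameterization: z stays differentiable in mu, logvar
recon = torch.sigmoid(dec(z))

# Negative variational lower bound = reconstruction term + KL(q(z|x) || N(0, I)).
recon_loss = F.binary_cross_entropy(recon, x, reduction='sum')
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
loss = recon_loss + kl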

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.

Deep Boltzmann Machines

A new learning algorithm is presented for Boltzmann machines that contain many layers of hidden variables; it is made more efficient by a layer-by-layer “pre-training” phase that allows variational inference to be initialized with a single bottom-up pass.