Corpus ID: 1033682

Generative Adversarial Nets

@inproceedings{Goodfellow2014GenerativeAN,
  title={Generative Adversarial Nets},
  author={Ian J. Goodfellow and Jean Pouget-Abadie and Mehdi Mirza and Bing Xu and David Warde-Farley and Sherjil Ozair and Aaron C. Courville and Yoshua Bengio},
  booktitle={NIPS},
  year={2014}
}
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. [...] Key Result: Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of…
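The adversarial game described above corresponds to the paper's two-player minimax objective over the value function V(D, G):

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

D is trained to tell training data from samples G(z), while G is trained to fool it; at the unique optimum the generator distribution matches the data distribution and D(x) = 1/2 everywhere.

Citations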
Generative Moment Matching Networks
TLDR
This work proposes a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks, using maximum mean discrepancy (MMD) to learn to generate codes that can then be decoded to produce samples.
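As a rough illustration of the moment-matching objective (a minimal sketch, not the authors' code; the Gaussian kernel and its bandwidth are illustrative choices), the squared MMD between a generated batch and a data batch can be computed and minimized directly by backpropagation:

import torch

def gaussian_kernel(a, b, sigma=1.0):
    # k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * sigma^2))
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimator of MMD^2: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

# Training step: sample z, compute loss = mmd2(generator(z), data_batch),
# then call loss.backward() and step an optimizer on the generator's parameters.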
Probabilistic Generative Adversarial Networks
TLDR
The central idea is to integrate a probabilistic model (a Gaussian mixture model) into the GAN framework, which supports a new kind of loss function (based on likelihood rather than classification loss) and at the same time gives a meaningful measure of the quality of the outputs generated by the network.
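The likelihood-based loss rests on the standard Gaussian mixture density: for a K-component model, a sample x is scored by

$$\log p(x) = \log \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k),$$

so outputs are judged by how probable they are under the mixture rather than by a classifier's decision.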
Adaptive Density Estimation for Generative Models
TLDR
This work shows that the proposed model significantly improves over existing hybrid models, offering GAN-like samples, IS and FID scores competitive with fully adversarial models, and improved likelihood scores.
A Framework of Composite Functional Gradient Methods for Generative Adversarial Models
  • Rie Johnson, T. Zhang
  • Computer Science, Medicine
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2021
TLDR
The theory shows that with a strong discriminator, a good generator can be obtained by composite functional gradient learning, so that several distance measures between the probability distributions of real and generated data are simultaneously improved after each functional gradient step until they converge to zero.
AdaGAN: Boosting Generative Models
TLDR
An iterative procedure, called AdaGAN, is proposed: at every step a new component is added to a mixture model by running a GAN algorithm on a re-weighted sample, inspired by boosting algorithms.
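A schematic sketch of that outer loop, under stated assumptions: train_component and coverage_score are hypothetical stand-ins for training a single GAN on weighted data and for scoring how well the current mixture covers each example, and the exponential re-weighting rule is illustrative rather than the paper's exact update; only the fixed-beta mixture bookkeeping follows the AdaGAN description.

import numpy as np

def adagan(data, rounds, beta, train_component, coverage_score):
    """Boosting-style construction of a mixture of GAN generators."""
    weights = np.full(len(data), 1.0 / len(data))  # data weights, start uniform
    mixture = []                                   # list of (generator, weight)
    for _ in range(rounds):
        g = train_component(data, weights)         # fit a GAN to weighted data
        if not mixture:
            mixture = [(g, 1.0)]
        else:
            # Shrink old component weights by (1 - beta); the new one gets beta.
            mixture = [(gen, w * (1.0 - beta)) for gen, w in mixture] + [(g, beta)]
        # Up-weight examples the current mixture still covers poorly.
        covered = coverage_score(mixture, data)    # higher = better covered
        weights = weights * np.exp(-covered)
        weights = weights / weights.sum()
    return mixture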
Inverting the Generator of a Generative Adversarial Network
TLDR
This paper introduces a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN, and demonstrates how the proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets.
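A minimal sketch of inversion by optimization, assuming a pretrained differentiable PyTorch generator G and a target image x; the squared-error loss and hyperparameters here are illustrative, not necessarily the paper's exact choices:

import torch

def invert(G, x, latent_dim=100, steps=1000, lr=1e-2):
    """Search for a latent code z whose decoding G(z) reconstructs x."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), x)  # reconstruction error
        loss.backward()        # gradients flow through the frozen generator
        opt.step()             # the optimizer updates only z, not G's parameters
    return z.detach()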
Partially Conditioned Generative Adversarial Networks
TLDR
This work argues that standard conditional GANs are not suitable for generation when only part of the conditioning information is available, proposes a new adversarial network architecture and training strategy to deal with the ensuing problems, and demonstrates the value of the approach in digit and face image synthesis under partial conditioning.
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
TLDR
It is shown that any f-divergence can be used for training generative neural samplers, and the effects of various choices of divergence function on training complexity and on the quality of the obtained generative models are discussed.
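The variational divergence minimization behind f-GAN replaces the original GAN criterion with a saddle-point objective that lower-bounds an arbitrary f-divergence between the data distribution P and the model distribution Q_θ, using the Fenchel conjugate f* of the divergence's defining function f and a variational (critic) function T_ω:

$$\min_\theta \max_\omega \; \mathbb{E}_{x \sim P}[T_\omega(x)] - \mathbb{E}_{x \sim Q_\theta}[f^*(T_\omega(x))]$$

The Jensen–Shannon choice of f recovers (up to constants) the original GAN objective above.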
An Online Learning Approach to Generative Adversarial Networks
TLDR
A novel training method named Chekhov GAN is proposed and shown to provably converge to an equilibrium for semi-shallow GAN architectures, i.e., architectures where the discriminator is a one-layer network and the generator is arbitrary.
Training Generative Adversarial Networks via Stochastic Nash Games
TLDR
A stochastic relaxed forward–backward algorithm for GANs is proposed, and convergence to an exact solution, or to a neighbourhood of it, is shown when the pseudogradient mapping of the game is monotone; applied to the image generation problem, the method shows computational advantages over the extragradient scheme.

References

SHOWING 1-10 OF 41 REFERENCES
Deep Generative Stochastic Networks Trainable by Backprop
TLDR
Theorems that generalize recent work on the probabilistic interpretation of denoising autoencoders are provided, yielding along the way an interesting justification for dependency networks and generalized pseudolikelihood.
A Generative Process for sampling Contractive Auto-Encoders
TLDR
A procedure is proposed for generating samples consistent with the local structure captured by a contractive auto-encoder; experimentally, it appears to converge quickly and mix well between modes compared to Restricted Boltzmann Machines and Deep Belief Networks.
Learning Generative Models via Discriminative Approaches
  • Zhuowen Tu
  • Computer Science
  • 2007 IEEE Conference on Computer Vision and Pattern Recognition
  • 2007
TLDR
A new learning framework is proposed in this paper which progressively learns a target generative distribution through discriminative approaches, improving the modeling capability of discriminative models and their robustness.
Generalized Denoising Auto-Encoders as Generative Models
TLDR
A different attack on the problem is proposed, which handles arbitrary (but sufficiently noisy) corruption and arbitrary reconstruction loss, covers both discrete and continuous-valued variables, and removes the bias due to non-infinitesimal corruption noise.
Deep AutoRegressive Networks
TLDR
An efficient approximate parameter estimation method based on the minimum description length (MDL) principle is derived, which can be seen as maximising a variational lower bound on the log-likelihood, with a feedforward neural network implementing approximate inference.
Learning Multiple Layers of Features from Tiny Images
TLDR
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
A Fast Learning Algorithm for Deep Belief Nets
TLDR
A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Neural Variational Inference and Learning in Belief Networks
TLDR
This work proposes a fast non-iterative approximate inference method that uses a feedforward network to implement efficient exact sampling from the variational posterior and shows that it outperforms the wake-sleep algorithm on MNIST and achieves state-of-the-art results on the Reuters RCV1 document dataset.
Auto-Encoding Variational Bayes
TLDR
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
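The algorithm summarized here (the variational auto-encoder) maximizes the evidence lower bound (ELBO) on the marginal log-likelihood:

$$\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big)$$

where q_φ is the recognition (encoder) network and p_θ the generative (decoder) network, trained jointly by stochastic gradient ascent on this bound.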
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…
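The "stochastic backpropagation" in the title refers to differentiating through Gaussian sampling via the reparameterization z = μ_φ(x) + σ_φ(x) ⊙ ε with ε ~ N(0, I), which moves the parameters inside a deterministic transform so that

$$\nabla_\phi \, \mathbb{E}_{q_\phi(z \mid x)}[f(z)] = \mathbb{E}_{\varepsilon \sim \mathcal{N}(0, I)}\big[\nabla_\phi \, f\big(\mu_\phi(x) + \sigma_\phi(x) \odot \varepsilon\big)\big],$$

allowing ordinary backpropagation to carry gradients through the sampling step.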