• Corpus ID: 1687220

Improved Techniques for Training GANs

@inproceedings{Salimans2016ImprovedTF,
  title={Improved Techniques for Training GANs},
  author={Tim Salimans and Ian J. Goodfellow and Wojciech Zaremba and Vicki Cheung and Alec Radford and Xi Chen},
  booktitle={NIPS},
  year={2016}
}
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. [...] We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.
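Among the training procedures this paper introduces is feature matching: instead of directly maximizing the discriminator's output, the generator is trained to match the expected activations of an intermediate discriminator layer on real and generated data. A minimal PyTorch sketch, assuming a `discriminator_features` module that exposes that intermediate layer:

    import torch

    def feature_matching_loss(discriminator_features, real_batch, fake_batch):
        # Feature matching: match the mean intermediate discriminator
        # features of real and generated data, || E[f(x)] - E[f(G(z))] ||^2.
        f_real = discriminator_features(real_batch).mean(dim=0)
        f_fake = discriminator_features(fake_batch).mean(dim=0)
        # The real-feature mean is treated as a constant target.
        return torch.sum((f_real.detach() - f_fake) ** 2)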
DeLiGAN: Generative Adversarial Networks for Diverse and Limited Data
TLDR
The proposed DeLiGAN can generate images of handwritten digits, objects and hand-drawn sketches, all using limited amounts of data, and introduces a modified version of the Inception Score, a measure that has been found to correlate well with human assessment of generated samples.
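The unmodified Inception Score that DeLiGAN adapts comes from the main paper above: IS = exp(E_x KL(p(y|x) || p(y))), computed from the class probabilities an Inception network assigns to generated samples. A minimal sketch, assuming `probs` is an (N, num_classes) array of softmax outputs:

    import numpy as np

    def inception_score(probs, eps=1e-12):
        # p(y): marginal class distribution over all generated samples.
        p_y = probs.mean(axis=0, keepdims=True)
        # KL(p(y|x) || p(y)) per sample, then exponentiate the mean.
        kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
        return float(np.exp(kl.mean()))

DeLiGAN's modified version (m-IS) is not reproduced here.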
OpenGAN: Open Set Generative Adversarial Networks
TLDR
This work proposes an open-set GAN architecture conditioned on a per-sample feature embedding drawn from a metric space, and shows that classifier performance can be significantly improved by augmenting the training data with OpenGAN samples from classes outside the GAN training distribution.
A Guided Learning Approach for Generative Adversarial Networks
TLDR
The model is called Guided GAN since the autoencoder (guiding network) provides a direction for training the GAN (generative network); the combined model minimizes both the forward and reverse Kullback-Leibler (KL) divergences, exploiting the complementary statistical properties of the two.
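The two KL directions penalize different failure modes: the forward KL(p_data || p_model) punishes the model for assigning low probability to data modes (mode dropping), while the reverse KL(p_model || p_data) punishes probability mass placed where the data has none (spurious samples). A small numerical illustration on discrete distributions:

    import numpy as np

    def kl(p, q, eps=1e-12):
        return float((p * (np.log(p + eps) - np.log(q + eps))).sum())

    p_data  = np.array([0.5, 0.5, 0.0])     # data has two modes
    p_model = np.array([0.98, 0.01, 0.01])  # model covers only one

    print(kl(p_data, p_model))  # forward KL: large, a data mode is missed
    print(kl(p_model, p_data))  # reverse KL: penalizes mass on the empty bin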
Self-Supervised Feature Learning by Learning to Spot Artifacts
  • S. Jenni, P. Favaro
  • Computer Science
    2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
  • 2018
TLDR
A novel self-supervised learning method based on adversarial training is proposed: a discriminator network is trained to distinguish real images from images with synthetic artifacts, and features from its intermediate layers can then be transferred to other data domains and tasks.
A novel measure to evaluate generative adversarial networks based on direct analysis of generated images
TLDR
The likeness score (LS) is designed to evaluate GAN performance and has been applied to evaluate several typical GANs and compared with two commonly used GAN evaluation methods: IS and FID, and four additional measures.
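The likeness score's own definition is not reproduced in this snippet. For context, one of the baselines it is compared against, the Fréchet Inception Distance (FID), measures the distance between Gaussian fits to real and generated feature embeddings. A sketch, assuming `feats_real` and `feats_fake` are (N, d) arrays of Inception features:

    import numpy as np
    from scipy.linalg import sqrtm

    def fid(feats_real, feats_fake):
        # FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2})
        mu_r, mu_g = feats_real.mean(axis=0), feats_fake.mean(axis=0)
        c_r = np.cov(feats_real, rowvar=False)
        c_g = np.cov(feats_fake, rowvar=False)
        covmean = sqrtm(c_r @ c_g).real  # discard tiny imaginary residue
        return float(((mu_r - mu_g) ** 2).sum()
                     + np.trace(c_r + c_g - 2 * covmean))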
Multi-Adversarial Variational Autoencoder Nets for Simultaneous Image Generation and Classification
TLDR
Multi-Adversarial Variational autoEncoder Networks (MAVENs), a novel deep generative model that incorporates an ensemble of discriminators in a VAE-GAN network to perform simultaneous adversarial learning and variational inference, are introduced and applied to the generation of synthetic images.
Activation Maximization Generative Adversarial Nets
TLDR
A new metric, called AM Score, is proposed to provide a more accurate estimation of the sample quality of generative adversarial nets, and the proposed model also outperforms the baseline methods in the new metric.
On Depth and Complexity of Generative Adversarial Networks
Although generative adversarial networks (GANs) have achieved state-of-the-art results in generating realistic-looking images, they are often parameterized by neural networks with relatively few [...]
Inverting the Generator of a Generative Adversarial Network
TLDR
This paper introduces a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN, and demonstrates how the proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets.
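The inversion idea can be sketched as gradient descent on the latent code: given a pretrained generator G and a target image x, find the z whose reconstruction G(z) is closest to x. A minimal PyTorch sketch, with `G`, `x`, and the choice of reconstruction loss as assumptions (the paper's exact procedure may differ):

    import torch

    def invert(G, x, latent_dim=100, steps=1000, lr=0.01):
        # Optimize z directly; the generator's weights stay frozen.
        z = torch.randn(1, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = torch.mean((G(z) - x) ** 2)  # pixel reconstruction error
            loss.backward()
            opt.step()
        return z.detach()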
Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration
TLDR
Experimental results demonstrate that Boundary-Calibration GANs not only generate realistic images like the original GANs but also achieve better model compatibility than the original GANs.

References

Showing 1-10 of 34 references
Generative Moment Matching Networks
TLDR
This work formulates a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks, using MMD to learn to generate codes that can then be decoded to produce samples.
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
TLDR
A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
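The value function of that two-player game, as given in the paper, is

    \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)]
        + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

where D is trained to tell real samples from generated ones and G is trained to fool it.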
Training generative neural networks via Maximum Mean Discrepancy optimization
TLDR
This work considers training a deep neural network to generate samples from an unknown distribution given i.i.d. data, framing learning as an optimization that minimizes a two-sample test statistic, and proves bounds on the generalization error incurred by optimizing the empirical MMD.
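The two-sample statistic in question is the squared maximum mean discrepancy, MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]. A minimal sketch of the (biased) estimator; the Gaussian kernel and its bandwidth are assumptions:

    import numpy as np

    def gaussian_kernel(a, b, sigma=1.0):
        # k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2)) for all pairs
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def mmd2_biased(x, y, sigma=1.0):
        return (gaussian_kernel(x, x, sigma).mean()
                + gaussian_kernel(y, y, sigma).mean()
                - 2 * gaussian_kernel(x, y, sigma).mean())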
Semi-Supervised Learning with Generative Adversarial Networks
TLDR
This work extends Generative Adversarial Networks to the semi-supervised context by forcing the discriminator network to output class labels and shows that this method can be used to create a more data-efficient classifier and that it allows for generating higher quality samples than a regular GAN.
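The extension can be sketched as a (K+1)-class discriminator: real labeled samples are classified into their K true classes and generated samples into an extra fake class. A minimal PyTorch sketch, with the discriminator `D` (outputting K+1 logits) assumed:

    import torch
    import torch.nn.functional as F

    def semi_supervised_d_loss(D, x_labeled, y, x_fake, num_classes):
        logits_real = D(x_labeled)  # shape (B, K+1)
        logits_fake = D(x_fake)
        # Index K (the last logit) is reserved for "generated".
        fake_label = torch.full((x_fake.size(0),), num_classes,
                                dtype=torch.long)
        return (F.cross_entropy(logits_real, y)
                + F.cross_entropy(logits_fake, fake_label))

Unlabeled real data can additionally be pushed away from the fake class, which is what makes the classifier more data-efficient.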
Generating images with recurrent adversarial networks
TLDR
This work proposes a recurrent generative model that can be trained using adversarial training to generate very good image samples, and proposes a way to quantitatively compare adversarial networks by having the generators and discriminators of these networks compete against each other.
Distributional Smoothing with Virtual Adversarial Training
TLDR
When the LDS based regularization was applied to supervised and semi-supervised learning for the MNIST dataset, it outperformed all the training methods other than the current state of the art method, which is based on a highly advanced generative model.
Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model.
On distinguishability criteria for estimating generative models
TLDR
It is shown that a variant of NCE, with a dynamic generator network, is equivalent to maximum likelihood estimation, and that the key next step in GAN research is to determine whether GANs converge, and if not, to modify their training algorithm to force convergence.
Distributional Smoothing by Virtual Adversarial Examples
TLDR
By including in the objective function the local smoothness of the predictive distribution around each training data point, the work of Goodfellow et al. (2015) is extended to the setting of semi-supervised training, and current state-of-the-art supervised and semi-supervised methods on the permutation-invariant MNIST classification task are eclipsed.
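The local distributional smoothness (LDS) penalty can be sketched as follows: find a small "virtual adversarial" perturbation r that maximally changes the model's predictive distribution, then penalize that change. A minimal PyTorch sketch using one power-iteration step, with `model` (returning logits) and the hyperparameters as assumptions:

    import torch
    import torch.nn.functional as F

    def lds_penalty(model, x, xi=1e-6, eps=2.0):
        with torch.no_grad():
            p = F.softmax(model(x), dim=1)  # current predictions
        # One power-iteration step approximates the worst-case direction;
        # norms are taken over the whole batch here for brevity.
        d = torch.randn_like(x)
        d = (xi * d / d.norm()).requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p,
                      reduction='batchmean')
        grad = torch.autograd.grad(kl, d)[0]
        r_vadv = eps * grad / grad.norm()   # virtual adversarial perturbation
        return F.kl_div(F.log_softmax(model(x + r_vadv), dim=1), p,
                        reduction='batchmean')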