Corpus ID: 4053393

Are GANs Created Equal? A Large-Scale Study

@inproceedings{Lucic2018AreGC,
  title={Are GANs Created Equal? A Large-Scale Study},
  author={Mario Lucic and Karol Kurach and Marcin Michalski and Sylvain Gelly and Olivier Bousquet},
  booktitle={NeurIPS},
  year={2018}
}
Generative adversarial networks (GANs) are a powerful subclass of generative models. Despite a very rich research activity leading to numerous interesting GAN algorithms, it is still very hard to assess which algorithm(s) perform better than others. We conduct a neutral, multi-faceted large-scale empirical study on state-of-the-art models and evaluation measures. We find that most models can reach similar scores with enough hyperparameter optimization and random restarts. This suggests that… 
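To make the comparison protocol concrete, here is a minimal Python sketch of a budget-matched evaluation of this kind: each GAN variant gets a random hyperparameter search with several random restarts, and the resulting score distribution (e.g. FID, lower is better) is summarized rather than only the single best run. The callable `train_and_score` and the per-variant hyperparameter samplers are hypothetical placeholders, not code released with the paper.

```python
import numpy as np

def compare_gan_variants(train_and_score, variants, budget=100, seeds=5):
    """Hypothetical sketch of a budget-matched comparison: for each GAN
    variant, run a random hyperparameter search with several random
    restarts and summarize the score distribution (e.g. FID) instead of
    reporting only the single best run."""
    results = {}
    for name, sample_hparams in variants.items():
        scores = []
        for _ in range(budget):
            hparams = sample_hparams()        # draw one random configuration
            for seed in range(seeds):         # random restarts per configuration
                scores.append(train_and_score(name, hparams, seed))
        results[name] = {
            "best": float(np.min(scores)),    # lower FID is better
            "median": float(np.median(scores)),
            "spread": float(np.std(scores)),
        }
    return results
```

Reporting the median and spread alongside the best score is what lets such a study separate genuine algorithmic differences from the luck of a favorable hyperparameter draw.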

Citations

On the Evaluation of Generative Adversarial Networks By Discriminative Models
TLDR
This work uses Siamese neural networks to propose a domain-agnostic evaluation metric that is robust to common GAN issues such as mode dropping and mode invention, and does not require any pretrained classifier.
Quality Evaluation of GANs Using Cross Local Intrinsic Dimensionality
TLDR
It is demonstrated that an intrinsic dimensional characterization of the data space learned by a GAN model leads to an effective evaluation metric for GAN quality, and a new evaluation measure, CrossLID, is proposed that assesses the local intrinsic dimensionality (LID) of real-world data with respect to neighborhoods found in GAN-generated samples.
Prb-GAN: A Probabilistic Framework for GAN Modelling
TLDR
Prb-GAN is presented, a new variation that uses dropout to create a distribution over the network parameters, with the posterior learnt using variational inference; the method is extremely simple and requires very little modification to existing GAN architectures.
Consistency Regularization for Generative Adversarial Networks
TLDR
This work proposes a simple, effective training stabilizer based on the notion of consistency regularization, which improves state-of-the-art FID scores for conditional generation and achieves the best FID scores for unconditional image generation on CIFAR-10 and CelebA compared to other regularization methods; a hedged sketch of such a consistency term appears after this list of citing papers.
An Empirical Comparison of Generative Adversarial Network (GAN) Measures
TLDR
This work investigates the proper dimension of the latent space, compares FID and IS as implemented for evaluating the generated data distribution, and contrasts the two metrics across improved GAN models.
A Large-Scale Study on Regularization and Normalization in GANs
TLDR
This work takes a sober view of the current state of GANs from a practical perspective, discusses and evaluates common pitfalls and reproducibility issues, open-sources the code on GitHub, and provides pre-trained models on TensorFlow Hub.
CAGAN: Consistent Adversarial Training Enhanced GANs
TLDR
This paper proposes a novel approach to adversarial training between one generator and an exponential number of critics sampled from the original discriminative neural network via dropout, and demonstrates that the method can maintain training stability and alleviate mode collapse.
A Step Beyond Generative Multi-adversarial Networks
TLDR
The structure of the Generative Multi-Adversarial Network (GMAN), a variation of GANs, is modified and a new formulation based on its discriminating capability is introduced to improve the performance of the generative adversarial network.
How good is my GAN?
TLDR
This paper introduces two measures based on image classification, GAN-train and GAN-test, which approximate the recall (diversity) and precision (image quality) of GANs respectively, evaluates a number of recent GAN approaches based on these two measures, and demonstrates a clear difference in performance.
...
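The consistency-regularization entry above describes a training stabilizer for the discriminator. A minimal sketch of such a consistency term, assuming a PyTorch discriminator and an augmentation callable `augment` (both names are illustrative, not the authors' code), penalizes the discriminator for changing its output when a real image is replaced by a semantics-preserving augmentation of itself:

```python
import torch.nn.functional as F

def consistency_loss(discriminator, real_images, augment, lambda_cr=10.0):
    # Discriminator outputs on the original images and on a
    # semantics-preserving augmentation (e.g. random flip / small shift).
    d_real = discriminator(real_images)
    d_aug = discriminator(augment(real_images))
    # Penalize the discriminator for being sensitive to the augmentation.
    return lambda_cr * F.mse_loss(d_aug, d_real)
```

Such a term is typically added only to the discriminator's loss; the generator objective is left unchanged.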

References

Showing 1-10 of 29 references
Do GANs learn the distribution? Some Theory and Empirics
Improved Training of Wasserstein GANs
TLDR
This work proposes an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning (a sketch of this penalty appears after the reference list).
Assessing Generative Models via Precision and Recall
TLDR
A novel definition of precision and recall for distributions is proposed that disentangles the divergence into two separate dimensions; it is intuitive, retains desirable properties, and naturally leads to an efficient algorithm that can be used to evaluate generative models.
Generalization and equilibrium in generative adversarial nets (GANs) (invited talk)
Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and have even found unusual uses such as designing good…
The GAN Landscape: Losses, Architectures, Regularization, and Normalization
TLDR
This work reproduces the current state of the art of GANs from a practical perspective, discusses common pitfalls and reproducibility issues, and goes beyond prior work by fairly exploring the GAN landscape.
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs, semi-supervised learning and the generation of images that humans find visually realistic, presents ImageNet samples with unprecedented resolution, and shows that the methods enable the model to learn recognizable features of ImageNet classes.
On the Quantitative Analysis of Decoder-Based Generative Models
TLDR
This work proposes to use Annealed Importance Sampling for evaluating log-likelihoods of decoder-based models and validates its accuracy using bidirectional Monte Carlo, then analyzes the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.
How (not) to Train your Generative Model: Scheduled Sampling, Likelihood, Adversary?
TLDR
This paper presents a critique of scheduled sampling, a state-of-the-art training method that contributed to the winning entry to the MSCOCO image captioning benchmark in 2015, and presents the first theoretical analysis that explains why adversarial training tends to produce samples with higher perceived quality.
Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step
TLDR
It is demonstrated that GANs are able to learn distributions in situations where the divergence-minimization point of view predicts they would fail, contributing to a growing body of evidence that GAN training may be more usefully viewed as approaching Nash equilibria via trajectories that do not necessarily minimize a specific divergence at each step.
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
TLDR
This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the Fréchet Inception Distance (FID), which captures the similarity of generated images to real ones better than the Inception Score (the standard FID formula is restated after the reference list).
...
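The Improved Training of Wasserstein GANs entry above describes penalizing the norm of the critic's gradient with respect to its input. A minimal PyTorch sketch of such a gradient penalty (an illustration of the idea, not the authors' released code) is:

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Random interpolation between real and generated samples.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    # Gradient of the critic's output with respect to its input.
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True)[0]
    grads = grads.view(grads.size(0), -1)
    # Two-sided penalty pushing the gradient norm towards 1.
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```

The penalty is added to the critic loss, commonly with a weight of around 10 as recommended in the WGAN-GP paper.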
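The TTUR entry above introduces the Fréchet Inception Distance. For reference, FID fits a Gaussian to Inception-network activations of the real and the generated samples, with means \(\mu_r, \mu_g\) and covariances \(\Sigma_r, \Sigma_g\), and reports the Fréchet distance between the two Gaussians; the standard formula (restated here, not quoted from the paper) is:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
             + \operatorname{Tr}\!\bigl(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\bigr)
```

A lower FID indicates that the generated distribution is closer to the real one in this feature space.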