Assisting the Adversary to Improve GAN Training

@article{Munk2020AssistingTA,
  title={Assisting the Adversary to Improve GAN Training},
  author={Andreas Munk and William Harvey and Frank D. Wood},
  journal={2021 International Joint Conference on Neural Networks (IJCNN)},
  year={2021},
  pages={1-8}
}
Some of the most popular methods for improving the stability and performance of GANs involve constraining or regularizing the discriminator. In this paper we consider a largely overlooked regularization technique which we refer to as the Adversary's Assistant (AdvAs). We motivate this using a different perspective to that of prior work. Specifically, we consider a common mismatch between theoretical analysis and practice: analysis often assumes that the discriminator reaches its optimum on each… 


References

Showing 1-10 of 42 references

An Online Learning Approach to Generative Adversarial Networks

A novel training method named Chekhov GAN is proposed and it is shown that this method provably converges to an equilibrium for semi-shallow GAN architectures, i.e., architectures where the discriminator is a one-layer network and the generator is arbitrary.

Improved Training of Wasserstein GANs

This work proposes an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
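As a rough sketch of this gradient penalty (the PyTorch critic, the NCHW tensor shapes, and the coefficient lambda_gp below are illustrative assumptions, not details from the paper):

import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Interpolate between real and generated samples (assumes NCHW image tensors).
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    # Gradient of the critic's output with respect to its input.
    grads = torch.autograd.grad(outputs=scores, inputs=x_hat,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grads = grads.view(grads.size(0), -1)
    # Penalize deviation of the input-gradient norm from 1.
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

The returned term would be added to the critic's loss during its update step.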

Improved Techniques for Training GANs

This work focuses on two applications of GANs: semi-supervised learning and the generation of images that humans find visually realistic. It presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.

MMD GAN: Towards Deeper Understanding of Moment Matching Network

In evaluations on multiple benchmark datasets, including MNIST, CIFAR-10, CelebA and LSUN, MMD GAN significantly outperforms GMMN and is competitive with other representative GAN works.

AutoGAN: Neural Architecture Search for Generative Adversarial Networks

This paper presents the first preliminary study on introducing the NAS algorithm to generative adversarial networks (GANs), dubbed AutoGAN, and discovers architectures that achieve highly competitive performance compared to current state-of-the-art hand-crafted GANs.

Least Squares Generative Adversarial Networks

This paper proposes Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss function for the discriminator, and shows that minimizing the LSGAN objective corresponds to minimizing the Pearson χ² divergence.
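A minimal sketch of least-squares GAN losses, assuming the common 0-1 target coding and discriminator outputs d_real and d_fake on real and generated batches (other codings from the paper are equally valid):

import torch

def lsgan_d_loss(d_real, d_fake):
    # Discriminator targets: 1 for real samples, 0 for generated samples.
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def lsgan_g_loss(d_fake):
    # Generator tries to make the discriminator output 1 on generated samples.
    return 0.5 * ((d_fake - 1.0) ** 2).mean()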

GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the "Fréchet Inception Distance" (FID), which captures the similarity of generated images to real ones better than the Inception Score.
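In practice the two time scales amount to giving the discriminator and generator separate optimizers with different learning rates; a minimal sketch with stand-in linear modules (the specific rates and betas below are illustrative, not the paper's prescribed values):

import torch

gen = torch.nn.Linear(128, 784)    # stand-in generator
disc = torch.nn.Linear(784, 1)     # stand-in discriminator
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4, betas=(0.0, 0.9))
d_opt = torch.optim.Adam(disc.parameters(), lr=4e-4, betas=(0.0, 0.9))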

Stabilizing Training of Generative Adversarial Networks through Regularization

This work proposes a new regularization approach with low computational cost that yields a stable GAN training procedure and demonstrates the effectiveness of this regularizer across several architectures trained on common benchmark image generation tasks.

Geometric GAN

This paper shows that adversarial generative model training can be decomposed into three geometric steps: separating hyperplane search, discriminator parameter updates away from the separating hyperplane, and generator updates along the normal vector direction of the separating hyperplane.

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
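A minimal sketch of one alternating update in this adversarial process; G, D, g_opt, d_opt, real, and z are placeholder names, and the generator uses the non-saturating loss common in practice rather than the exact minimax objective:

import torch
import torch.nn.functional as F

def gan_step(G, D, g_opt, d_opt, real, z):
    # Discriminator update: push D(real) toward 1 and D(G(z)) toward 0.
    fake = G(z).detach()
    d_real, d_fake = D(real), D(fake)
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()
    # Generator update: push D(G(z)) toward 1 on freshly generated samples.
    d_fake = D(G(z))
    g_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()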