Corpus ID: 3571422

A Variational Inequality Perspective on Generative Adversarial Nets

@article{Gidel2019AVI,
  title={A Variational Inequality Perspective on Generative Adversarial Nets},
  author={Gauthier Gidel and Hugo Berard and Pascal Vincent and Simon Lacoste-Julien},
  journal={ArXiv},
  year={2019},
  volume={abs/1802.10551}
}
Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train. One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet, surprisingly few studies have looked at optimization methods designed for this adversarial training. In this work, we cast GAN optimization problems in the general variational inequality framework. Tapping into the mathematical programming… 
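To make the abstract's framing concrete, here is a hedged sketch (in notation of my choosing, not necessarily the paper's) of how a zero-sum GAN objective min_\theta max_\varphi L(\theta, \varphi) is cast as a variational inequality: stack the parameters as \omega = (\theta, \varphi), build the game vector field from both players' gradients, and ask for a point \omega^{*} satisfying the inequality below.

    F(\omega) \;=\; \begin{pmatrix} \nabla_{\theta} L(\theta, \varphi) \\ -\nabla_{\varphi} L(\theta, \varphi) \end{pmatrix}, \qquad \omega = (\theta, \varphi) \in \Omega,

    \text{find } \omega^{*} \in \Omega \ \text{ such that } \ F(\omega^{*})^{\top} (\omega - \omega^{*}) \;\ge\; 0 \ \text{ for all } \omega \in \Omega.

Classical VI techniques such as averaging and extrapolation (extragradient) then apply directly to F, which is the connection several of the citing works below build on.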
A Closer Look at the Optimization Landscapes of Generative Adversarial Networks
TLDR
New visualization techniques for the optimization landscapes of GANs are proposed that enable study of the game vector field resulting from the concatenation of the gradients of both players.
The Unreasonable Effectiveness of Adam on Cycles
Generative adversarial networks (GANs) are state-of-the-art generative models for images and other domains. Training GANs is difficult, although not nearly as difficult as expected given theoretical…
Training Generative Adversarial Networks via Stochastic Nash Games.
TLDR
A stochastic relaxed forward-backward algorithm for GANs is proposed, and convergence to an exact solution, or to a neighbourhood of it, is shown when the pseudogradient mapping of the game is monotone; the algorithm is applied to the image generation problem, where it shows computational advantages over the extragradient scheme.
Towards a Better Understanding and Regularization of GAN Training Dynamics
TLDR
It is found that, to ensure a good convergence rate, two factors of the Jacobian of the GAN training dynamics should be simultaneously avoided: the Phase Factor and the Conditioning Factor.
Finding Mixed Nash Equilibria of Generative Adversarial Networks
TLDR
A novel algorithmic framework is developed via an infinite-dimensional two-player game and rigorous convergence rates to the mixed NE are proved, resolving the longstanding problem that no provably convergent algorithm exists for general GANs.
Generative Adversarial Networks as stochastic Nash games
TLDR
A stochastic relaxed forward-backward algorithm for GANs is proposed, and convergence to an exact solution, or to a neighbourhood of it, is shown when the pseudogradient mapping of the game is monotone; the algorithm is applied to the image generation problem, where it shows computational advantages over the extragradient scheme.
Regularization And Normalization For Generative Adversarial Networks: A Review
TLDR
This paper reviews and summarizes research on regularization and normalization for GANs and classifies the methods into six groups: gradient penalty, norm normalization and regularization, Jacobian regularization, layer normalization, consistency regularization, and self-supervision.
Top-K Training of GANs: Improving Generators by Making Critics Less Critical
TLDR
A simple modification to the Generative Adversarial Network (GAN) training algorithm is introduced that materially improves results with no increase in computational cost: when updating the generator parameters, gradient contributions from the batch elements that the critic scores as least realistic are zeroed out, and this `top-k update' procedure is shown to be a generally applicable improvement.
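For illustration, a minimal PyTorch-style sketch of that top-k idea, assuming a critic-score generator loss; generator, critic, and the cutoff k are placeholder names of mine rather than the paper's code:

import torch

def topk_generator_step(generator, critic, g_optimizer, batch_size, z_dim, k):
    """One generator update that keeps only the top-k critic-scored samples."""
    z = torch.randn(batch_size, z_dim)         # latent noise
    fake = generator(z)                        # candidate samples
    scores = critic(fake).squeeze(-1)          # critic score per sample
    top_scores, _ = torch.topk(scores, k)      # keep the k "most realistic" samples
    loss = -top_scores.mean()                  # generator loss over the kept samples only
    g_optimizer.zero_grad()
    loss.backward()
    g_optimizer.step()
    return loss.item()

Samples below the cutoff contribute no gradient at all to the generator update.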
Revisiting Stochastic Extragradient
TLDR
This work fixes a fundamental issue in the stochastic extragradient method by providing a new sampling strategy that is motivated by approximating implicit updates, and proves guarantees for solving variational inequalities that go beyond existing settings.
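For reference, a bare-bones stochastic extragradient step looks like the sketch below; the one detail I read the summary above as changing is that the same minibatch is reused for the extrapolation and the update (stoch_grad, params, and batch are placeholders, and this is not the authors' code):

def same_sample_extragradient_step(params, stoch_grad, step_size, batch):
    """One stochastic extragradient step on a list of parameter tensors.

    stoch_grad(params, batch) is assumed to return the stochastic game
    vector field evaluated on that minibatch. The same batch is used for
    both evaluations, which is the sampling strategy assumed here.
    """
    # Extrapolation: take a provisional step to a lookahead point.
    g = stoch_grad(params, batch)
    lookahead = [p - step_size * gp for p, gp in zip(params, g)]
    # Update: re-evaluate the field at the lookahead point on the same batch.
    g_look = stoch_grad(lookahead, batch)
    return [p - step_size * gp for p, gp in zip(params, g_look)]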
Reducing Noise in GAN Training with Variance Reduced Extragradient
TLDR
A novel stochastic variance-reduced extragradient optimization algorithm is proposed which, for a large class of games, improves upon the convergence rates previously proposed in the literature.

References

Showing 1-10 of 70 references
Dualing GANs
TLDR
This paper explores ways to tackle the instability problem of GAN training by dualizing the discriminator, starting from linear discriminators and demonstrating how to extend this intuition to non-linear formulations.
An Online Learning Approach to Generative Adversarial Networks
TLDR
A novel training method named Chekhov GAN is proposed, and it is shown that this method provably converges to an equilibrium for semi-shallow GAN architectures, i.e., architectures where the discriminator is a one-layer network and the generator is arbitrary.
Improved Training of Wasserstein GANs
TLDR
This work proposes an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
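The penalty described above is commonly implemented along the following lines (a hedged PyTorch sketch, not the authors' code; the coefficient lambda_gp = 10 and a single-logit critic are assumptions):

import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP style term: push the critic's gradient norm toward 1 on
    random interpolates between real and generated samples."""
    batch_size = real.size(0)
    eps = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=real.device)
    interpolates = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interpolates)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=interpolates,
                                create_graph=True)[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()

This term is added to the critic loss in place of weight clipping.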
Generalization and equilibrium in generative adversarial nets (GANs) (invited talk)
Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good…
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
TLDR
It is shown that any f-divergence can be used for training generative neural samplers and the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models are discussed.
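The variational bound behind that training procedure, as I recall it, is the standard lower bound on an f-divergence in terms of its convex conjugate f^{*}, with the critic T parameterized by a neural network:

    D_f(P \,\|\, Q) \;\ge\; \sup_{T} \Big( \mathbb{E}_{x \sim P}[\,T(x)\,] - \mathbb{E}_{x \sim Q}[\,f^{*}(T(x))\,] \Big),

so maximizing the right-hand side over the critic and minimizing over the generator's distribution Q yields a GAN-style saddle-point problem for any choice of f.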
Gradient descent GAN optimization is locally stable
TLDR
This paper analyzes the "gradient descent" form of GAN optimization, i.e., the natural setting where both players simultaneously take small gradient steps in the generator and discriminator parameters, and proposes an additional regularization term for gradient descent GAN updates that is able to guarantee local stability for both the WGAN and the traditional GAN.
Stabilizing Adversarial Nets With Prediction Methods
TLDR
It is shown, both in theory and practice, that the proposed method reliably converges to saddle points, and is stable with a wider range of training parameters than a non-prediction method, which makes adversarial networks less likely to "collapse," and enables faster training with larger learning rates.
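My reading of the prediction step summarized above: when one player is updated, it plays against an extrapolated ("predicted") copy of the other player's parameters rather than the current ones. A minimal sketch under that assumption (the helper name and the parameter-dict layout are mine):

import copy

import torch

def predicted_copy(module, prev_params):
    """Copy `module` and extrapolate each parameter one step ahead:
    predicted = current + (current - previous)."""
    predicted = copy.deepcopy(module)
    with torch.no_grad():
        for (name, p_cur), p_pred in zip(module.named_parameters(),
                                         predicted.parameters()):
            p_pred.copy_(2.0 * p_cur - prev_params[name])
    return predicted

The generator step would then differentiate through predicted_copy(discriminator, prev_params), where prev_params stores the discriminator's parameters saved before its most recent update.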
NIPS 2016 Tutorial: Generative Adversarial Networks
TLDR
This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs), and describes state-of-the-art image models that combine GANs with other methods.
Unrolled Generative Adversarial Networks
TLDR
This work introduces a method to stabilize Generative Adversarial Networks by defining the generator objective with respect to an unrolled optimization of the discriminator, and shows how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.
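A simplified sketch of that unrolling, under two assumptions of mine: the discriminator outputs one logit per sample, and the gradient flowing back through the inner optimization itself is dropped (the full method keeps that second-order term):

import copy

import torch

def unrolled_generator_loss(generator, discriminator, real_batches, z_dim,
                            unroll_steps=5, d_lr=1e-3):
    """Generator loss measured against a copy of the discriminator that has
    been trained for a few extra steps (the "unrolled" critic)."""
    d_copy = copy.deepcopy(discriminator)
    d_opt = torch.optim.SGD(d_copy.parameters(), lr=d_lr)
    bce = torch.nn.BCEWithLogitsLoss()

    # Inner loop: let the copied discriminator adapt to the current generator.
    for real in real_batches[:unroll_steps]:
        z = torch.randn(real.size(0), z_dim)
        fake = generator(z).detach()
        d_loss = (bce(d_copy(real), torch.ones(real.size(0), 1)) +
                  bce(d_copy(fake), torch.zeros(fake.size(0), 1)))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

    # Generator objective evaluated against the unrolled discriminator.
    z = torch.randn(real_batches[0].size(0), z_dim)
    fake = generator(z)
    return bce(d_copy(fake), torch.ones(fake.size(0), 1))

Backpropagating this loss into the generator uses the unrolled critic's forward pass only; the generator thereby anticipates how the critic would respond to it, which is the intuition behind the mode-collapse results cited above.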
Adversarial Divergences are Good Task Losses for Generative Modeling
TLDR
It is argued that adversarial learning, pioneered with generative adversarial networks (GANs), provides an interesting framework to implicitly define more meaningful task losses for generative modeling tasks, such as for generating "visually realistic" images.