# A Variational Inequality Perspective on Generative Adversarial Nets

```bibtex
@article{Gidel2019AVI,
  title   = {A Variational Inequality Perspective on Generative Adversarial Nets},
  author  = {Gauthier Gidel and Hugo Berard and Pascal Vincent and Simon Lacoste-Julien},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1802.10551}
}
```

Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train. One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet, surprisingly few studies have looked at optimization methods designed for this adversarial training. In this work, we cast GAN optimization problems in the general variational inequality framework. Tapping into the mathematical programming…
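
The variational inequality framework brings classical methods such as extragradient to the two-player GAN game. A minimal sketch on a toy bilinear min-max game (the problem, step size, and names here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Toy bilinear game min_x max_y x*y; its game vector field is
# F(x, y) = (dL/dx, -dL/dy) = (y, -x), with unique equilibrium (0, 0).
def F(x, y):
    return np.array([y, -x])

def extragradient(x, y, eta=0.1, steps=200):
    z = np.array([x, y], dtype=float)
    for _ in range(steps):
        # Extrapolation: take a lookahead step along the vector field...
        z_half = z - eta * F(*z)
        # ...then update the original point using the lookahead gradient.
        z = z - eta * F(*z_half)
    return z

z_eg = extragradient(1.0, 1.0)  # approaches the equilibrium (0, 0)
```

On this game plain simultaneous gradient steps spiral away from the equilibrium, while the extrapolation step lets extragradient contract toward it; this contrast is the usual motivation for VI-style methods.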

## 184 Citations

A Closer Look at the Optimization Landscapes of Generative Adversarial Networks

- Computer Science, Mathematics · ICLR 2020

New visualization techniques for the optimization landscapes of GANs are proposed that enable study of the game vector field formed by concatenating the gradients of both players.

The Unreasonable Effectiveness of Adam on Cycles

- 2019

Generative adversarial networks (GANs) are state-of-the-art generative models for images and other domains. Training GANs is difficult, although not nearly as difficult as expected given theoretical…

Training Generative Adversarial Networks via Stochastic Nash Games

- Medicine, Computer Science · IEEE Transactions on Neural Networks and Learning Systems 2021

A stochastic relaxed forward-backward algorithm for GANs is proposed, and convergence to an exact solution, or to a neighbourhood of one, is shown when the pseudogradient mapping of the game is monotone; applied to the image generation problem, the method shows computational advantages over the extragradient scheme.

Towards a Better Understanding and Regularization of GAN Training Dynamics

- Computer Science · UAI 2019

It is found that, in order to ensure a good convergence rate, two factors of the Jacobian of the GAN training dynamics, the Phase Factor and the Conditioning Factor, should be simultaneously avoided.

Finding Mixed Nash Equilibria of Generative Adversarial Networks

- Computer Science, Mathematics · ICML 2019

A novel algorithmic framework is developed via an infinite-dimensional two-player game and rigorous convergence rates to the mixed NE are proved, resolving the longstanding problem that no provably convergent algorithm exists for general GANs.

Generative Adversarial Networks as stochastic Nash games

- Computer Science · ArXiv 2020

A stochastic relaxed forward-backward algorithm for GANs is proposed, and convergence to an exact solution, or to a neighbourhood of one, is shown when the pseudogradient mapping of the game is monotone; applied to the image generation problem, the method shows computational advantages over the extragradient scheme.

Regularization And Normalization For Generative Adversarial Networks: A Review

- Computer Science · ArXiv 2020

This paper reviews and summarizes research on regularization and normalization for GANs, classifying the methods into six groups: Gradient penalty, Norm normalization and regularization, Jacobian regularization, Layer normalization, Consistency regularization, and Self-supervision.

Top-K Training of GANs: Improving Generators by Making Critics Less Critical

- Mathematics, Computer Science · ArXiv 2020

A simple modification to the Generative Adversarial Network (GAN) training algorithm is introduced that materially improves results with no increase in computational cost: when updating the generator parameters, only the samples the critic scores highest are used. This "top-k update" procedure is shown to be a generally applicable improvement.
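
The top-k selection can be sketched as follows (the function name and toy scores are illustrative; in the actual procedure the scores come from the trained critic):

```python
import numpy as np

def top_k_generator_batch(fake_samples, critic_scores, k):
    """Keep only the k generated samples the critic scores highest;
    the generator gradient is then computed on this subset alone."""
    top_idx = np.argsort(critic_scores)[-k:]  # indices of the top-k scores
    return fake_samples[top_idx]

# Four fake samples with toy critic scores; only the two best survive.
batch = np.arange(4, dtype=float).reshape(4, 1)
scores = np.array([0.1, 0.9, 0.4, 0.7])
kept = top_k_generator_batch(batch, scores, k=2)  # samples 3 and 1
```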

Revisiting Stochastic Extragradient

- Mathematics, Computer Science · AISTATS 2020

This work fixes a fundamental issue in the stochastic extragradient method by providing a new sampling strategy, motivated by approximating implicit updates, and proves guarantees for solving variational inequalities that go beyond existing settings.

Reducing Noise in GAN Training with Variance Reduced Extragradient

- Computer Science, Mathematics · NeurIPS 2019

A novel stochastic variance-reduced extragradient optimization algorithm is proposed which, for a large class of games, improves upon the convergence rates previously established in the literature.

## References

Showing 1–10 of 70 references

Dualing GANs

- Computer Science, Mathematics · NIPS 2017

This paper explores ways to tackle the instability problem of GAN training by dualizing the discriminator, starting from linear discriminators and demonstrating how to extend this intuition to non-linear formulations.

An Online Learning Approach to Generative Adversarial Networks

- Computer Science, Mathematics · ICLR 2018

A novel training method named Chekhov GAN is proposed and it is shown that this method provably converges to an equilibrium for semi-shallow GAN architectures, i.e. architectures where the discriminator is a one layer network and the generator is arbitrary.

Improved Training of Wasserstein GANs

- Computer Science, Mathematics · NIPS 2017

This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input. This performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
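
The gradient penalty pushes the norm of the critic's input gradient toward 1 on points interpolated between real and generated samples. A sketch using a linear toy critic, whose input gradient is available in closed form (the critic, the penalty weight, and all names here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear critic D(x) = w @ x; its gradient w.r.t. the input is just w.
w = np.array([3.0, 4.0])  # ||w|| = 5, so the penalty will be nonzero

def critic_input_grad(x):
    return w

def gradient_penalty(real, fake, lam=10.0):
    # Sample points on segments between real and generated samples, then
    # penalize the squared deviation of the gradient norm from 1.
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake
    grads = np.stack([critic_input_grad(x) for x in x_hat])
    norms = np.linalg.norm(grads, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

real = rng.normal(size=(8, 2))
fake = rng.normal(size=(8, 2))
gp = gradient_penalty(real, fake)  # 10 * (5 - 1)^2 = 160 for this critic
```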

Generalization and equilibrium in generative adversarial nets (GANs) (invited talk)

- Computer Science, Mathematics · ICML 2017

Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good…

f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization

- Computer Science, Mathematics · NIPS 2016

It is shown that any f-divergence can be used for training generative neural samplers and the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models are discussed.

Gradient descent GAN optimization is locally stable

- Computer Science, Mathematics · NIPS 2017

This paper analyzes the "gradient descent" form of GAN optimization, i.e., the natural setting where small gradient steps are taken simultaneously in both generator and discriminator parameters, and proposes an additional regularization term for gradient descent GAN updates that guarantees local stability for both the WGAN and the traditional GAN.
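
These simultaneous small-step dynamics can fail without extra regularization even on simple games. A toy sketch (bilinear game min_x max_y x*y with an illustrative step size, not from the paper) where simultaneous updates spiral away from the equilibrium:

```python
import numpy as np

def simultaneous_gd(x, y, eta=0.1, steps=200):
    # Both players step at once against the current opponent:
    # x <- x - eta * dL/dx = x - eta * y,  y <- y + eta * dL/dy = y + eta * x.
    z = np.array([x, y], dtype=float)
    for _ in range(steps):
        x_k, y_k = z
        z = z - eta * np.array([y_k, -x_k])
    return z

z_gd = simultaneous_gd(1.0, 1.0)  # the iterates drift outward from (0, 0)
```

Each step multiplies the distance to the equilibrium by sqrt(1 + eta^2) > 1, which is one way to see why stability of this scheme needs extra conditions or regularization.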

Stabilizing Adversarial Nets With Prediction Methods

- Computer Science, Mathematics · ICLR 2018

It is shown, both in theory and practice, that the proposed method reliably converges to saddle points and is stable over a wider range of training parameters than non-prediction methods. This makes adversarial networks less likely to "collapse" and enables faster training with larger learning rates.
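
The prediction idea can be sketched on a toy bilinear game min_x max_y x*y (the problem, step size, and names are illustrative assumptions, not from the paper): after the max player's step, the min player responds to a linear extrapolation of the opponent's trajectory instead of its current iterate.

```python
import numpy as np

def prediction_method(x, y, eta=0.1, steps=200):
    y_prev = y
    for _ in range(steps):
        y_new = y + eta * x                # gradient ascent for the max player
        y_bar = y_new + (y_new - y_prev)   # predicted (extrapolated) opponent
        x = x - eta * y_bar                # min player steps vs the prediction
        y_prev, y = y, y_new
    return x, y

x_p, y_p = prediction_method(1.0, 1.0)  # converges toward the saddle (0, 0)
```

On this game the prediction step plays the same stabilizing role as extragradient's extrapolation: plain simultaneous steps diverge, while the predicted updates contract.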

NIPS 2016 Tutorial: Generative Adversarial Networks

- Computer Science · ArXiv 2017

This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs), and describes state-of-the-art image models that combine GANs with other methods.

Unrolled Generative Adversarial Networks

- Computer Science, Mathematics · ICLR 2017

This work introduces a method to stabilize Generative Adversarial Networks by defining the generator objective with respect to an unrolled optimization of the discriminator. This technique is shown to solve the common problem of mode collapse, stabilize training of GANs with complex recurrent generators, and increase the diversity and coverage of the data distribution achieved by the generator.

Adversarial Divergences are Good Task Losses for Generative Modeling

- Computer Science, Mathematics · ICLR 2018

It is argued that adversarial learning, pioneered with generative adversarial networks (GANs), provides an interesting framework to implicitly define more meaningful task losses for generative modeling tasks, such as for generating "visually realistic" images.