Corpus ID: 203593595

Understanding and Stabilizing GANs' Training Dynamics with Control Theory

@article{Xu2020UnderstandingAS,
  title={Understanding and Stabilizing GANs' Training Dynamics with Control Theory},
  author={Kun Xu and Chongxuan Li and Huanshu Wei and J. Zhu and B. Zhang},
  journal={arXiv preprint arXiv:1909.13188},
  year={2020}
}
Generative adversarial networks (GANs) are effective at generating realistic images, but their training is often unstable. Existing efforts model the training dynamics of GANs in the parameter space, but this analysis cannot directly motivate practically effective stabilizing methods. To this end, we present a conceptually novel perspective from control theory that directly models the dynamics of GANs in the function space and provides simple yet effective methods to stabilize GANs…
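The paper's concrete stabilizer is not given in this excerpt, so the following is only a generic sketch of the control-theoretic idea: treat GAN training as a dynamical system and add negative feedback to make the closed loop contract. It uses the standard bilinear toy game f(x, y) = x·y (a common stand-in for GAN dynamics near equilibrium), not the authors' actual function-space formulation.

```python
# Toy illustration (NOT the paper's method, whose details are not in this
# excerpt): stabilizing min-max gradient dynamics with negative feedback,
# in the spirit of classical control theory.
#
# In the bilinear game f(x, y) = x * y, x stands in for the generator
# parameter and y for the discriminator parameter. Plain simultaneous
# gradient descent-ascent spirals away from the equilibrium (0, 0);
# adding a damping (feedback) term -lam * y makes the iterates contract.

def simulate(steps=500, eta=0.1, lam=0.0, x=1.0, y=1.0):
    """Simultaneous updates on f(x, y) = x * y.

    x <- x - eta * y                 (generator: gradient descent)
    y <- y + eta * x - eta * lam * y (discriminator: gradient ascent
                                      plus negative feedback -lam * y)
    Returns the final distance from the equilibrium (0, 0).
    """
    for _ in range(steps):
        # Tuple assignment keeps the updates simultaneous (both use old values).
        x, y = x - eta * y, y + eta * x - eta * lam * y
    return (x * x + y * y) ** 0.5

if __name__ == "__main__":
    print("no feedback  :", simulate(lam=0.0))  # norm grows: unstable spiral
    print("with feedback:", simulate(lam=1.0))  # norm shrinks toward zero
```

With `lam = 0` the linearized update matrix has spectral radius √(1 + η²) > 1, so the iterates diverge; with `lam = 1` the radius drops below 1 and the dynamics converge. The parameter names (`eta`, `lam`) and the damping-on-the-discriminator choice are illustrative assumptions, not taken from the paper.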
