Training Generative Adversarial Networks with Adaptive Composite Gradient
@article{Qi2021TrainingGA,
  title   = {Training Generative Adversarial Networks with Adaptive Composite Gradient},
  author  = {Huiqing Qi and Fang Li and Shengli Tan and Xiangyun Zhang},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2111.05508}
}
The wide application of Generative Adversarial Networks benefits from successful training methods that guarantee an objective function converges to a local minimum. Nevertheless, designing an efficient and competitive training method remains challenging due to the cyclic behaviors of some gradient-based methods and the expensive computational cost of methods based on the Hessian matrix. This paper proposes the Adaptive Composite Gradients (ACG) method, linearly convergent in…
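The cyclic behavior the abstract refers to can be seen on the simplest adversarial problem. The sketch below is an illustration on the bilinear game f(x, y) = x*y (an assumed toy example, not the paper's ACG method): simultaneous gradient descent-ascent rotates around the equilibrium and, once discretized, slowly spirals away from it.

```python
import numpy as np

eta = 0.1          # step size
x, y = 1.0, 1.0    # start away from the equilibrium (0, 0)
for t in range(200):
    gx, gy = y, x                        # grad_x f = y, grad_y f = x for f = x*y
    x, y = x - eta * gx, y + eta * gy    # descent in x, ascent in y, both from the old point
print(f"distance from (0, 0) after 200 steps: {np.hypot(x, y):.2f}")  # ~3.8, started at 1.41
```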
One Citation
Cyclegan Network for Sheet Metal Welding Drawing Translation
- Materials Science, Computer Science · SSRN Electronic Journal
- 2022
This work presents an automatic translation method for welded structural engineering drawings based on Cyclic Generative Adversarial Networks (CycleGAN), which meets the welding engineering precision standard and addresses the low drawing-recognition efficiency of the welding manufacturing process.
References
Showing 1-10 of 58 references
Training Generative Adversarial Networks by Solving Ordinary Differential Equations
- Computer Science · NeurIPS
- 2020
This work hypothesises that instabilities in training GANs arise from the integration error in discretising the continuous dynamics, and experimentally verifies that well-known ODE solvers can stabilise training when combined with a regulariser that controls the integration error.
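As an illustration of the idea (the toy game, step size, and the choice of Heun's RK2 solver are assumptions, not the paper's exact setup), integrating the same bilinear vector field with a second-order solver tracks the continuous dynamics far more closely than the plain simultaneous-gradient (Euler) step shown earlier.

```python
import numpy as np

def v(theta):
    """Descent/ascent vector field for the bilinear toy game f(x, y) = x*y."""
    x, y = theta
    return np.array([-y, x])

def heun_step(theta, h):
    """One RK2 (Heun) step: average the slope at the current point and at an
    Euler trial point, reducing the discretisation error of a plain Euler step."""
    k1 = v(theta)
    k2 = v(theta + h * k1)
    return theta + 0.5 * h * (k1 + k2)

theta = np.array([1.0, 1.0])
for _ in range(200):
    theta = heun_step(theta, h=0.1)
print(np.linalg.norm(theta))  # ~1.42: stays near the circular orbit, unlike Euler
```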
Training GANs with predictive projection centripetal acceleration
- Computer Science
- 2020
This work proposes a novel predictive projection centripetal acceleration (PPCA) method to alleviate the cyclic behaviors of generative adversarial networks.
The Mechanics of n-Player Differentiable Games
- Computer Science · ICML
- 2018
The key result is a decomposition of the second-order dynamics into two components: the first relates to potential games, which reduce to gradient descent on an implicit function; the second relates to Hamiltonian games, a new class of games that obey a conservation law akin to the conservation laws of classical mechanical systems.
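A minimal sketch of the decomposition referred to above, computed for the simultaneous-gradient field of the zero-sum bilinear game f(x, y) = x*y (an assumed toy example; the paper treats general n-player games):

```python
import numpy as np

# Simultaneous-gradient field xi(x, y) = (df/dx, -df/dy) = (y, -x)
# for the zero-sum bilinear game f(x, y) = x * y; its Jacobian w.r.t. (x, y):
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

S = 0.5 * (J + J.T)   # symmetric part    -> "potential" component
A = 0.5 * (J - J.T)   # antisymmetric part -> "Hamiltonian" component

print(S)  # zero matrix: this game has no potential component
print(A)  # pure rotation: the dynamics are purely Hamiltonian, hence they cycle
```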
Stabilizing GAN Training with Multiple Random Projections
- Computer Science · ArXiv
- 2017
This work proposes training a single generator simultaneously against an array of discriminators, each of which looks at a different random low-dimensional projection of the data, so that the generator must satisfy all discriminators at once.
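A minimal sketch of that setup, with hypothetical shapes and stand-in linear discriminators (none of these names come from the paper): each discriminator scores only its own fixed random projection of the batch, and the generator loss averages over all of them.

```python
import numpy as np

rng = np.random.default_rng(0)
data_dim, proj_dim, num_disc, batch = 784, 32, 8, 16

# One fixed random low-dimensional projection per discriminator.
projections = [rng.standard_normal((proj_dim, data_dim)) / np.sqrt(data_dim)
               for _ in range(num_disc)]
# Stand-in discriminators: a linear score followed by a sigmoid.
disc_weights = [rng.standard_normal(proj_dim) * 0.01 for _ in range(num_disc)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generator_loss(fake_batch):
    """Average non-saturating GAN loss over all projected discriminators:
    the generator has to fool every projection simultaneously."""
    losses = []
    for W, w in zip(projections, disc_weights):
        scores = sigmoid(fake_batch @ W.T @ w)        # (batch,)
        losses.append(-np.mean(np.log(scores + 1e-8)))
    return float(np.mean(losses))

fake_batch = rng.standard_normal((batch, data_dim))
print(generator_loss(fake_batch))
```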
Interaction Matters: A Note on Non-asymptotic Local Convergence of Generative Adversarial Networks
- Computer Science · AISTATS
- 2019
This work presents a simple yet unified non-asymptotic local convergence theory for smooth two-player games, which subsumes several discrete-time gradient-based saddle-point dynamics and reveals the surprising nature of the off-diagonal interaction term.
The Numerics of GANs
- Computer Science · NIPS
- 2017
This paper analyzes the numerics of common algorithms for training Generative Adversarial Networks (GANs) and designs a new algorithm that overcomes some of their limitations and has better convergence properties.
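A widely cited fix from this line of analysis regularizes the simultaneous-gradient field v with the gradient of ½‖v‖² (a consensus-style update). The sketch below illustrates that idea on the bilinear toy game; the game, step size, and γ are assumptions, and it may not match the paper's exact algorithm.

```python
import numpy as np

gamma, eta = 0.5, 0.1

def v(theta):
    x, y = theta
    return np.array([y, -x])          # simultaneous-gradient field for f = x*y

def jacobian_v(theta):
    return np.array([[0.0, 1.0],      # constant Jacobian for the bilinear game
                     [-1.0, 0.0]])

theta = np.array([1.0, 1.0])
for _ in range(200):
    field = v(theta)
    # grad of 0.5 * ||v||^2 is J^T v; subtracting it damps the rotation.
    theta = theta - eta * (field + gamma * jacobian_v(theta).T @ field)
print(np.linalg.norm(theta))          # shrinks toward the equilibrium at the origin
```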
Revisiting Stochastic Extragradient
- Computer Science · AISTATS
- 2020
This work fixes a fundamental issue in the stochastic extragradient method by providing a new sampling strategy, motivated by approximating implicit updates, and proves guarantees for solving variational inequalities that go beyond existing settings.
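The sampling fix highlighted above amounts to reusing one stochastic sample for both the extrapolation step and the update step of extragradient. A minimal sketch on an assumed stochastic bilinear problem (an illustration, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = 1.0 + 0.5 * rng.standard_normal(1000)   # stochastic coefficients a_i
eta = 0.05

def v(theta, a):
    """Stochastic descent/ascent field for f(x, y) = a * x * y."""
    x, y = theta
    return np.array([a * y, -a * x])

theta = np.array([1.0, 1.0])
for _ in range(2000):
    a = rng.choice(coeffs)             # draw ONE sample...
    half = theta - eta * v(theta, a)   # ...use it for the extrapolation step
    theta = theta - eta * v(half, a)   # ...and reuse it for the update step
print(np.linalg.norm(theta))           # contracts toward the saddle at the origin
```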
Convergence of Gradient Methods on Bilinear Zero-Sum Games
- Computer Science · ICLR
- 2020
This work restricts attention to bilinear zero-sum games and gives a systematic analysis of popular gradient updates, in both simultaneous and alternating versions, offering formal evidence that alternating updates converge "better" than simultaneous ones.
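A quick numerical check of that claim on the bilinear game f(x, y) = x*y (an illustrative choice, not taken from the paper): simultaneous updates blow up, while alternating updates keep the iterates bounded.

```python
import numpy as np

eta, steps = 0.1, 500

xs, ys = 1.0, 1.0      # simultaneous iterates
xa, ya = 1.0, 1.0      # alternating iterates
for _ in range(steps):
    xs, ys = xs - eta * ys, ys + eta * xs          # both players use the old point
    xa = xa - eta * ya                             # x updates first...
    ya = ya + eta * xa                             # ...y then reacts to the new x
print(f"simultaneous: {np.hypot(xs, ys):.2f}")     # grows without bound
print(f"alternating:  {np.hypot(xa, ya):.2f}")     # stays bounded near 1.41
```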
How Generative Adversarial Networks and Their Variants Work
- Computer Science · ACM Comput. Surv.
- 2019
This survey explains how GANs operate and the fundamental meaning of the various objective functions that have been suggested recently, and focuses on how GANs can be combined with an autoencoder framework.
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
- Computer Science · NIPS
- 2016
It is shown that any f-divergence can be used for training generative neural samplers, and the benefits of various choices of divergence function for training complexity and for the quality of the obtained generative models are discussed.
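For context, the variational lower bound underlying this approach is the standard conjugate-duality bound on an f-divergence, written below for completeness (the critic parameterization and output activations are details of the paper).

```latex
% Lower bound on the f-divergence between the data distribution P and the model
% Q_theta, with a critic T_omega and f* the convex conjugate of f; f-GAN trains
% the generator/critic pair as a minimax game on this bound:
D_f(P \,\|\, Q_\theta)
  \;\ge\; \sup_{T_\omega}
  \Big( \mathbb{E}_{x \sim P}\big[\, T_\omega(x) \,\big]
        \;-\; \mathbb{E}_{x \sim Q_\theta}\big[\, f^{*}\!\big(T_\omega(x)\big) \,\big] \Big)
```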