
GANs beyond divergence minimization

@article{JolicoeurMartineau2018GANsBD,
  title={GANs beyond divergence minimization},
  author={Alexia Jolicoeur-Martineau},
  journal={ArXiv},
  year={2018},
  volume={abs/1809.02145}
}
Generative adversarial networks (GANs) can be interpreted as an adversarial game between two players, a discriminator D and a generator G, in which D learns to classify real from fake data and G learns to generate realistic data by "fooling" D into thinking that fake data is actually real. Currently, the dominant view is that G learns by minimizing a divergence, given that the general objective function is a divergence when D is optimal. However, this view has been challenged due…
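
For context, the divergence-minimization view referenced in the abstract comes from the original GAN formulation (Goodfellow et al., 2014), where the value function reduces to a Jensen-Shannon divergence under an optimal discriminator. A brief sketch of that standard result, written in the usual notation (p_data, p_g, p_z are not drawn from this abstract itself), is:

\[
\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],
\]
where, for a fixed G, the optimal discriminator is
\[
D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)},
\]
so that
\[
V(D^*, G) = 2\,\mathrm{JSD}\big(p_{\mathrm{data}} \,\|\, p_g\big) - 2\log 2 .
\]

Under this reading, training G amounts to minimizing the Jensen-Shannon divergence between the data distribution and the generator distribution; the paper argues that GAN training in practice need not conform to this interpretation.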
