Corpus ID: 52176819

GANs beyond divergence minimization

@article{JolicoeurMartineau2018GANsBD,
  title={GANs beyond divergence minimization},
  author={A. Jolicoeur-Martineau},
  journal={ArXiv},
  year={2018},
  volume={abs/1809.02145}
}
Generative adversarial networks (GANs) can be interpreted as an adversarial game between two players, a discriminator D and a generator G, in which D learns to classify real from fake data and G learns to generate realistic data by "fooling" D into thinking that fake data is actually real data. The currently dominant view is that G learns by minimizing a divergence, since the overall objective function is a divergence when D is optimal. However, this view has been challenged due…
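
For context, the divergence-minimization view mentioned above rests on a standard result from the original GAN formulation (notation mine):

```latex
% GAN value function; for a fixed G, the optimal discriminator is
% D^*(x) = p_data(x) / (p_data(x) + p_g(x)).
V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)]
        + \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]
% Plugging D^* back in yields a divergence in G, which is what the
% "G minimizes a divergence" view relies on:
V(D^*, G) = -\log 4 + 2\,\mathrm{JSD}\!\left(p_{\mathrm{data}} \,\Vert\, p_g\right)
```
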
Citations

The relativistic discriminator: a key element missing from standard GAN
TLDR
It is shown that RGANs and RaGANs are significantly more stable and generate higher quality data samples than their non-relativistic counterparts, and that standard RaGAN with gradient penalty generates better-quality data than WGAN-GP while requiring only a single discriminator update per generator update.
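
A minimal sketch of the relativistic average (RaGAN) losses described here, assuming PyTorch and a critic C that outputs raw pre-sigmoid scores (my own code, not the paper's reference implementation):

```python
import torch
import torch.nn.functional as F

def ragan_losses(c_real: torch.Tensor, c_fake: torch.Tensor):
    """c_real, c_fake: critic outputs C(x) of shape (batch, 1)."""
    ones = torch.ones_like(c_real)
    zeros = torch.zeros_like(c_real)
    # Discriminator: real samples should score higher than the *average* fake,
    # and fake samples should score lower than the average real.
    d_loss = (F.binary_cross_entropy_with_logits(c_real - c_fake.mean(), ones)
              + F.binary_cross_entropy_with_logits(c_fake - c_real.mean(), zeros))
    # Generator: the symmetric objective, pushing fakes above the average real.
    g_loss = (F.binary_cross_entropy_with_logits(c_fake - c_real.mean(), ones)
              + F.binary_cross_entropy_with_logits(c_real - c_fake.mean(), zeros))
    return d_loss, g_loss
```
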
Stabilizing Generative Adversarial Networks: A Survey
TLDR
A comprehensive overview of the GAN training stabilization methods is provided, which discusses the advantages and disadvantages of each approach, offers a comparative summary, and concludes with a discussion of open problems.
Stabilizing Generative Adversarial Network Training: A Survey
TLDR
This survey summarizes the approaches and methods employed to stabilize the GAN training procedure, discusses the advantages and disadvantages of each method, and offers a comparative summary of the literature on stabilizing GAN training.
MSG-GAN: Multi-Scale Gradient GAN for Stable Image Synthesis
TLDR
This work proposes the Multi-Scale Gradient Generative Adversarial Network (MSG-GAN), a simple but effective technique for addressing training instability, which allows the flow of gradients from the discriminator to the generator at multiple scales.
MSG-GAN: Multi-Scale Gradients GAN for more stable and synchronized multi-scale image synthesis
TLDR
This work proposes the Multi-Scale Gradients Generative Adversarial Network (MSG-GAN), a simple but effective technique for addressing the problem of training instability by allowing the flow of gradients from the discriminator to the generator at multiple scales.
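
A toy sketch of the multi-scale-gradient idea described in the two MSG-GAN entries above, written in PyTorch with arbitrary layer sizes and resolutions (my own illustration, not the authors' architecture or code): the generator exposes an image at every intermediate resolution and the discriminator consumes all of them, so gradients reach the generator at multiple scales.

```python
import torch
import torch.nn as nn

class MSGGenerator(nn.Module):
    def __init__(self, z_dim=64, ch=32):
        super().__init__()
        self.ch = ch
        self.fc = nn.Linear(z_dim, ch * 4 * 4)
        self.up8 = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.up16 = nn.Sequential(nn.Upsample(scale_factor=2),
                                  nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.to_rgb = nn.ModuleList([nn.Conv2d(ch, 3, 1) for _ in range(3)])

    def forward(self, z):
        h4 = self.fc(z).view(-1, self.ch, 4, 4)
        h8 = self.up8(h4)
        h16 = self.up16(h8)
        # Emit an RGB image at every scale, not just the final resolution.
        return [rgb(h) for rgb, h in zip(self.to_rgb, (h4, h8, h16))]

class MSGDiscriminator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.from_rgb = nn.ModuleList([nn.Conv2d(3, ch, 1) for _ in range(3)])
        self.down16 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                    nn.ReLU(), nn.AvgPool2d(2))
        self.down8 = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1),
                                   nn.ReLU(), nn.AvgPool2d(2))
        self.head = nn.Linear(2 * ch * 4 * 4, 1)

    def forward(self, imgs):
        x4, x8, x16 = imgs  # images at 4x4, 8x8 and 16x16
        h = self.down16(self.from_rgb[2](x16))                        # 16x16 -> 8x8
        h = self.down8(torch.cat([h, self.from_rgb[1](x8)], dim=1))   # 8x8 -> 4x4
        h = torch.cat([h, self.from_rgb[0](x4)], dim=1)
        return self.head(h.flatten(1))                                # one score per sample

# Usage: score a batch of fakes at all scales.
# g, d = MSGGenerator(), MSGDiscriminator()
# fake_scores = d(g(torch.randn(8, 64)))
```
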
On Relativistic f-Divergences
TLDR
It is proved that the objective function of the discriminator is a statistical divergence for any concave function f with minimal properties, and it is suggested that WGAN does not perform well primarily because of its weak metric, but rather because of regularization and the use of a relativistic discriminator.
Solving inverse problems in stochastic models using deep neural networks and adversarial training
TLDR
This work uses the expressive power of neural networks to approximate the unknown distribution and uses a discriminative neural network to compute the statistical discrepancies between the observed and simulated random processes.
Adversarial Numerical Analysis for Inverse Problems
TLDR
This work introduces adversarial numerical analysis, which estimates the unknown distributions by minimizing the discrepancy of statistical properties between the observed and simulated random processes.

References

Showing 1–10 of 34 references
Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step
TLDR
It is demonstrated that GANs are able to learn distributions in situations where the divergence-minimization point of view predicts they would fail, contributing to a growing body of evidence that GAN training may be more usefully viewed as approaching Nash equilibria via trajectories that do not necessarily minimize a specific divergence at each step.
How to Train Your DRAGAN
TLDR
This paper introduces regret minimization as a technique to reach equilibrium in games, uses it to justify the success of simultaneous gradient descent in GANs, and develops an algorithm called DRAGAN that is fast, simple to implement, and achieves competitive performance in a stable fashion.
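
A minimal sketch of a DRAGAN-style penalty, assuming PyTorch and commonly cited settings (penalty weight 10, perturbation scaled by half the batch standard deviation); this is my own illustration, not the authors' code:

```python
import torch

def dragan_penalty(discriminator, real: torch.Tensor, lam: float = 10.0):
    # Perturb real samples with noise proportional to the batch's std (assumed scaling).
    noise = 0.5 * real.std() * torch.rand_like(real)
    x_p = (real.detach() + noise).requires_grad_(True)
    scores = discriminator(x_p)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=x_p, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    # Penalize deviation of the local gradient norm from 1 around the real data manifold.
    return lam * ((grad_norm - 1.0) ** 2).mean()
```
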
Least Squares Generative Adversarial Networks
TLDR
This paper proposes the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least-squares loss function for the discriminator, and shows that minimizing the LSGAN objective function amounts to minimizing the Pearson χ² divergence.
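
To make the loss concrete, here is the LSGAN objective as commonly stated (notation mine; the coding constants a, b, c for fake labels, real labels, and the value G wants D to assign to fakes are my recollection of the usual choices, e.g. a = 0, b = c = 1, with the Pearson χ² result holding for particular codings such as a = −1, b = 1, c = 0):

```latex
\min_D V(D) = \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[(D(x) - b)^2\right]
            + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[(D(G(z)) - a)^2\right],
\qquad
\min_G V(G) = \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[(D(G(z)) - c)^2\right]
```
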
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
TLDR
This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the "Fréchet Inception Distance" (FID), which captures the similarity of generated images to real ones better than the Inception Score.
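
For reference, TTUR simply runs the discriminator and generator updates with two different step sizes, and the FID introduced here compares Gaussians fitted to Inception features of real and generated data (notation mine):

```latex
% mu_r, Sigma_r and mu_g, Sigma_g are the mean and covariance of
% Inception features for real and generated samples, respectively.
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
             + \operatorname{Tr}\!\left(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\right)
```
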
Improved Training of Wasserstein GANs
TLDR
This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
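
A minimal sketch of the penalty described here, assuming PyTorch and the commonly used penalty weight of 10 (my own code, not the authors' reference implementation): the critic's gradient norm is penalized toward 1 at random interpolates of real and fake samples.

```python
import torch

def wgan_gp_penalty(critic, real: torch.Tensor, fake: torch.Tensor, lam: float = 10.0):
    # One interpolation coefficient per sample, broadcast over remaining dims.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real.detach() + (1.0 - eps) * fake.detach()).requires_grad_(True)
    scores = critic(x_hat)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    # Two-sided penalty pulling the gradient norm toward 1.
    return lam * ((grad_norm - 1.0) ** 2).mean()
```
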
MMD GAN: Towards Deeper Understanding of Moment Matching Network
TLDR
In evaluations on multiple benchmark datasets, including MNIST, CIFAR-10, CelebA and LSUN, MMD GAN significantly outperforms GMMN and is competitive with other representative GAN works.
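
For context, the squared maximum mean discrepancy (MMD) that MMD GAN builds on is shown below (notation mine); MMD GAN additionally learns the kernel's feature map adversarially, under constraints I do not reproduce here.

```latex
% Squared MMD between P and Q under kernel k; in MMD GAN, k(x, y) is composed
% with a learned feature map f, i.e. k(f(x), f(y)) (my hedged summary).
\mathrm{MMD}^2(P, Q; k) = \mathbb{E}_{x, x' \sim P}[k(x, x')]
  - 2\,\mathbb{E}_{x \sim P,\, y \sim Q}[k(x, y)]
  + \mathbb{E}_{y, y' \sim Q}[k(y, y')]
```
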
Fisher GAN
TLDR
Fisher GAN is introduced, which fits within the Integral Probability Metrics (IPM) framework for training GANs and allows for stable and time-efficient training that does not compromise the capacity of the critic and does not need data-independent constraints such as weight clipping.
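
As a hedged recollection of the Fisher IPM idea described here (not quoted from the paper): the critic maximizes the mean-score gap subject to a data-dependent second-moment constraint, in practice enforced with an augmented Lagrangian rather than weight clipping.

```latex
\sup_{f}\;\; \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{y \sim Q}[f(y)]
\quad \text{s.t.} \quad
\tfrac{1}{2}\left(\mathbb{E}_{x \sim P}[f(x)^2] + \mathbb{E}_{y \sim Q}[f(y)^2]\right) = 1
```
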
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…
Are GANs Created Equal? A Large-Scale Study
TLDR
A neutral, multi-faceted, large-scale empirical study on state-of-the-art models and evaluation measures finds that most models can reach similar scores with enough hyperparameter optimization and random restarts, suggesting that improvements can arise more from a higher computational budget and tuning than from fundamental algorithmic changes.
On Convergence and Stability of GANs
TLDR
This work proposes studying GAN training dynamics as regret minimization, in contrast to the popular view that there is consistent minimization of a divergence between the real and generated distributions. It shows that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.