Corpus ID: 245837785

Optimal 1-Wasserstein Distance for WGANs

@article{Stephanovitch2022Optimal1D,
  title={Optimal 1-Wasserstein Distance for WGANs},
  author={Arthur St{\'e}phanovitch and Ugo Tanielian and Beno{\^i}t Cadre and N. Klutchnikoff and G{\'e}rard Biau},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.02824}
}
The mathematical forces at work behind Generative Adversarial Networks raise challenging theoretical issues. Motivated by the important question of characterizing the geometrical properties of the generated distributions, we provide a thorough analysis of Wasserstein GANs (WGANs) in both the finite sample and asymptotic regimes. We study the specific case where the latent space is univariate and derive results valid regardless of the dimension of the output space. We show in particular that for… 
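
As background, here is a sketch in our own notation (not taken verbatim from the paper) of the quantity at stake: the 1-Wasserstein distance in its Kantorovich-Rubinstein dual form, and the WGAN problem read as minimizing this distance between the target distribution and the pushforward of a latent distribution through the generator. The symbols γ, Θ, and G_θ are notation we introduce for illustration.

```latex
% Kantorovich--Rubinstein duality for the 1-Wasserstein distance
W_1(\mu, \nu) \;=\; \sup_{\|f\|_{\mathrm{Lip}} \le 1}
  \Big( \mathbb{E}_{X \sim \mu}[f(X)] - \mathbb{E}_{Y \sim \nu}[f(Y)] \Big)

% WGAN-type estimation problem: fit a generator G_\theta pushing a latent
% distribution \gamma forward to the output space (our notation)
\inf_{\theta \in \Theta} \; W_1\!\big(\mu,\; (G_\theta)_{\#}\gamma\big)
```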


References

Showing 1-10 of 35 references

Some Theoretical Insights into Wasserstein GANs

TLDR
The architecture of WGANs is formalized in the context of integral probability metrics parameterized by neural networks, some of their basic mathematical features are highlighted, and optimization properties arising from the use of a parametric 1-Lipschitz discriminator are stressed.
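
For reference, a hedged sketch of the objects named in this summary (notation is ours): the integral probability metric induced by a class of neural-network discriminators, which lower-bounds the true 1-Wasserstein distance whenever every discriminator in the class is 1-Lipschitz.

```latex
% IPM induced by a discriminator class \mathcal{D} (e.g., 1-Lipschitz neural networks)
d_{\mathcal{D}}(\mu, \nu) \;=\; \sup_{D \in \mathcal{D}}
  \Big( \mathbb{E}_{X \sim \mu}[D(X)] - \mathbb{E}_{Y \sim \nu}[D(Y)] \Big),
\qquad
d_{\mathcal{D}}(\mu, \nu) \;\le\; W_1(\mu, \nu)
\ \text{ if every } D \in \mathcal{D} \text{ is 1-Lipschitz}
```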

On How Well Generative Adversarial Networks Learn Densities: Nonparametric and Parametric Results

TLDR
The rate of convergence for learning distributions with the adversarial framework of Generative Adversarial Networks (GANs), which subsumes Wasserstein, Sobolev, and MMD GANs as special cases, is studied.

Generalization Properties of Optimal Transport GANs with Latent Distribution Learning

TLDR
This work studies how the interplay between the latent distribution and the complexity of the pushforward map (generator) affects performance, from both statistical and modelling perspectives, and proves that learning the latent distribution can lead to significant advantages in terms of sample complexity.
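
A hedged sketch of the model class this summary refers to, in our own notation (γ_λ denotes a parametric latent distribution, and d an optimal transport discrepancy; none of these symbols are taken from the cited paper):

```latex
% Model class when the latent distribution is learned jointly with the generator
\mathcal{P} \;=\; \big\{ (G_\theta)_{\#}\gamma_\lambda \;:\; \theta \in \Theta,\ \lambda \in \Lambda \big\},
\qquad
\inf_{\theta \in \Theta,\ \lambda \in \Lambda} \; d\big(\mu,\; (G_\theta)_{\#}\gamma_\lambda\big)
```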

Nonparametric Density Estimation & Convergence Rates for GANs under Besov IPM Losses

TLDR
It is shown how the results imply bounds on the statistical error of a GAN, and, for example, that in many cases GANs can strictly outperform the best linear estimator.

Improved Training of Wasserstein GANs

TLDR
This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
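
A minimal PyTorch sketch of that kind of gradient penalty, assuming a scalar-valued critic and flat (batch, features) inputs; the names gradient_penalty, critic, and lambda_gp are illustrative, not taken from the cited paper's code.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize the critic's gradient norm at points interpolated between
    real and generated samples (sketch of a WGAN-GP-style penalty).

    Assumes `real` and `fake` have shape (batch, features) and that
    `critic` returns one scalar score per sample."""
    batch_size = real.size(0)
    # One random interpolation coefficient per sample
    eps = torch.rand(batch_size, 1, device=real.device).expand_as(real)
    interpolated = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

    scores = critic(interpolated)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]
    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
    # Two-sided penalty pushing the gradient norm toward 1
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

if __name__ == "__main__":
    # Toy usage on 2-D data with a small fully connected critic
    critic = torch.nn.Sequential(
        torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
    )
    real, fake = torch.randn(8, 2), torch.randn(8, 2)
    print(gradient_penalty(critic, real, fake))
```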

Nonparametric Density Estimation under Adversarial Losses

TLDR
This work studies minimax convergence rates of nonparametric density estimation under a large class of loss functions called "adversarial losses", which includes maximum mean discrepancy, Wasserstein distance, and total variation distance.

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
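
The two-model setup is usually summarized by the minimax value function below, sketched here in standard notation for convenience:

```latex
\min_{G} \max_{D} \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
\;+\;
\mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big]
```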

Which Training Methods for GANs do actually Converge?

TLDR
This paper describes a simple yet prototypical counterexample showing that, in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. It extends convergence results to more general GANs and proves local convergence for simplified gradient penalties even if the generator and data distributions lie on lower-dimensional manifolds.
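
The simplified gradient penalty referred to here is commonly written as a penalty on the discriminator's gradient at real data points only; a hedged sketch in our own notation, with λ an assumed penalty weight:

```latex
% R1-type regularizer: penalize the discriminator's gradient norm on real samples
R_1(\psi) \;=\; \frac{\lambda}{2}\,
\mathbb{E}_{x \sim p_{\mathrm{data}}}\Big[ \big\| \nabla_{x} D_{\psi}(x) \big\|^{2} \Big]
```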

Wasserstein Generative Adversarial Networks

TLDR
This work introduces a new algorithm named WGAN, an alternative to traditional GAN training that can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches.
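
For contrast with the gradient-penalty variant sketched earlier, here is a minimal PyTorch sketch of one critic update in the original WGAN recipe, with weight clipping to keep the critic roughly Lipschitz; the names critic_step and clip_value, and the generic optimizer, are illustrative assumptions rather than the paper's own code.

```python
import torch

def critic_step(critic, optimizer, real, fake, clip_value=0.01):
    """One critic update of the original WGAN recipe (sketch): ascend
    E[critic(real)] - E[critic(fake)], then clip the weights so the critic
    stays in a compact set (a crude way to enforce a Lipschitz constraint).

    `fake` is assumed to be detached from the generator's graph."""
    optimizer.zero_grad()
    loss = -(critic(real).mean() - critic(fake).mean())  # negate for gradient descent
    loss.backward()
    optimizer.step()
    # Weight clipping, as in the original WGAN algorithm
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-clip_value, clip_value)
    return -loss.item()  # current surrogate for the 1-Wasserstein distance
```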

On Convergence and Stability of GANs

TLDR
This work proposes studying GAN training dynamics as regret minimization, in contrast to the popular view that training consistently minimizes a divergence between the real and generated distributions. It shows that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.