Relaxed Wasserstein with Applications to GANs

@article{Guo2021RelaxedWW,
  title={Relaxed Wasserstein with Applications to GANs},
  author={Xin Guo and Johnny Hong and Tianyi Lin and Nan Yang},
  journal={ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2021},
  pages={3325-3329}
}
  • Xin Guo, Johnny Hong, Tianyi Lin, Nan Yang
  • Published 19 May 2017
  • Computer Science
  • ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Wasserstein Generative Adversarial Networks (WGANs) provide a versatile class of models that have attracted great attention in various applications. However, this framework has two main drawbacks: (i) the Wasserstein-1 (or Earth-Mover) distance is so restrictive that WGANs cannot always fit the data geometry well; (ii) it is difficult to achieve fast training of WGANs. In this paper, we propose a new class of Relaxed Wasserstein (RW) distances by generalizing the Wasserstein-1 distance with Bregman… 
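
The abstract is cut off mid-definition, but the construction it points to can be sketched under the standard definitions of the Bregman divergence and of optimal transport; the following is a minimal reading of that construction, not the paper's exact formulation:

```latex
% Bregman divergence of a strictly convex, differentiable \phi (assumed notation)
D_\phi(x, y) \;=\; \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y \rangle .

% Relaxed Wasserstein: replace the metric cost \lVert x - y \rVert of Wasserstein-1
% by the Bregman cost D_\phi, minimized over all couplings \pi of \mu and \nu
\mathrm{RW}_\phi(\mu, \nu) \;=\; \inf_{\pi \in \Pi(\mu, \nu)}
  \mathbb{E}_{(x, y) \sim \pi}\big[ D_\phi(x, y) \big].

% Example: \phi(x) = \tfrac{1}{2}\lVert x \rVert^2 gives the squared-Euclidean cost.
```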

Citations

A Two-Step Computation of the Exact GAN Wasserstein Distance
TLDR
This approach optimizes the exact Wasserstein distance, obviating the need for weight clipping previously used in WGANs, and theoretically proves that the proposed formulation is equivalent to the discrete Monge-Kantorovich dual formulation.
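
The two-step procedure itself is not reproduced here; as a generic illustration of the exact discrete optimal-transport problem the TLDR refers to, the primal Monge-Kantorovich linear program (whose optimum equals that of the dual formulation mentioned above) can be solved directly. The helper below is an assumed sketch, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import linprog

def exact_wasserstein(a, b, C):
    """Exact optimal-transport cost between discrete distributions a (length n)
    and b (length m) under ground-cost matrix C (n x m), solved as a linear program."""
    n, m = C.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0   # row sums of the coupling equal a[i]
    for j in range(m):
        A_eq[n + j, j::m] = 1.0            # column sums of the coupling equal b[j]
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun

# Usage: Wasserstein-1 between two small discrete distributions on the real line.
x, y = np.array([0.0, 1.0, 2.0]), np.array([0.5, 1.5])
a, b = np.full(3, 1 / 3), np.full(2, 1 / 2)
print(exact_wasserstein(a, b, np.abs(x[:, None] - y[None, :])))
```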
Wasserstein GAN With Quadratic Transport Cost
TLDR
Qualitative and quantitative results on the CelebA, CelebA-HQ, LSUN and the ImageNet dog datasets show that WGAN-QC is better than state-of-the-art GAN methods and has a much faster runtime than other WGAN variants.
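
Only to fix notation for the entry above: the quadratic transport cost in the title replaces the Wasserstein-1 ground cost with a squared-Euclidean one,

```latex
c_{W_1}(x, y) = \lVert x - y \rVert
\qquad \longrightarrow \qquad
c_{\mathrm{QC}}(x, y) = \lVert x - y \rVert^{2},
```

i.e. the cost underlying the (squared) Wasserstein-2 distance, up to the usual normalization convention.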
How Generative Adversarial Networks and Their Variants Work
TLDR
This paper explains how GANs operate and the fundamental meaning of the various objective functions that have been suggested recently, with a focus on how the GAN can be combined with an autoencoder framework.
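
For orientation, the baseline objective that these variants modify is the original minimax GAN game; this is the standard textbook form rather than anything specific to the survey above:

```latex
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],
```

with the generator in practice often trained on the non-saturating surrogate \(-\mathbb{E}_{z}[\log D(G(z))]\).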
Understanding GANs: the LQG Setting
TLDR
This paper proposes a natural way of specifying the loss function for GANs by drawing a connection with supervised learning and sheds light on the statistical performance of GANs through the analysis of a simple LQG setting: the generator is linear, the loss function is quadratic and the data is drawn from a Gaussian distribution.
Understanding GANs in the LQG Setting: Formulation, Generalization and Stability
TLDR
This paper provides an understanding of basic issues surrounding GANs including their formulation, generalization and stability on a simple LQG benchmark where the generator is Linear, the discriminator is Quadratic and the data has a high-dimensional Gaussian distribution.
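
A hedged illustration of the LQG setting behind the two entries above: the data are Gaussian, the generator is a linear map of Gaussian noise, and the loss is quadratic (Wasserstein-2); under these assumptions the cited papers relate the optimal linear generator to a PCA of the data covariance. The notation below is assumed, and the precise statements and conditions are in the papers:

```latex
x \sim \mathcal{N}(0, \Sigma) \in \mathbb{R}^d, \qquad
\hat{x} = G z, \quad z \sim \mathcal{N}(0, I_k), \quad k < d,
\qquad
\min_{G \in \mathbb{R}^{d \times k}} \;
  W_2^2\big(\mathcal{N}(0, \Sigma),\, \mathcal{N}(0, G G^\top)\big).
```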
Generative Adversarial Networks (GANs)
TLDR
This study performs a comprehensive survey of the advancements in GAN design and optimization solutions, proposes a new taxonomy to structure solutions by key research issues, and presents promising research directions in this rapidly growing field.
Connecting GANs, mean-field games, and optimal transport
TLDR
This paper analyzes GANs from the perspectives of mean-field games (MFGs) and optimal transport and shows superior performance of this proposed algorithm, especially in the higher dimensional case, when compared with existing neural network approaches.
Connecting GANs and MFGs
TLDR
Interpreting MFGs as GANs, on the other hand, provides a new and probabilistic foundation for GANs, and helps establish an analytical connection between GANs and Optimal Transport (OT) problems.
Challenges and Corresponding Solutions of Generative Adversarial Networks (GANs): A Survey Study
TLDR
A comprehensive investigation of the progress of GAN design and optimization solutions is conducted, and according to the classification method, a problem-solving structure is provided to conquer the GAN training challenges.
Normalized Wasserstein Distance for Mixture Distributions with Applications in Adversarial Learning and Domain Adaptation
TLDR
The key idea is to introduce mixture proportions as optimization variables, effectively normalizing mixture proportions in the Wasserstein formulation, and this measure is demonstrated to be effective in GANs, domain adaptation and adversarial clustering in several benchmark datasets.
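
A hedged sketch of the idea in the TLDR: components are shared while the mixture proportions become optimization variables, so two mixtures that differ only in their proportions incur no penalty; the exact definition and constraints are in the cited paper:

```latex
P_{G, \pi} \;=\; \sum_{k} \pi_k \, P_{G_k}, \qquad \pi \in \Delta ,
\\[4pt]
\mathrm{NW}(P_X, P_Y) \;=\;
  \min_{G,\; \pi^{(1)},\; \pi^{(2)}}
  W\big(P_{G, \pi^{(1)}}, P_X\big) + W\big(P_{G, \pi^{(2)}}, P_Y\big).
```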

References

SHOWING 1-10 OF 90 REFERENCES
Improved Training of Wasserstein GANs
TLDR
This work proposes an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
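
A minimal PyTorch-style sketch of the gradient penalty summarized above, penalizing the critic's gradient norm toward 1 on random interpolates between real and fake samples; function and variable names are illustrative and not taken from the paper's code:

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """WGAN-GP style penalty: lam * E[(||grad_xhat critic(xhat)||_2 - 1)^2]."""
    # Random interpolation coefficients, broadcast over all non-batch dimensions.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    xhat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(xhat)
    grads = torch.autograd.grad(outputs=scores, inputs=xhat,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()

# Typical critic loss (sketch): fake_scores.mean() - real_scores.mean()
#                               + gradient_penalty(critic, real_batch, fake_batch)
```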
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
TLDR
This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the "Fréchet Inception Distance" (FID) which captures the similarity of generated images to real ones better than the Inception Score.
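
The Fréchet Inception Distance mentioned in the TLDR is the Fréchet (2-Wasserstein) distance between Gaussian fits of Inception features of real and generated images; its standard closed form is

```latex
\mathrm{FID} \;=\; \lVert \mu_r - \mu_g \rVert_2^2
  \;+\; \operatorname{Tr}\!\big( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \big),
```

where \((\mu_r, \Sigma_r)\) and \((\mu_g, \Sigma_g)\) are the feature means and covariances on real and generated data.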
Least Squares Generative Adversarial Networks
TLDR
This paper proposes the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss function for the discriminator, and shows that minimizing the LSGAN objective amounts to minimizing the Pearson χ² divergence.
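
The least-squares objective referred to above, written in its generic coded form with targets b (real), a (fake) and c (generator); the Pearson χ² interpretation in the TLDR holds for particular choices of these constants, as detailed in the paper:

```latex
\min_D \;\; \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[(D(x) - b)^2\big]
        + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[(D(G(z)) - a)^2\big],
\qquad
\min_G \;\; \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[(D(G(z)) - c)^2\big].
```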
Wasserstein Generative Adversarial Networks
TLDR
This work introduces a new algorithm named WGAN, an alternative to traditional GAN training that can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches.
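
A minimal PyTorch-style sketch of one critic update in the original WGAN described above (Wasserstein-1 surrogate plus weight clipping); names and hyperparameters are illustrative:

```python
import torch

def critic_step(critic, opt_c, real, fake, clip=0.01):
    """One WGAN critic update: minimize E[D(fake)] - E[D(real)], then clip weights."""
    opt_c.zero_grad()
    loss = critic(fake.detach()).mean() - critic(real).mean()
    loss.backward()
    opt_c.step()
    # Weight clipping constrains the critic to a compact parameter set, the original
    # surrogate for the 1-Lipschitz constraint (later replaced by the gradient penalty above).
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-clip, clip)
    return loss.item()
```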
How to Train Your DRAGAN
TLDR
This paper introduces regret minimization as a technique to reach equilibrium in games and uses this to justify the success of simultaneous GD in GANs and develops an algorithm called DRAGAN that is fast, simple to implement and achieves competitive performance in a stable fashion.
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
BEGAN: Boundary Equilibrium Generative Adversarial Networks
TLDR
This work proposes a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks, which provides a new approximate convergence measure, fast and stable training and high visual quality.
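
A hedged sketch of the equilibrium-enforcing mechanism summarized above, with the discriminator D an autoencoder and L(v) its reconstruction loss; the notation follows common expositions of BEGAN and may differ slightly from the paper:

```latex
\mathcal{L}_D = L(x) - k_t\, L(G(z)), \qquad
\mathcal{L}_G = L(G(z)), \qquad
k_{t+1} = k_t + \lambda_k \big( \gamma\, L(x) - L(G(z)) \big),
\\[4pt]
\mathcal{M} \;=\; L(x) + \big|\, \gamma\, L(x) - L(G(z)) \,\big| ,
```

where γ trades off diversity against quality and M is the approximate convergence measure mentioned in the TLDR.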
Generative Adversarial Networks: An Overview
TLDR
The aim of this review article is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible, and point to remaining challenges in their theory and application.
Training GANs with Optimism
TLDR
This work addresses the issue of limit cycling behavior in training Generative Adversarial Networks, proposes the use of Optimistic Mirror Descent (OMD) for training Wasserstein GANs, and introduces a new algorithm, Optimistic Adam, which is an optimistic variant of Adam.
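
A hedged sketch of the optimistic update the TLDR refers to: the previous gradient serves as a prediction of the next one, which dampens the limit cycles of plain simultaneous gradient steps; the paper's Optimistic Adam applies the same correction on top of Adam's moment estimates:

```latex
\theta_{t+1} \;=\; \theta_t - \eta\, g_t - \eta\,(g_t - g_{t-1})
            \;=\; \theta_t - 2\eta\, g_t + \eta\, g_{t-1},
\qquad g_t = \nabla_\theta\, \ell(\theta_t).
```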
Energy-based Generative Adversarial Network
We introduce the "Energy-based Generative Adversarial Network" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and…
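
A hedged sketch of the energy-based losses this truncated abstract begins to describe, with D(·) the energy assigned by the discriminator and m > 0 a margin; taken from common statements of the EBGAN objective rather than from the text above:

```latex
\mathcal{L}_D(x, z) \;=\; D(x) + \big[\, m - D(G(z)) \,\big]^{+},
\qquad
\mathcal{L}_G(z) \;=\; D(G(z)).
```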