Corpus ID: 57759375

On Relativistic f-Divergences

@article{JolicoeurMartineau2020OnRF,
  title={On Relativistic f-Divergences},
  author={Alexia Jolicoeur-Martineau},
  journal={ArXiv},
  year={2020},
  volume={abs/1901.02474}
}
This paper provides a more rigorous look at Relativistic Generative Adversarial Networks (RGANs). We prove that the objective function of the discriminator is a statistical divergence for any concave function $f$ with minimal properties ($f(0)=0$, $f'(0) \neq 0$, $\sup_x f(x)>0$). We also devise a few variants of relativistic $f$-divergences. Wasserstein GAN was originally justified by the idea that the Wasserstein distance (WD) is most sensible because it is weak (i.e., it induces a weak…
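For orientation, the divergence studied here has, up to the paper's variants, roughly the following shape; this is a sketch inferred from the abstract and the original RGAN objective, not a verbatim reproduction of the paper's definition:

$$D_f(\mathbb{P}, \mathbb{Q}) \;=\; \sup_{C} \; \mathbb{E}_{x \sim \mathbb{P},\, y \sim \mathbb{Q}}\big[\, f\big(C(x) - C(y)\big) \,\big],$$

where $C$ is the critic. When $\mathbb{P} = \mathbb{Q}$, concavity of $f$ and Jensen's inequality give $\mathbb{E}\big[f(C(x) - C(y))\big] \le f\big(\mathbb{E}[C(x) - C(y)]\big) = f(0) = 0$, with equality for a constant critic, so the supremum is zero; the conditions $f'(0) \neq 0$ and $\sup_x f(x) > 0$ are what allow a strictly positive value when the two distributions differ.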
Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs
TLDR
The concept of maximum-margin classifiers (MMCs) is generalized to arbitrary norms and non-linear functions, and it is hypothesized and confirmed experimentally that $L^\infty$-norm penalties with hinge loss produce better GANs than $L^2$-norm penalties (based on common evaluation metrics).
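As a rough illustration of the ingredients named above, the following PyTorch-style sketch combines a hinge discriminator loss with a gradient penalty measured in the $L^\infty$ norm; the function name, the interpolation scheme, and the coefficient lambda_gp are illustrative assumptions rather than the paper's exact recipe:

    import torch

    def critic_hinge_loss_with_linf_gp(critic, x_real, x_fake, lambda_gp=10.0):
        # Hinge loss: push real scores above +1 and fake scores below -1.
        loss = torch.relu(1.0 - critic(x_real)).mean() + torch.relu(1.0 + critic(x_fake)).mean()
        # Gradient penalty on random interpolations between real and fake samples.
        alpha = torch.rand(x_real.size(0), *([1] * (x_real.dim() - 1)), device=x_real.device)
        x_hat = (alpha * x_real + (1.0 - alpha) * x_fake).detach().requires_grad_(True)
        grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
        # Per-sample L-infinity norm of the critic gradient (the "L^inf penalty" of the summary).
        grad_norm = grad.flatten(1).abs().amax(dim=1)
        penalty = ((grad_norm - 1.0) ** 2).mean()
        return loss + lambda_gp * penalty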
Towards a Better Global Loss Landscape of GANs
TLDR
A global landscape analysis of the empirical loss of GANs is performed, and it is proved that a class of separable GANs, including the original JS-GAN, has exponentially many bad basins, which are perceived as mode collapse.
The Benefits of Pairwise Discriminators for Adversarial Training
TLDR
This paper introduces a family of objectives by leveraging pairwise discriminators and shows that only the generator needs to converge; the alignment, once achieved, is preserved with any discriminator.
CRD-CGAN: Category-Consistent and Relativistic Constraints for Diverse Text-to-Image Generation
  • Tao Hu, Chengjiang Long, Chunxia Xiao
  • Computer Science
  • ArXiv
  • 2021
TLDR
A category-consistent and relativistic diverse conditional GAN is proposed to synthesize K photo-realistic images simultaneously; extensive experiments demonstrate the superiority of the proposed method over state-of-the-art methods in terms of the photorealism and diversity of the generated synthetic images.
Class Balancing GAN with a Classifier in the Loop
TLDR
This work introduces a novel theoretically motivated Class Balancing regularizer for training GANs that makes use of the knowledge from a pretrained classifier to ensure balanced learning of all the classes in the dataset.
Implicit Pairs for Boosting Unpaired Image-to-Image Translation
TLDR
It is shown that injecting implicit pairs into unpaired sets strengthens the mapping between the two domains, improves the compatibility of their distributions, and boosts the performance of unsupervised techniques by over 14% across several measurements.
Self-supervised GANs with Label Augmentation
TLDR
A novel self-supervised GAN framework with label augmentation is proposed, i.e., augmenting the GAN labels (real or fake) with self-supervised pseudo-labels; it significantly outperforms competitive baselines on both generative modeling and representation learning across benchmark datasets.
MidiPGAN: A Progressive GAN Approach to MIDI Generation
TLDR
This work applies the progressive-growing approach to GANs to training on symbolic music data and proposes a new way of downsampling fit for musical data.
Generative Adversarial Networks for Image and Video Synthesis: Algorithms and Applications
TLDR
An overview of GANs is provided, with a special focus on algorithms and applications for visual synthesis; several important techniques for stabilizing GAN training, which is notoriously difficult, are also covered.
Adversarial score matching and improved sampling for image generation
TLDR
This work proposes two improvements to DSM-ALS: 1) Consistent Annealed Sampling as a more stable alternative to Annealed Langevin Sampling, and 2) a hybrid training formulation, composed of both Denoising Score Matching and adversarial objectives.
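For context, the baseline that both improvements modify is annealed Langevin sampling, whose update at noise level $\sigma_i$ is, in its standard form (the paper's consistent-sampling variant modifies this schedule):

$$x_{t+1} = x_t + \frac{\alpha_i}{2}\, s_\theta(x_t, \sigma_i) + \sqrt{\alpha_i}\, z_t, \qquad z_t \sim \mathcal{N}(0, I),$$

where $s_\theta$ is the learned score network and $\alpha_i$ the step size associated with that noise level.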

References

Showing 1-10 of 25 references
The relativistic discriminator: a key element missing from standard GAN
TLDR
It is shown that RGANs and RaGANs are significantly more stable and generate higher-quality data samples than their non-relativistic counterparts, and that standard RaGAN with gradient penalty generates data of better quality than WGAN-GP while requiring only a single discriminator update per generator update.
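For reference, the relativistic standard GAN (RSGAN) and relativistic average GAN (RaSGAN) discriminator losses from that paper can be written, up to implementation details such as the averaging of the two terms, roughly as follows in PyTorch; critic returns the unbounded score $C(x)$ and the helper names are illustrative:

    import torch
    import torch.nn.functional as F

    def rsgan_d_loss(critic, x_real, x_fake):
        # RSGAN: classify whether the real sample is more realistic than the fake one.
        diff = critic(x_real) - critic(x_fake)
        return F.binary_cross_entropy_with_logits(diff, torch.ones_like(diff))

    def rasgan_d_loss(critic, x_real, x_fake):
        # RaSGAN: compare each sample against the average score of the opposite batch.
        c_r, c_f = critic(x_real), critic(x_fake)
        loss_real = F.binary_cross_entropy_with_logits(c_r - c_f.mean(), torch.ones_like(c_r))
        loss_fake = F.binary_cross_entropy_with_logits(c_f - c_r.mean(), torch.zeros_like(c_f))
        return (loss_real + loss_fake) / 2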
How Well Do WGANs Estimate the Wasserstein Metric?
TLDR
This work studies how well the methods used in generative adversarial networks to approximate the Wasserstein metric actually perform, and considers in particular the $c$-transform formulation, which eliminates the need to enforce the constraints explicitly.
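The $c$-transform referred to comes from the semi-dual form of optimal transport: for a cost $c$ and potential $\varphi$,

$$\varphi^{c}(y) = \inf_{x}\big[c(x, y) - \varphi(x)\big], \qquad W_c(\mathbb{P}, \mathbb{Q}) = \sup_{\varphi}\; \mathbb{E}_{x \sim \mathbb{P}}\big[\varphi(x)\big] + \mathbb{E}_{y \sim \mathbb{Q}}\big[\varphi^{c}(y)\big],$$

so the dual constraint $\varphi(x) + \psi(y) \le c(x, y)$ is absorbed into the transform instead of being enforced through a Lipschitz constraint or penalty; this is the standard formulation, and the paper's exact estimator may differ in its details.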
GANs beyond divergence minimization
TLDR
This paper discusses the properties associated with most loss functions for G (e.g., saturating/non-saturating f-GAN, LSGAN, WGAN, etc.) and shows that these loss functions are not divergences and do not have the same equilibrium as expected of divergences.
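The saturating vs. non-saturating distinction mentioned here is, for the original GAN, the choice between

$$L_G^{\text{sat}} = \mathbb{E}_{z}\big[\log\big(1 - D(G(z))\big)\big] \qquad \text{and} \qquad L_G^{\text{non-sat}} = -\mathbb{E}_{z}\big[\log D(G(z))\big],$$

both minimized by the generator; they share the same fixed point but behave very differently during training.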
Adam: A Method for Stochastic Optimization
TLDR
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
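A minimal NumPy sketch of the Adam update described above (the default hyperparameters follow the paper; the function signature and variable names are illustrative):

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        # Update the biased first- and second-moment estimates of the gradient.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        # Bias-corrected estimates (t is the 1-based step count).
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        # Parameter update.
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v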
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…
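The adversarial game referred to is the original minimax objective

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],$$

in which the discriminator $D$ is trained to distinguish data samples from samples produced by the generator $G$.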
Least Squares Generative Adversarial Networks
TLDR
This paper proposes the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least-squares loss function for the discriminator, and shows that minimizing the objective function of LSGAN yields minimizing the Pearson $\chi^2$ divergence.
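With the common 0/1 label coding, the LSGAN losses take the form

$$L_D = \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\text{data}}}\big[(D(x) - 1)^2\big] + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[D(G(z))^2\big], \qquad L_G = \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[(D(G(z)) - 1)^2\big];$$

the Pearson $\chi^2$ connection in the paper is derived for a particular choice of the label constants, so this is only one of the codings discussed there.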
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
TLDR
It is shown that any f-divergence can be used for training generative neural samplers, and the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models are discussed.
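For comparison with the relativistic $f$-divergences above, the (non-relativistic) $f$-GAN objective estimates an $f$-divergence through its variational lower bound

$$D_f(\mathbb{P} \,\|\, \mathbb{Q}) \;\ge\; \sup_{T}\; \mathbb{E}_{x \sim \mathbb{P}}\big[T(x)\big] - \mathbb{E}_{y \sim \mathbb{Q}}\big[f^{*}\big(T(y)\big)\big],$$

where $f^{*}$ is the convex conjugate of $f$ and $T$ is the variational (discriminator) function; note that $f$ here is the convex generator of the divergence, a different convention from the concave $f$ used in the main paper.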
A Class of Statistics with Asymptotically Normal Distribution
Let $X_1, \ldots, X_n$ be $n$ independent random vectors, $X_\nu = (X_\nu^{(1)}, \ldots, X_\nu^{(r)})$, and $\Phi(x_1, \ldots, x_m)$ a function of $m (\le n)$ vectors. A statistic of the form $U = \frac{(n-m)!}{n!} \sum'' \Phi(X_{\alpha_1}, \ldots, X_{\alpha_m})$, where the sum $\sum''$ is extended over all permutations $(\alpha_1, \ldots, \alpha_m)$ of…
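A concrete instance of such a statistic: with $m = 2$ and kernel $\Phi(x_1, x_2) = \tfrac{1}{2}(x_1 - x_2)^2$, the resulting U-statistic is the unbiased sample variance,

$$U = \frac{1}{n(n-1)} \sum_{\alpha_1 \neq \alpha_2} \tfrac{1}{2}\big(X_{\alpha_1} - X_{\alpha_2}\big)^2 = \frac{1}{n-1} \sum_{i=1}^{n} \big(X_i - \bar{X}\big)^2.$$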
Learning Multiple Layers of Features from Tiny Images
TLDR
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
A Style-Based Generator Architecture for Generative Adversarial Networks
TLDR
An alternative generator architecture for generative adversarial networks is proposed, borrowing from style transfer literature, that improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.