Some Theoretical Properties of GANs

@article{Biau2018SomeTP,
  title={Some Theoretical Properties of GANs},
  author={G{\'e}rard Biau and Beno{\^i}t Cadre and Maxime Sangnier and Ugo Tanielian},
  journal={ArXiv},
  year={2018},
  volume={abs/1803.07819}
}
Generative Adversarial Networks (GANs) are a class of generative algorithms that have been shown to produce state-of-the-art samples, especially in the domain of image creation. The fundamental principle of GANs is to approximate the unknown distribution of a given data set by optimizing an objective function through an adversarial game between a family of generators and a family of discriminators. In this paper, we offer a better theoretical understanding of GANs by analyzing some of their…
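For reference, the adversarial game mentioned in the abstract is the usual GAN minimax problem (standard notation, not reproduced from the paper): a generator G is played against a discriminator D,

\[
\min_{G} \max_{D} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big],
\]

where p_data is the data distribution and p_z the noise prior; the paper studies this game when G and D range over parametric families.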
The Inductive Bias of Restricted f-GANs
This work theoretically characterizes the distribution inferred by a simple form of generative adversarial learning called restricted f-GANs, where the discriminator is a function in a given function class, the distribution induced by the generator is restricted to lie in a pre-specified distribution class, and the objective is similar to a variational form of the f-divergence.
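The variational form of the f-divergence alluded to here is, up to notation, the Nguyen-Wainwright-Jordan representation: for a convex f with convex conjugate f*,

\[
D_f(P \,\|\, Q) \;=\; \sup_{T} \; \mathbb{E}_{x \sim P}\big[T(x)\big] - \mathbb{E}_{x \sim Q}\big[f^{*}\big(T(x)\big)\big],
\]

and restricting the supremum to a discriminator class $\mathcal{T}$ turns the equality into a lower bound, which is why the inferred distribution depends on that class.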
Rates of convergence for density estimation with generative adversarial networks
In this work we undertake a thorough study of the non-asymptotic properties of vanilla generative adversarial networks (GANs). We derive theoretical guarantees for the density…
Statistical Regeneration Guarantees of the Wasserstein Autoencoder with Latent Space Consistency
This paper provides statistical guarantees that the WAE achieves the target distribution in the latent space, using Vapnik–Chervonenkis (VC) theory, and hints at the class of distributions the WAE can reconstruct after compression into a latent law.
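As background (our summary of Tolstikhin et al.'s formulation, not this paper's), the WAE objective whose latent-space behavior is analyzed combines a reconstruction cost with a penalty on the divergence between the aggregated posterior $Q_Z$ and the prior $P_Z$:

\[
\min_{G,\,Q(Z \mid X)} \; \mathbb{E}_{P_X}\,\mathbb{E}_{Q(Z \mid X)}\big[c\big(X, G(Z)\big)\big] \;+\; \lambda\, \mathcal{D}_Z\big(Q_Z, P_Z\big).
\]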
Robustness of Conditional GANs to Noisy Labels
The main idea is to corrupt the label of each generated sample before feeding it to the adversarial discriminator, forcing the generator to produce samples with clean labels; the proposed approach is robust when used with a carefully chosen discriminator architecture.
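A minimal sketch of the corruption step described above, assuming the label noise is specified by a known confusion matrix C (the function name and setup are illustrative, not the authors' code):

import numpy as np

def corrupt_labels(y, C, rng):
    # C[i, j] = P(observed label j | true label i); y holds integer labels.
    return np.array([rng.choice(len(C), p=C[label]) for label in y])

rng = np.random.default_rng(0)
C = np.array([[0.9, 0.1],
              [0.2, 0.8]])                 # two-class noise model
y_clean = np.array([0, 1, 1, 0])           # labels of generated samples
y_noisy = corrupt_labels(y_clean, C, rng)  # what the discriminator sees

Because the real samples carry noisy labels too, corrupting the generated labels in the same way removes the mismatch the discriminator could otherwise exploit, which is what pushes the generator toward clean labels.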
MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining
This work develops a differential-geometry-based sampler, coined MaGNET, that, given any trained DGN, produces samples uniformly distributed on the learned manifold; it is proved theoretically and shown empirically that the technique yields a uniform distribution on the manifold regardless of the training set distribution.
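The differential-geometry idea can be paraphrased through the change-of-variables formula (our notation, not the paper's exact statement): if $G$ maps latent codes $z \sim p_z$ onto the manifold, the pushforward density is distorted by the local volume element, so uniform-on-manifold sampling amounts to reweighting latent draws by

\[
w(z) \;\propto\; \frac{\sqrt{\det\!\big(J_G(z)^{\top} J_G(z)\big)}}{p_z(z)}, \qquad J_G(z) = \frac{\partial G(z)}{\partial z}.
\]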
Time Series (re)sampling using Generative Adversarial Networks
It is found that temporal convolutional neural networks provide a suitable design for the generator and discriminator, and that convincing samples can be generated from a vector of i.i.d. normal noise.
Statistical guarantees for generative models without domination
In this paper, we introduce a convenient framework for studying (adversarial) generative models from a statistical perspective. It consists in modeling the generative device as a smooth…
Anomaly detection with Wasserstein GAN
A W-GAN with an encoder appears to produce state-of-the-art anomaly detection scores on the MNIST dataset; its use on multivariate time series is also investigated.
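One common recipe for such encoder-based scores, sketched here under our own assumptions rather than as this paper's exact score: reconstruct a point through the latent space and flag large residuals, optionally mixed with the Wasserstein critic's output (which is trained to be high on real data).

import numpy as np

def anomaly_score(x, G, E, critic=None, lam=0.1):
    # G: generator, E: encoder, critic: optional W-GAN critic (all callables).
    residual = np.linalg.norm(x - G(E(x)))           # reconstruction error
    if critic is None:
        return residual
    return (1.0 - lam) * residual - lam * critic(x)  # low critic score => anomalous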
Max-Affine Spline Insights into Deep Generative Networks
It is demonstrated that low-entropy and/or multimodal distributions are not naturally modeled by DGNs and are a cause of training instabilities.
Beyond $\mathcal{H}$-Divergence: Domain Adaptation Theory With Jensen-Shannon Divergence
A new theoretical framework is established by directly proving upper and lower target-risk bounds based on the joint distributional Jensen-Shannon divergence, which enables a generic guideline unifying the principles of semantic conditional matching, feature marginal matching, and label marginal shift correction.
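For completeness, the Jensen-Shannon divergence used in these bounds is the symmetrized, smoothed relative entropy

\[
\mathrm{JSD}(P \,\|\, Q) \;=\; \tfrac{1}{2}\,\mathrm{KL}\big(P \,\|\, M\big) + \tfrac{1}{2}\,\mathrm{KL}\big(Q \,\|\, M\big), \qquad M = \tfrac{1}{2}(P + Q),
\]

which stays finite even when $P$ and $Q$ have disjoint supports, unlike the KL divergence itself.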

References

Showing 1-10 of 21 references.
Optimizing the Latent Space of Generative Networks
This work introduces Generative Latent Optimization (GLO), a framework to train deep convolutional generators using simple reconstruction losses; GLO enjoys many of the desirable properties of GANs, synthesizing visually appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors, all without the adversarial optimization scheme.
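A minimal sketch of the GLO recipe, under our own simplifications (toy generator, plain SGD; not the authors' code): one learnable code per training image, optimized jointly with the generator weights under a reconstruction loss, with codes projected back onto the unit ball.

import torch

n, d = 1000, 32
x = torch.randn(n, 3 * 32 * 32)            # stand-in for flattened images
z = torch.randn(n, d, requires_grad=True)  # one learnable code per image
G = torch.nn.Sequential(torch.nn.Linear(d, 256), torch.nn.ReLU(),
                        torch.nn.Linear(256, 3 * 32 * 32))
opt = torch.optim.SGD([z] + list(G.parameters()), lr=0.1)

for step in range(100):
    opt.zero_grad()
    loss = ((G(z) - x) ** 2).mean()        # simple reconstruction loss
    loss.backward()
    opt.step()
    with torch.no_grad():                  # project codes onto the unit ball
        z /= z.norm(dim=1, keepdim=True).clamp(min=1.0)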
Approximation and Convergence Properties of Generative Adversarial Learning
It is shown that if the objective function is an adversarial divergence satisfying some additional conditions, then using a restricted discriminator family has a moment-matching effect, generalizing previous results.
Improved Techniques for Training GANs
This work focuses on two applications of GANs, semi-supervised learning and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Training generative neural networks via Maximum Mean Discrepancy optimization
This work considers training a deep neural network to generate samples from an unknown distribution given i.i.d. data, framing learning as the minimization of a two-sample test statistic, and proves bounds on the generalization error incurred by optimizing the empirical MMD.
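The two-sample test statistic in question is the maximum mean discrepancy; a minimal (biased) empirical estimator with a Gaussian kernel might look like this (illustrative code, not the paper's):

import numpy as np

def mmd2(x, y, sigma=1.0):
    # Biased empirical MMD^2 with Gaussian kernel k(a, b) = exp(-|a-b|^2 / (2 sigma^2)).
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 2))
fake = rng.normal(0.5, 1.0, size=(200, 2))
print(mmd2(real, fake))   # grows as the two samples separate

Training the generator then amounts to backpropagating through such an estimator, with the generated sample in place of fake.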
On the Discrimination-Generalization Tradeoff in GANs
This paper shows that a discriminator set is guaranteed to be discriminative whenever its linear span is dense in the set of bounded continuous functions, and develops generalization bounds between the learned distribution and the true distribution under different evaluation metrics.
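The "discriminative" property can be stated via the integral probability metric induced by the discriminator set $\mathcal{F}$ (our notation):

\[
d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}} \big|\, \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{x \sim \nu}[f(x)] \,\big|,
\]

and the density condition above guarantees that $d_{\mathcal{F}}(\mu, \nu) = 0$ implies $\mu = \nu$.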
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
Semantically Decomposing the Latent Spaces of Generative Adversarial Networks
A new algorithm for training generative adversarial networks jointly learns latent codes for both identities and observations; it can generate diverse images of the same subject and traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose.
Towards Principled Methods for Training Generative Adversarial Networks
The goal of this paper is to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks; targeted experiments substantiate the theoretical analysis, verify assumptions, illustrate claims, and quantify the phenomena.
Generative networks as inverse problems with Scattering transforms
Deep convolutional network generators are computed by inverting a fixed embedding operator; it is demonstrated that they have properties similar to those of GANs or VAEs, without learning a discriminative network or an encoder.
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
It is shown that any f-divergence can be used for training generative neural samplers, and the benefits of various choices of divergence function for training complexity and the quality of the obtained generative models are discussed.
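Concretely, the f-GAN objective couples the variational f-divergence bound displayed earlier with parametric families: the generator $Q_\theta$ and variational function $T_\omega$ play

\[
\min_{\theta} \max_{\omega} \; \mathbb{E}_{x \sim P}\big[T_\omega(x)\big] - \mathbb{E}_{x \sim Q_\theta}\big[f^{*}\big(T_\omega(x)\big)\big];
\]

for instance, the KL divergence corresponds to $f(u) = u \log u$, whose convex conjugate is $f^{*}(t) = e^{\,t-1}$.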