# How Well Generative Adversarial Networks Learn Distributions

Tengyuan Liang. *How Well Generative Adversarial Networks Learn Distributions*. Journal of Machine Learning Research, 22(228):1–41, 2021.

This paper studies rates of convergence for learning distributions implicitly with the adversarial framework and Generative Adversarial Networks (GANs), which subsumes Wasserstein GAN, Sobolev GAN, MMD GAN, and the Generalized/Simulated Method of Moments (GMM/SMM) as special cases. We study a wide range of parametric and nonparametric target distributions under a host of objective evaluation metrics, and investigate how to obtain a good statistical guarantee for GANs through the lens of regularization…
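For context, the GAN variants named above are unified by the integral probability metric (IPM) framework; the standard definition (not quoted from the paper) is:

```latex
d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}} \Big|\, \mathbb{E}_{X \sim \mu}[f(X)] - \mathbb{E}_{Y \sim \nu}[f(Y)] \,\Big|
```

Taking $\mathcal{F}$ to be the class of 1-Lipschitz functions recovers the Wasserstein-1 distance, the unit ball of an RKHS recovers MMD, and a Sobolev ball recovers the Sobolev IPM.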

## 26 Citations

An error analysis of generative adversarial networks for learning distributions

- Computer Science
- ArXiv
- 2021

The main results establish the convergence rates of GANs under a collection of integral probability metrics defined through Hölder classes, including the Wasserstein distance as a special case.

Understanding Estimation and Generalization Error of Generative Adversarial Networks

- Computer Science
- IEEE Transactions on Information Theory
- 2021

An upper bound as well as a minimax lower bound on the estimation error for training GANs are developed, which justify the generalization ability of GAN training via SGM after multiple passes over the data and reflect the interplay between the discriminator and the generator.

Wasserstein Generative Adversarial Uncertainty Quantification in Physics-Informed Neural Networks

- Computer Science
- Journal of Computational Physics
- 2022

Approximation for Probability Distributions by Wasserstein GAN

- Computer Science
- ArXiv
- 2021

This paper studies Wasserstein Generative Adversarial Networks using GroupSort neural networks as discriminators and develops a generalization error bound that is free from the curse of dimensionality with respect to the number of training samples.

A likelihood approach to nonparametric estimation of a singular distribution using deep generative models

- Computer Science, Mathematics
- ArXiv
- 2021

This work proves that a novel and effective solution exists: perturbing the data with instance noise leads to consistent estimation of the underlying distribution with desirable convergence rates, and it characterizes the class of distributions that can be efficiently estimated via deep generative models.

On the Convergence of Gradient Descent in GANs: MMD GAN As a Gradient Flow

- Computer Science, Mathematics
- AISTATS
- 2021

A parametric kernelized gradient flow that mimics the min-max game in gradient regularized MMD GAN and provides a descent direction minimizing the $\mathrm{MMD}$ on a statistical manifold of probability distributions is proposed.

Realizing GANs via a Tunable Loss Function

- Computer Science
- 2021 IEEE Information Theory Workshop (ITW)
- 2021

We introduce a tunable GAN, called $\alpha$-GAN, parameterized by $\alpha \in (0, \infty]$, which interpolates between various f-GANs and Integral Probability Metric based GANs (under constrained…

Deep Dimension Reduction for Supervised Representation Learning

- Computer Science
- ArXiv
- 2020

This work proposes a deep dimension reduction (DDR) approach to achieve a good data representation with these characteristics for supervised learning, formulating the ideal representation learning task as finding a nonlinear dimension reduction map that minimizes the sum of losses characterizing conditional independence and disentanglement.

Rates of convergence for nonparametric estimation of singular distributions using generative adversarial networks

- Computer Science
- 2022

The convergence rate of a GAN type estimator with respect to the Wasserstein metric is found to be faster than that obtained by likelihood approaches, which provides insights into why GAN approaches perform better in many real problems.

Rates of convergence for density estimation with generative adversarial networks

- Computer Science
- 2021

It is proved that the resulting estimate converges to the true density $p$ in terms of Jensen–Shannon (JS) divergence at the rate $(\log n/n)^{2\beta/(2\beta+d)}$, where $n$ is the sample size and $\beta$ determines the smoothness of $p$.

## References

Showing 1–10 of 56 references.

How Well Can Generative Adversarial Networks (GAN) Learn Densities: A Nonparametric View

- Computer Science
- ArXiv
- 2017

An improved GAN estimator is introduced that achieves a faster rate, through leveraging the level of smoothness in the target density and the evaluation metric, which in theory remedies the mode collapse problem reported in the literature.

Approximability of Discriminators Implies Diversity in GANs

- Computer Science
- ICLR
- 2019

It is shown in this paper that GANs can in principle learn distributions in Wasserstein distance with polynomial sample complexity, if the discriminator class has strong distinguishing power against the particular generator class (instead of against all possible generators).

Generalization and equilibrium in generative adversarial nets (GANs) (invited talk)

- Computer Science
- ICML
- 2017

Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good…

Training generative neural networks via Maximum Mean Discrepancy optimization

- Computer Science, Mathematics
- UAI
- 2015

This work considers training a deep neural network to generate samples from an unknown distribution given i.i.d. data, framing learning as an optimization that minimizes a two-sample test statistic, and proves bounds on the generalization error incurred by optimizing the empirical MMD.
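The two-sample statistic in question is the empirical maximum mean discrepancy. A minimal sketch of the unbiased squared-MMD estimator with a Gaussian kernel (a standard construction, not code from the paper; the bandwidth value is an illustrative choice):

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * bandwidth ** 2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of squared MMD between samples X (n x d) and Y (m x d)."""
    n, m = len(X), len(Y)
    # Within-sample kernel averages exclude the diagonal terms for unbiasedness.
    kxx = sum(gaussian_kernel(X[i], X[j], bandwidth)
              for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
    kyy = sum(gaussian_kernel(Y[i], Y[j], bandwidth)
              for i in range(m) for j in range(m) if i != j) / (m * (m - 1))
    # Cross term keeps all n*m pairs.
    kxy = sum(gaussian_kernel(X[i], Y[j], bandwidth)
              for i in range(n) for j in range(m)) / (n * m)
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
# Two samples from the same distribution vs. a mean-shifted one.
same = mmd2_unbiased(rng.normal(size=(100, 2)), rng.normal(size=(100, 2)))
shifted = mmd2_unbiased(rng.normal(size=(100, 2)), rng.normal(3.0, 1.0, size=(100, 2)))
print(same, shifted)
```

Training a generator against this statistic means minimizing `mmd2_unbiased(data, generator_samples)` in the generator's parameters, with the kernel playing the role of a fixed (non-adversarial) discriminator.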

The Inductive Bias of Restricted f-GANs

- Computer Science
- ArXiv
- 2018

This work provides a theoretical characterization of the distribution inferred by a simple form of generative adversarial learning called restricted f-GANs, in which the discriminator is a function in a given function class, the distribution induced by the generator is restricted to lie in a pre-specified distribution class, and the objective resembles a variational form of the f-divergence.
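The variational form of the f-divergence referred to here is, in standard notation (a well-known lower bound, not quoted from the paper):

```latex
D_f(P \,\|\, Q) \;\ge\; \sup_{T \in \mathcal{T}} \Big( \mathbb{E}_{x \sim P}[T(x)] \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}(T(x))\big] \Big)
```

where $f^{*}$ is the convex conjugate of $f$; restricting the variational function $T$ to a discriminator class $\mathcal{T}$ yields the restricted f-GAN objective, with equality when $\mathcal{T}$ is rich enough.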

Nonparametric Density Estimation under Adversarial Losses

- Computer Science
- NeurIPS
- 2018

This work studies minimax convergence rates of nonparametric density estimation under a large class of loss functions called "adversarial losses", which includes maximum mean discrepancy, Wasserstein distance, and total variation distance.

Approximation and Convergence Properties of Generative Adversarial Learning

- Computer Science
- NIPS
- 2017

It is shown that if the objective function is an adversarial divergence with some additional conditions, then using a restricted discriminator family has a moment-matching effect, thus generalizing previous results.

Generative Adversarial Nets

- Computer Science
- NIPS
- 2014

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…

Using Wasserstein Generative Adversarial Networks for the Design of Monte Carlo Simulations

- Economics
- Journal of Econometrics
- 2019

No single estimator outperforms the others in all three settings, so researchers should tailor their analytic approach to the setting at hand; systematic simulation studies can help in selecting among competing methods.

Generative Moment Matching Networks

- Computer Science
- ICML
- 2015

This work formulates a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks, using MMD to learn to generate codes that can then be decoded to produce samples.