# On the capacity of deep generative networks for approximating distributions

@article{Yang2022OnTC,
  title={On the capacity of deep generative networks for approximating distributions},
  author={Yunfei Yang and Zhen Li and Yang Wang},
  journal={Neural Networks},
  year={2022},
  volume={145},
  pages={144--154}
}
• Published 29 January 2021
• Computer Science
• Neural networks : the official journal of the International Neural Network Society

## Citations

Approximation for Probability Distributions by Wasserstein GAN
• Computer Science
ArXiv
• 2021
This paper studies Wasserstein Generative Adversarial Networks using GroupSort neural networks as discriminators to develop a generalization error bound that is free from the curse of dimensionality with respect to the number of training samples.
An error analysis of generative adversarial networks for learning distributions
• Computer Science
ArXiv
• 2021
The main results establish the convergence rates of GANs under a collection of integral probability metrics defined through Hölder classes, including the Wasserstein distance as a special case.
Supplementary Material for “Non-Asymptotic Error Bounds for Bidirectional GANs”
We use σ to denote the ReLU activation function in neural networks, which is σ(x) = max{x, 0}. Unless otherwise indicated, ‖·‖ represents the L2 norm. For any function g, let ‖g‖∞ = sup_x ‖g(x)‖. We
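The ReLU and sup-norm notation quoted above can be checked numerically. A minimal sketch, assuming nothing beyond the definitions in the snippet; the helper names are illustrative, and the sup norm is only approximated by maximizing over a finite grid:

```python
import numpy as np

def relu(x):
    # ReLU activation: sigma(x) = max{x, 0}, applied elementwise
    return np.maximum(x, 0.0)

def sup_norm_on_grid(g, grid):
    # Approximates ||g||_inf = sup_x |g(x)| by maximizing over a finite grid
    return np.max(np.abs(g(grid)))

# Evaluation grid on [-2, 2]; linspace includes both endpoints
grid = np.linspace(-2.0, 2.0, 401)
print(relu(np.array([-1.0, 0.5])))   # [0.  0.5]
print(sup_norm_on_grid(relu, grid))  # 2.0, attained at x = 2
```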
Deep Generative Survival Analysis: Nonparametric Estimation of Conditional Survival Function
• Mathematics
• 2022
We propose a deep generative approach to nonparametric estimation of conditional survival and hazard functions with right-censored data. The key idea of the proposed method is to first learn a
Approximation bounds for norm constrained neural networks with applications to regression and GANs
• Computer Science
ArXiv
• 2022
Upper and lower bounds on the approximation error of these networks for smooth function classes are proved, and it is shown that GANs can achieve the optimal rate of learning probability distributions when the discriminator is a properly chosen norm-constrained neural network.
Wasserstein Generative Learning of Conditional Distribution
• Mathematics
ArXiv
• 2021
This work establishes a non-asymptotic error bound for the conditional sampling distribution generated by the proposed method and shows that it is able to mitigate the curse of dimensionality, assuming that the data distribution is supported on a lower-dimensional set.
Non-Asymptotic Error Bounds for Bidirectional GANs
• Computer Science
NeurIPS
• 2021
We derive nearly sharp bounds for the bidirectional GAN (BiGAN) estimation error under the Dudley distance between the latent joint distribution and the data joint distribution with appropriately

## References

Showing 1–10 of 64 references
Constructive Universal High-Dimensional Distribution Generation through Deep ReLU Networks
• Computer Science
ICML
• 2020
We present an explicit deep neural network construction that transforms uniformly distributed one-dimensional noise into an arbitrarily close approximation of any two-dimensional Lipschitz-continuous
On the Ability of Neural Nets to Express Distributions
• Computer Science
COLT
• 2017
This work takes a first cut at explaining the expressivity of multilayer nets by giving a sufficient criterion for a function to be approximable by a neural network with n hidden layers.
Optimal approximation of continuous functions by very deep ReLU networks
It is proved that constant-width fully-connected networks of depth $L\sim W$ provide the fastest possible approximation rate $\|f-\widetilde f\|_\infty = O(\omega_f(O(W^{-2/\nu})))$ that cannot be achieved with less deep networks.
Size-Noise Tradeoffs in Generative Networks
• Computer Science
NeurIPS
• 2018
A construction that allows ReLU networks to increase the dimensionality of their noise distribution by implementing a "space-filling" function based on iterated tent maps is demonstrated, and it is indicated how high-dimensional distributions can be efficiently transformed into low-dimensional distributions.
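The "space-filling" tent-map idea summarized above can be sketched numerically. This is an illustrative toy, not the cited construction itself; the function names are invented here:

```python
import numpy as np

def tent(x):
    # Tent map on [0, 1]: T(x) = 2x for x <= 1/2, 2(1 - x) for x > 1/2.
    # Expressible by a tiny ReLU network: T(x) = 2x - 4*relu(x - 1/2).
    return np.where(x <= 0.5, 2.0 * x, 2.0 * (1.0 - x))

def iterated_tent(x, k):
    # k-fold composition T^k; its graph has 2^k monotone pieces on [0, 1]
    for _ in range(k):
        x = tent(x)
    return x

# Lift 1-D uniform noise z to 2-D points (z, T^k(z)); as k grows, the
# resulting curve winds back and forth and fills the unit square densely.
rng = np.random.default_rng(0)
z = rng.uniform(0.0, 1.0, size=10_000)
pts = np.stack([z, iterated_tent(z, 8)], axis=1)
print(pts.shape)  # (10000, 2)
```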
A Universal Approximation Theorem of Deep Neural Networks for Expressing Probability Distributions
• Computer Science
NeurIPS
• 2020
Upper bounds for the size (width and depth) of the deep neural network in terms of the dimension $d$ and the approximation error $\varepsilon$ with respect to the three discrepancies are proved.
Auto-Encoding Variational Bayes
• Computer Science
ICLR
• 2014
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
Generative Adversarial Nets
• Computer Science
NIPS
• 2014
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
• Computer Science
NIPS
• 2016
It is shown that any f-divergence can be used for training generative neural samplers and the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models are discussed.
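As a concrete instance of the f-divergence family this summary refers to, the KL divergence corresponds to f(t) = t log t. A minimal sketch over discrete distributions; the helper name is invented here:

```python
import numpy as np

def f_divergence(p, q, f):
    # D_f(P || Q) = sum_x q(x) * f(p(x) / q(x)) for discrete distributions
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

# KL divergence is the f-divergence generated by f(t) = t * log(t)
kl_f = lambda t: t * np.log(t)
p = [0.5, 0.5]
q = [0.9, 0.1]
print(f_divergence(p, q, kl_f))  # ≈ 0.5108, equals KL(P || Q) in nats
```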
Nonlinear Approximation and (Deep) ReLU Networks
• Computer Science
ArXiv
• 2019
The main results of this article prove that neural networks possess even greater approximation power than these traditional methods of nonlinear approximation, exhibiting large classes of functions which can be efficiently captured by neural networks where classical nonlinear methods fall short.