# Subadditivity of Probability Divergences on Bayes-Nets with Applications to Time Series GANs

@article{Ding2020SubadditivityOP, title={Subadditivity of Probability Divergences on Bayes-Nets with Applications to Time Series GANs}, author={Mucong Ding and Constantinos Daskalakis and Soheil Feizi}, journal={ArXiv}, year={2020}, volume={abs/2003.00652} }

GANs for time series data often use sliding windows or self-attention to capture underlying time dependencies. While these techniques have no clear theoretical justification, they are successful at significantly reducing discriminator size, speeding up training, and improving generation quality. In this paper, we provide both theoretical foundations and a practical framework for GANs on high-dimensional distributions with conditional independence structure captured by a…
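As a hedged illustration of the sliding-window idea (the function name and shapes here are hypothetical, not from the paper), each "local" discriminator sees only a short window of the series, so its input dimension scales with the window width rather than the full sequence length:

```python
import numpy as np

def sliding_windows(series: np.ndarray, width: int, stride: int = 1) -> np.ndarray:
    """Split a (T, d) time series into overlapping (width, d) windows.

    Each window is what a local discriminator would see, so its input
    size is width * d rather than T * d.
    """
    T = series.shape[0]
    starts = range(0, T - width + 1, stride)
    return np.stack([series[s:s + width] for s in starts])

# A length-100 univariate series split into width-10 windows:
x = np.random.randn(100, 1)
w = sliding_windows(x, width=10, stride=5)
# w.shape == (19, 10, 1): 19 local views instead of one 100-dim input
```

In a GAN, each window would be scored by a small shared critic, which is one way to read the discriminator-size and training-speed gains mentioned above.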


## One Citation

Generative Ensemble-Regression: Learning Stochastic Dynamics from Discrete Particle Ensemble Observations

- Computer Science, Mathematics, ArXiv
- 2020

A new method is proposed for inferring the governing stochastic ordinary differential equations by observing particle ensembles at discrete and sparse time instants, i.e., multiple "snapshots," in analogy to classic point regression, where the dynamics are inferred by performing regression in Euclidean space.

## References

Showing 1–10 of 40 references

On the Discrimination-Generalization Tradeoff in GANs

- Computer Science, Mathematics, ICLR
- 2018

This paper shows that a discriminator set is guaranteed to be discriminative whenever its linear span is dense in the set of bounded continuous functions, and develops generalization bounds between the learned distribution and true distribution under different evaluation metrics.

Time-series Generative Adversarial Networks

- Computer Science, NeurIPS
- 2019

This paper proposes a novel framework for generating realistic time-series data that combines the flexibility of the unsupervised paradigm with the control afforded by supervised training; it consistently and significantly outperforms state-of-the-art benchmarks on measures of similarity and predictive ability.

Breaking the Curse of Dimensionality with Convex Neural Networks

- Computer Science, Mathematics, J. Mach. Learn. Res.
- 2017

This work considers neural networks with a single hidden layer and non-decreasing homogeneous activation functions, such as rectified linear units, and shows that they are adaptive to unknown underlying linear structures, such as dependence on the projection of the input variables onto a low-dimensional subspace.

Visualizing Data using t-SNE

- Mathematics
- 2008

We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. The technique is a variation of Stochastic…
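As a hedged usage sketch (assuming scikit-learn's `sklearn.manifold.TSNE` implementation, which is not part of the cited paper; the data and parameters are illustrative), embedding high-dimensional points into a 2-D map looks like:

```python
import numpy as np
from sklearn.manifold import TSNE

# 50 points in 10 dimensions, embedded into 2-D for plotting
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))

# perplexity must be smaller than the number of samples
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)
# emb has shape (50, 2): one 2-D location per datapoint
```

In GAN papers such as the one above, t-SNE plots of real versus generated samples are a common qualitative check of generation quality.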

A Study of Local Approximations in Information Theory

- Computer Science
- 2015

It is shown that large classes of statistical divergence measures, such as f-divergences and Bregman divergences, can be locally approximated by metrics of very similar form; these local metrics capture the cost of using a local approximation of the KL divergence in place of its global value.
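A hedged sketch of the local approximation being summarized (a standard result for f-divergences with f twice differentiable at 1, stated from general knowledge rather than taken from the cited paper): as P approaches Q,

```latex
D_f(P \,\|\, Q) \;=\; \sum_i q_i \, f\!\left(\frac{p_i}{q_i}\right)
\;\approx\; \frac{f''(1)}{2} \sum_i \frac{(p_i - q_i)^2}{q_i}
\;=\; \frac{f''(1)}{2}\, \chi^2(P \,\|\, Q),
```

so every smooth f-divergence shares, up to the constant factor f''(1), the same local chi-squared metric.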

Generative Adversarial Nets

- Computer Science, NIPS
- 2014

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…

Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs

- Mathematics, Computer Science, ArXiv
- 2017

This work proposes a Recurrent GAN (RGAN) and a Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data.

Generative Moment Matching Networks

- Computer Science, Mathematics, ICML
- 2015

This work proposes a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks, using MMD to learn to generate codes that can then be decoded to produce samples.
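A minimal sketch of the MMD statistic that generative moment matching networks minimize (the function name and kernel bandwidth are illustrative assumptions, not the paper's code):

```python
import numpy as np

def mmd2_rbf(X: np.ndarray, Y: np.ndarray, sigma: float = 1.0) -> float:
    """Biased estimator of squared MMD between samples X (n, d) and Y (m, d)
    under the RBF kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
```

A generator trained by backpropagating through this statistic needs no discriminator network, which is the core idea of GMMNs.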

On Divergences and Informations in Statistics and Information Theory

- Mathematics, Computer Science, IEEE Transactions on Information Theory
- 2006

The paper deals with the f-divergences of Csiszar generalizing the discrimination information of Kullback, the total variation distance, the Hellinger divergence, and the Pearson divergence. All…

Improved Training of Wasserstein GANs

- Computer Science, Mathematics, NIPS
- 2017

This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input. This performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
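A minimal sketch of the gradient-penalty term described above (assuming PyTorch; the helper name and the uniform sampling of interpolates are illustrative, following the WGAN-GP recipe):

```python
import torch

def gradient_penalty(critic, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """WGAN-GP penalty: mean of (||grad_xhat critic(xhat)||_2 - 1)^2 over
    random interpolates xhat = eps * real + (1 - eps) * fake."""
    eps = torch.rand(real.size(0), 1)                      # one mix per sample
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    out = critic(interp)
    grads, = torch.autograd.grad(out.sum(), interp, create_graph=True)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```

The penalty is added to the critic loss with a weight (lambda = 10 in the cited paper), replacing the weight clipping of the original WGAN.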