• Corpus ID: 88514428

GAN and VAE from an Optimal Transport Point of View

@article{Genevay2017GANAV,
  title={GAN and VAE from an Optimal Transport Point of View},
  author={Aude Genevay and Gabriel Peyré and Marco Cuturi},
  journal={arXiv: Machine Learning},
  year={2017}
}
This short article revisits some of the ideas introduced in arXiv:1701.07875 and arXiv:1705.07642 in a simple setup. This sheds some light on the connections between Variational Autoencoders (VAE), Generative Adversarial Networks (GAN) and Minimum Kantorovitch Estimators (MKE).
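Here, a Minimum Kantorovitch Estimator fits the parameters θ of a generative model by minimizing an optimal transport cost to the data distribution. As a sketch of the setup the abstract refers to (standard OT notation, not a verbatim excerpt from the paper):

  \min_\theta W_c(\mu_\theta, \nu),
  \qquad
  W_c(\mu, \nu) = \min_{\pi \in \Pi(\mu, \nu)} \int c(x, y) \, \mathrm{d}\pi(x, y),

where \mu_\theta = (g_\theta)_\# \zeta is the push-forward of a fixed latent distribution \zeta by the generator g_\theta, and \Pi(\mu, \nu) is the set of couplings with marginals \mu and \nu. The Kantorovich dual of this problem underlies WGAN-style objectives (arXiv:1701.07875), while its entropic regularization yields the Sinkhorn losses of arXiv:1705.07642.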

Citations

Improving GANs Using Optimal Transport
TLDR
Optimal Transport GAN (OT-GAN) is presented: a variant of generative adversarial nets that minimizes a new metric measuring the distance between the generator distribution and the data distribution, resulting in a highly discriminative distance function with unbiased mini-batch gradients.
k-GANs: Ensemble of Generative Models with Semi-Discrete Optimal Transport
TLDR
A principled method for training an ensemble of GANs using semi-discrete optimal transport theory is proposed, and the resulting k-GANs algorithm has a strong theoretical connection with the k-medoids algorithm.
Kantorovich Strikes Back! Wasserstein GANs are not Optimal Transport?
TLDR
A generic methodology based on transport rays (§3.1) is developed to evaluate dual OT solvers for the Wasserstein-1 distance (W1); 1-Lipschitz functions are constructed and used to build ray-monotone transport plans.
Approximation and convergence of GANs training: an SDE approach
TLDR
This paper establishes approximations, with precise error-bound analysis, for the training of GANs under stochastic gradient algorithms (SGAs), in the form of coupled stochastic differential equations (SDEs).
Convergence of Non-Convex Non-Concave GANs Using Sinkhorn Divergence
TLDR
This work proposes a first-order sequential stochastic gradient descent ascent (SeqSGDA) algorithm and supplies a non-asymptotic analysis of the algorithm's convergence rate.
A Gradual, Semi-Discrete Approach to Generative Network Training via Explicit Wasserstein Minimization
This paper provides a simple procedure to fit generative networks to target distributions, with the goal of a small Wasserstein distance (or other optimal transport costs).
Wasserstein Iterative Networks for Barycenter Estimation
TLDR
This paper presents an algorithm to approximate the Wasserstein-2 barycenters of continuous measures via a generative model, and constructs the "Ave, celeba!" dataset, which can be used for quantitative evaluation of barycenter algorithms using standard metrics of generative models such as FID.
Generative Models for Physicists
TLDR
The concept and principles of generative modeling are introduced, together with applications of modern generative models (autoregressive models, normalizing flows, variational autoencoders, etc.) as well as older ones (Boltzmann machines) to physics problems.
Sliced-Wasserstein Flows: Nonparametric Generative Modeling via Optimal Transport and Diffusions
TLDR
This study proposes a novel parameter-free algorithm for learning the underlying distributions of complicated datasets and sampling from them, based on a functional optimization problem that seeks a measure as close to the data distribution as possible while remaining expressive enough for generative modeling.
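As a rough illustration of the sliced construction (a generic NumPy sketch; the function name, equal sample sizes, and projection count are our assumptions, not the authors' implementation), the sliced Wasserstein distance averages one-dimensional OT costs over random projection directions, and in one dimension optimal transport reduces to sorting:

import numpy as np

def sliced_wasserstein(x, y, n_projections=50, rng=None):
    # Monte Carlo estimate of the sliced 2-Wasserstein distance between
    # two equal-size sample sets x and y of shape (n, d).
    rng = rng or np.random.default_rng()
    n, d = x.shape
    total = 0.0
    for _ in range(n_projections):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)    # random unit direction
        px, py = x @ theta, y @ theta     # project both sample sets to 1-D
        # In one dimension the optimal coupling matches sorted samples.
        total += np.mean((np.sort(px) - np.sort(py)) ** 2)
    return np.sqrt(total / n_projections)

Each projection costs only a sort, which is what makes sliced distances attractive for the nonparametric flows studied in the paper.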
A Chain Graph Interpretation of Real-World Neural Networks
TLDR
This paper proposes an alternative interpretation of neural networks as chain graphs (CGs) and feed-forward as an approximate inference procedure, and demonstrates with concrete examples that the CG interpretation can provide novel theoretical support and insights for various NN techniques, as well as derive new deep learning approaches such as the concept of partially collapsed feed-forward inference.

References

Showing 1-10 of 13 references.
From optimal transport to generative modeling: the VEGAN cookbook
TLDR
It is shown that the penalized optimal transport (POT) objective for the 2-Wasserstein distance coincides with the objective heuristically employed in adversarial auto-encoders (AAE) (Makhzani et al., 2016), which provides the first theoretical justification for AAEs known to the authors.
Sinkhorn-AutoDiff: Tractable Wasserstein Learning of Generative Models
TLDR
This paper presents the first tractable computational method to train large-scale generative models using an optimal transport loss, and relies on two key ideas: entropic smoothing, which turns the original OT loss into one that can be computed using Sinkhorn fixed-point iterations; and algorithmic (automatic) differentiation of these iterations, which results in a robust and differentiable approximation of the OT loss.
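A minimal PyTorch sketch of these two ideas (the log-domain updates and hyperparameters are illustrative, not the paper's exact algorithm): every step below is a differentiable tensor operation, so automatic differentiation carries gradients of the OT loss back to the input samples.

import math
import torch

def sinkhorn_loss(x, y, epsilon=0.1, n_iters=100):
    # Entropy-regularized OT cost between two equal-size point clouds
    # x and y of shape (n, d), each carrying uniform weights 1/n.
    n = x.shape[0]
    C = torch.cdist(x, y, p=2) ** 2            # squared Euclidean ground cost
    log_w = torch.full((n,), -math.log(n))     # log of the uniform weights
    f = torch.zeros(n)                         # dual potentials
    g = torch.zeros(n)
    for _ in range(n_iters):                   # Sinkhorn fixed-point updates
        f = -epsilon * torch.logsumexp((g[None, :] - C) / epsilon + log_w[None, :], dim=1)
        g = -epsilon * torch.logsumexp((f[:, None] - C) / epsilon + log_w[:, None], dim=0)
    # Recover the transport plan and return the primal cost <P, C>.
    P = torch.exp((f[:, None] + g[None, :] - C) / epsilon + log_w[:, None] + log_w[None, :])
    return torch.sum(P * C)

Minimizing sinkhorn_loss(generator(z), data_batch) over the generator's parameters is then an entropically smoothed version of the Minimum Kantorovitch Estimator from the main paper.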
Auto-Encoding Variational Bayes
TLDR
A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
Wasserstein Training of Restricted Boltzmann Machines
TLDR
This work proposes a novel approach to Boltzmann machine training which assumes that a meaningful metric between observations is known, and derives a gradient of the resulting Wasserstein distance with respect to the model parameters, rather than of the usual Kullback-Leibler divergence.
On minimum Kantorovich distance estimators
Inference in generative models using the Wasserstein distance
TLDR
This work uses Wasserstein distances between empirical distributions of observed data and empirical distribution of synthetic data drawn from such models to estimate their parameters, and proposes an alternative distance using the Hilbert space-filling curve.
Optimal Transport for Applied Mathematicians: Calculus of Variations, PDEs, and Modeling
Contents: Primal and Dual Problems; One-Dimensional Issues; L^1 and L^infinity Theory; Minimal Flows; Wasserstein Spaces; Numerical Methods; Functionals over Probabilities; Gradient Flows.
On parameter estimation with the Wasserstein distance
TLDR
These results cover the misspecified setting, in which the data-generating process is not assumed to be part of the family of distributions described by the model, and some difficulties arising in the numerical approximation of these estimators are discussed.
Scaling Algorithms for Unbalanced Transport Problems
TLDR
This article introduces a new class of fast algorithms to approximate variational problems involving unbalanced optimal transport, and shows how these methods can be used to solve unbalanced transport problems, compute unbalanced gradient flows, and compute unbalanced barycenters.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.