Corpus ID: 246867288

GAN Estimation of Lipschitz Optimal Transport Maps

@article{GonzalezSanz2022GANEO,
  title={GAN Estimation of Lipschitz Optimal Transport Maps},
  author={Alberto González-Sanz and Lucas de Lara and Louis Béthune and Jean-Michel Loubes},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.07965}
}
This paper introduces the first statistically consistent estimator of the optimal transport map between two probability distributions, based on neural networks. Building on theoretical and practical advances in the field of Lipschitz neural networks, we define a Lipschitz-constrained generative adversarial network penalized by the quadratic transportation cost. Then, we demonstrate that, under regularity assumptions, the obtained generator converges uniformly to the optimal transport map as the sample size tends to infinity.
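To make the objective concrete, here is a minimal PyTorch sketch of a Lipschitz-constrained WGAN-style loss penalized by the quadratic transportation cost, in the spirit of the abstract. Everything below is an illustrative assumption rather than the authors' code: spectral normalization stands in for the Lipschitz constraint (the paper itself builds on GroupSort-style Lipschitz networks), and the penalty weight `lam`, widths, and learning rates are arbitrary.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

def lipschitz_mlp(d_in, d_out, width=128):
    # Spectral normalization caps each layer's Lipschitz constant at 1; with
    # 1-Lipschitz activations (ReLU) the whole network is 1-Lipschitz.
    return nn.Sequential(
        spectral_norm(nn.Linear(d_in, width)), nn.ReLU(),
        spectral_norm(nn.Linear(width, width)), nn.ReLU(),
        spectral_norm(nn.Linear(width, d_out)),
    )

d, lam = 2, 1.0                          # dimension and penalty weight (illustrative)
G = lipschitz_mlp(d, d)                  # Lipschitz generator, the candidate map
D = lipschitz_mlp(d, 1)                  # 1-Lipschitz critic (WGAN-style)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)

for step in range(1000):
    x = torch.randn(256, d)              # samples from the source distribution P
    y = torch.randn(256, d) + 3.0        # samples from the target distribution Q
    # Critic step: estimate the discrepancy between G#P and Q.
    d_obj = D(y).mean() - D(G(x).detach()).mean()
    opt_D.zero_grad(); (-d_obj).backward(); opt_D.step()
    # Generator step: adversarial term plus the quadratic transportation cost
    # E||G(X) - X||^2, which singles out the least-cost map among all maps
    # pushing P onto Q.
    g_loss = -D(G(x)).mean() + lam * ((G(x) - x) ** 2).sum(dim=1).mean()
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```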

Kernel Neural Optimal Transport

Kernel weak quadratic costs are introduced into the Neural Optimal Transport algorithm, which uses the general optimal transport formulation and learns stochastic transport plans; they provide improved theoretical guarantees and practical performance.

Nonparametric Multiple-Output Center-Outward Quantile Regression

Based on the novel concept of multivariate center-outward quantiles introduced recently in Chernozhukov et al. (2017) and Hallin et al. (2021), we consider the problem of nonparametric multiple-output quantile regression.

References

Showing 1-10 of 40 references.

Adversarial Computation of Optimal Transport Maps

This work proposes a generative adversarial model in which the discriminator's objective is the $2$-Wasserstein metric, and shows that during training, the generator follows the $W_2$-geodesic between the initial and the target distributions and reproduces an optimal map at the end of training.

Optimal transport mapping via input convex neural networks

This approach ensures that the transport mapping the authors find is optimal regardless of how they initialize the neural networks, since the gradient of a convex function naturally models a discontinuous transport mapping.
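As a sketch of this construction, the snippet below implements a small input convex neural network (ICNN) and reads the transport map off as the gradient of the learned convex potential, per Brenier's theorem; the layer sizes, Softplus activation, and weight-clamping scheme are illustrative assumptions, not the paper's exact architecture or training procedure.

```python
import torch
import torch.nn as nn

class ICNN(nn.Module):
    """Input convex neural network: convex in x because the z-path weights are
    non-negative and the activation is convex and non-decreasing."""
    def __init__(self, d, width=64, depth=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(d, width) for _ in range(depth)])
        self.Wz = nn.ModuleList([nn.Linear(width, width, bias=False)
                                 for _ in range(depth - 1)])
        self.out = nn.Linear(width, 1, bias=False)
        self.act = nn.Softplus()  # convex and non-decreasing

    def clamp_weights(self):
        # Call after each optimizer step to keep the z-path weights non-negative.
        for W in list(self.Wz) + [self.out]:
            W.weight.data.clamp_(min=0.0)

    def forward(self, x):
        z = self.act(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            z = self.act(Wx(x) + Wz(z))
        return self.out(z)

def transport_map(f, x):
    # By Brenier's theorem, the optimal map for quadratic cost is the gradient
    # of a convex potential; here that potential is the ICNN f.
    x = x.requires_grad_(True)
    (grad,) = torch.autograd.grad(f(x).sum(), x, create_graph=True)
    return grad

f = ICNN(d=2)
print(transport_map(f, torch.randn(5, 2)).shape)  # torch.Size([5, 2])
```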

Sorting out Lipschitz function approximation

This work identifies a necessary property for such an architecture: each layer must preserve the gradient norm during backpropagation. It proposes to combine a gradient-norm-preserving activation function, GroupSort, with norm-constrained weight matrices, yielding universal Lipschitz function approximators.
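For concreteness, a minimal sketch of the GroupSort activation in PyTorch (the function name and default group size are illustrative); sorting within groups is a permutation of the inputs, which is why the gradient norm is preserved, and group size 2 recovers the MaxMin activation.

```python
import torch

def group_sort(x: torch.Tensor, group_size: int = 2) -> torch.Tensor:
    # Split the feature dimension into groups and sort within each group.
    # Sorting only permutes its inputs, so the activation preserves the
    # gradient norm during backpropagation; group_size=2 recovers MaxMin.
    b, n = x.shape
    assert n % group_size == 0, "feature dim must be divisible by group_size"
    return (x.view(b, n // group_size, group_size)
             .sort(dim=-1).values
             .view(b, n))

x = torch.tensor([[3.0, 1.0, 4.0, 1.0]])
print(group_sort(x))  # tensor([[1., 3., 1., 4.]])
```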

Large Scale Optimal Transport and Mapping Estimation

This paper proposes a stochastic dual approach to regularized OT, shows empirically that it scales better than a recent related approach when the number of samples is very large, and estimates a Monge map as a deep neural network learned by approximating the barycentric projection of the previously obtained OT plan.
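A small sketch of the barycentric projection that such a network regresses on, assuming the POT library and an entropically regularized plan; the sample sizes and the regularization strength `reg` are illustrative choices.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (https://pythonot.github.io)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # source samples
Y = rng.normal(size=(200, 2)) + 3.0      # target samples

a = np.full(200, 1 / 200)                # uniform sample weights
b = np.full(200, 1 / 200)
M = ot.dist(X, Y)                        # squared Euclidean cost matrix
M = M / M.max()                          # normalize costs for numerical stability
P = ot.sinkhorn(a, b, M, reg=0.05)       # entropically regularized OT plan

# Barycentric projection: T(x_i) = sum_j P_ij y_j / sum_j P_ij, the conditional
# mean of the plan; a network x -> T(x) can then be fit to the pairs (x_i, T_i).
T = (P @ Y) / P.sum(axis=1, keepdims=True)
print(T.shape)                           # (200, 2)
```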

Plugin Estimation of Smooth Optimal Transport Maps

A central limit theorem is derived for a density plugin estimator of the squared Wasserstein distance, which is centered at its population counterpart when the underlying distributions have sufficiently smooth densities.

Approximating Lipschitz continuous functions with GroupSort neural networks

It is proved that the recently introduced GroupSort neural networks, with constraints on the weights, are well suited to approximating Lipschitz continuous functions, and upper bounds on both their depth and size are exhibited.

Entropic estimation of optimal transport maps

A computationally tractable method is developed for estimating the optimal map between two distributions over $\mathbb{R}^d$, with rigorous Monte Carlo guarantees and statistical performance comparable to other estimators in the literature, but at much lower computational cost.

The Many Faces of 1-Lipschitz Neural Networks

It is demonstrated that, despite being empirically harder to train, 1-Lipschitz neural networks are theoretically better grounded than unconstrained ones when it comes to classification.
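One concrete payoff of this grounding, sketched below: for a classifier that is 1-Lipschitz with respect to the L2 norm, the logit margin yields a certified robustness radius via the standard margin/sqrt(2) argument. The function name is illustrative, and the snippet assumes the 1-Lipschitz property holds.

```python
import torch

def certified_radius(logits: torch.Tensor) -> torch.Tensor:
    # If the network is 1-Lipschitz in the L2 norm, each pairwise logit
    # difference is at most sqrt(2)-Lipschitz, so a margin m between the top
    # two logits certifies the prediction against any perturbation of L2 norm
    # below m / sqrt(2).
    top2 = logits.topk(2, dim=-1).values
    margin = top2[:, 0] - top2[:, 1]
    return margin / (2 ** 0.5)

logits = torch.tensor([[2.0, 0.5, -1.0]])  # assumed output of a 1-Lipschitz net
print(certified_radius(logits))  # tensor([1.0607])
```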

Wasserstein-2 Generative Networks

This paper proposes a novel end-to-end algorithm for training generative models with a non-minimax objective that simplifies model training, based on approximating the Wasserstein-2 distance with Input Convex Neural Networks.

Large Scale Optimal Transport

An implicit generative learning-based framework called SPOT (Scalable Push-forward of Optimal Transport) is proposed, which approximates the optimal transport plan by the pushforward of a reference distribution and casts the optimal transport problem as a minimax problem.