# GAN Estimation of Lipschitz Optimal Transport Maps

@article{GonzalezSanz2022GANEO, title={GAN Estimation of Lipschitz Optimal Transport Maps}, author={Alberto González-Sanz and Lucas de Lara and Louis Béthune and Jean-Michel Loubes}, journal={ArXiv}, year={2022}, volume={abs/2202.07965} }

This paper introduces the first statistically consistent estimator of the optimal transport map between two probability distributions, based on neural networks. Building on theoretical and practical advances in the field of Lipschitz neural networks, we define a Lipschitz-constrained generative adversarial network penalized by the quadratic transportation cost. Then, we demonstrate that, under regularity assumptions, the obtained generator converges uniformly to the optimal transport map as the…
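To make the abstract's objective concrete, here is a minimal pure-Python sketch of an adversarial discrepancy penalized by the quadratic transportation cost. The helper names `gen`, `disc`, and the penalty weight `lam` are illustrative assumptions, not the paper's notation, and the real method additionally constrains both networks to be Lipschitz.

```python
def penalized_objective(gen, disc, xs, ys, lam):
    """Schematic version of the paper's training objective:
    a Wasserstein-style adversarial discrepancy between gen(X) and Y,
    plus the quadratic transport cost E||gen(X) - X||^2 weighted by lam.
    `xs`, `ys` are lists of points (lists of floats); `gen` and `disc`
    stand in for the (Lipschitz-constrained) generator and critic."""
    # Adversarial term: mean critic value on generated vs. target samples.
    adv = (sum(disc(gen(x)) for x in xs) / len(xs)
           - sum(disc(y) for y in ys) / len(ys))
    # Quadratic transportation cost: how far the generator moves each point.
    cost = sum(sum((gx - xi) ** 2 for gx, xi in zip(gen(x), x))
               for x in xs) / len(xs)
    return adv + lam * cost
```

In the paper's setting, minimizing this over a Lipschitz generator while maximizing over a Lipschitz critic pushes the generator toward the (cost-minimizing) optimal transport map rather than an arbitrary pushforward.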

## 2 Citations

### Kernel Neural Optimal Transport

- Computer Science, Economics, ArXiv
- 2022

Kernel weak quadratic costs are introduced into the Neural Optimal Transport algorithm, which uses the general optimal transport formulation and learns stochastic transport plans, providing improved theoretical guarantees and practical performance.

### Nonparametric Multiple-Output Center-Outward Quantile Regression

- Mathematics
- 2022

Based on the novel concept of multivariate center-outward quantiles introduced recently in Chernozhukov et al. (2017) and Hallin et al. (2021), we consider the problem of nonparametric…

## References

Showing 1–10 of 40 references

### Adversarial Computation of Optimal Transport Maps

- Computer Science, ArXiv
- 2019

This work proposes a generative adversarial model in which the discriminator's objective is the $2$-Wasserstein metric, and shows that during training the generator follows the $W_2$-geodesic between the initial and target distributions, reproducing an optimal map at the end of training.

### Optimal transport mapping via input convex neural networks

- Computer Science, ICML
- 2020

This approach ensures that the transport mapping the authors find is optimal regardless of how they initialize the neural networks, since the gradient of a convex function naturally models a discontinuous transport mapping.

### Sorting out Lipschitz function approximation

- Computer Science, ICML
- 2019

This work identifies a necessary property for such an architecture: each of the layers must preserve the gradient norm during backpropagation. It proposes to combine a gradient-norm-preserving activation function, GroupSort, with norm-constrained weight matrices, yielding universal Lipschitz function approximators.
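The GroupSort activation described above is simple enough to sketch directly: it partitions the feature vector into contiguous groups and sorts within each group, so it merely permutes its inputs and therefore preserves the gradient norm (unlike ReLU, which zeroes coordinates). This is a minimal illustration on plain Python lists, not the authors' implementation.

```python
def group_sort(x, group_size=2):
    """GroupSort activation: split the feature vector into contiguous
    groups of `group_size` and sort each group in ascending order.
    With group_size=2 this is the 'MaxMin' variant. Because the output
    is a permutation of the input, the Jacobian is a permutation matrix
    and the gradient norm is preserved during backpropagation."""
    assert len(x) % group_size == 0, "feature dimension must divide evenly"
    out = []
    for i in range(0, len(x), group_size):
        out.extend(sorted(x[i:i + group_size]))
    return out
```

Stacking such activations with norm-constrained (e.g. orthogonal) weight matrices is what gives the architecture its universal Lipschitz approximation property.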

### Large Scale Optimal Transport and Mapping Estimation

- Computer Science, Mathematics, ICLR
- 2018

This paper proposes a stochastic dual approach to regularized OT and shows empirically that it scales better than a recent related approach when the number of samples is very large; it then estimates a Monge map as a deep neural network learned by approximating the barycentric projection of the previously obtained OT plan.

### Plugin Estimation of Smooth Optimal Transport Maps

- Mathematics, Computer Science
- 2021

A central limit theorem is derived for a density plugin estimator of the squared Wasserstein distance, which is centered at its population counterpart when the underlying distributions have sufficiently smooth densities.

### Approximating Lipschitz continuous functions with GroupSort neural networks

- Computer Science, AISTATS
- 2021

It is proved that the recently introduced GroupSort neural networks, with constraints on the weights, are well suited for approximating Lipschitz continuous functions, and upper bounds on both their depth and size are exhibited.

### Entropic estimation of optimal transport maps

- Computer Science, Mathematics
- 2021

A computationally tractable method is developed for estimating the optimal map between two distributions over $\mathbb{R}^d$ with rigorous Monte Carlo guarantees, attaining statistical performance comparable to other estimators in the literature at much lower computational cost.

### The Many Faces of 1-Lipschitz Neural Networks

- Computer Science, ArXiv
- 2021

It is demonstrated that, despite being empirically harder to train, 1-Lipschitz neural networks are theoretically better grounded than unconstrained ones when it comes to classification.

### Wasserstein-2 Generative Networks

- Computer Science, ICLR
- 2021

This paper proposes a novel end-to-end algorithm for training generative models that uses a non-minimax objective, simplifying model training, and approximates the Wasserstein-2 distance with Input Convex Neural Networks.

### Large Scale Optimal Transport

- Computer Science
- 2019

An implicit generative learning-based framework called SPOT (Scalable Push-forward of Optimal Transport) is proposed, which approximates the optimal transport plan by a pushforward of a reference distribution and casts the optimal transport problem as a minimax problem.