# Low-rank Optimal Transport: Approximation, Statistics and Debiasing

```bibtex
@article{Scetbon2022LowrankOT,
  title   = {Low-rank Optimal Transport: Approximation, Statistics and Debiasing},
  author  = {Meyer Scetbon and Marco Cuturi},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2205.12365}
}
```
• Published 24 May 2022
• Computer Science
• ArXiv
The matching principles behind optimal transport (OT) play an increasingly important role in machine learning, a trend which can be observed when OT is used to disambiguate datasets in applications (e.g. single-cell genomics) or used to improve more complex methods (e.g. balanced attention in transformers or self-supervised learning). To scale to more challenging problems, there is a growing consensus that OT requires solvers that can operate on millions, not thousands, of points. The low-rank…
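For context, the entropic solvers that low-rank methods aim to outscale are variants of Sinkhorn's matrix-scaling algorithm. The following is a minimal textbook NumPy sketch of entropic OT between two discrete measures, not the paper's low-rank solver; the function name and parameters are illustrative:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iters=1000):
    """Entropy-regularized OT between histograms a, b with cost matrix C.

    Alternates matrix-scaling updates so the coupling P matches the
    prescribed row marginals a and column marginals b.
    """
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        v = b / (K.T @ u)         # fit column marginals
        u = a / (K @ v)           # fit row marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))
y = rng.normal(size=(6, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared Euclidean cost
a = np.full(5, 1 / 5)
b = np.full(6, 1 / 6)
P = sinkhorn(a, b, C)
print(P.shape)   # (5, 6) coupling with total mass 1
```

Each iteration costs O(nm) because of the dense kernel-vector products, which is exactly the quadratic bottleneck that motivates low-rank factorizations of the coupling.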

## References

Showing 1–10 of 30 references.
Extensions to McDiarmid's inequality when differences are bounded with high probability
The method of independent bounded differences (McDiarmid, 1989) gives large-deviation concentration bounds for multivariate functions in terms of the maximum effect that changing one coordinate of the input can have on the output.
Scikit-learn: Machine Learning in Python
• Computer Science
J. Mach. Learn. Res.
• 2011
Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language.
Linear-Time Gromov Wasserstein Distances using Low Rank Couplings and Costs
• Computer Science
ICML
• 2022
This work shows how a recent variant of the OT problem that restricts the set of admissible couplings to those having a low-rank factorization is remarkably well suited to the resolution of GW, and shows that this approach is not only able to compute a stationary point of the GW problem in time O(n²), but is also uniquely positioned to benefit from the knowledge that the initial cost matrices are low-rank.
Sample Complexity of Sinkhorn Divergences
• Computer Science
AISTATS
• 2019
A bound is derived on the approximation error made with Sinkhorn divergences (SDs) when approximating OT, as a function of the regularizer $\varepsilon$; it is proved that the optimizers of regularized OT are bounded in a Sobolev (RKHS) ball independent of the two measures, and the first sample-complexity bound for SDs is provided.
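The Sinkhorn divergence debiases regularized OT by subtracting the two self-transport terms. A self-contained NumPy sketch follows; it uses the transport cost ⟨P, C⟩ as the regularized quantity (published definitions differ on whether entropy terms are included), and all function names are illustrative:

```python
import numpy as np

def ot_eps(x, y, eps=0.5, iters=300):
    """Regularized OT cost <P, C> between uniform point clouds x and y."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    a = np.full(len(x), 1 / len(x))
    b = np.full(len(y), 1 / len(y))
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]
    return (P * C).sum()

def sinkhorn_divergence(x, y, eps=0.5):
    # Debiasing: subtract half of each self-transport term.
    return ot_eps(x, y, eps) - 0.5 * (ot_eps(x, x, eps) + ot_eps(y, y, eps))

rng = np.random.default_rng(1)
x = rng.normal(size=(30, 2))
y = rng.normal(loc=2.0, size=(30, 2))
print(sinkhorn_divergence(x, x))      # 0.0: the self terms cancel exactly
print(sinkhorn_divergence(x, y) > 0)  # positive for well-separated clouds
```

The cancellation on identical inputs is the "debiasing" that removes the entropic bias of OT_ε toward blurred couplings.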
Optimal Transport Tools (OTT): A JAX Toolbox for all things Wasserstein
• Computer Science
ArXiv
• 2022
OTT-JAX is a Python toolbox that can solve optimal transport problems between point clouds and histograms, and builds on various JAX features such as automatic and custom reverse-mode differentiation, vectorization, just-in-time compilation, and accelerator support.
Approximating Optimal Transport via Low-rank and Sparse Factorization
• Computer Science
ArXiv
• 2021
A novel approximation for OT is proposed, in which the transport plan can be decomposed into the sum of a low-rank matrix and a sparse one, and an augmented Lagrangian method is designed to efficiently calculate the transport plan.
Stochastic Mirror Descent: Convergence Analysis and Adaptive Variants via the Mirror Stochastic Polyak Stepsize
• Computer Science
ArXiv
• 2021
New convergence guarantees for SMD with a constant stepsize are provided, and a new adaptive stepsize scheme, the mirror stochastic Polyak stepsize (mSPS), is proposed that remains both practical and efficient for modern machine learning applications while inheriting the benefits of mirror descent.
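Mirror descent with the entropy mirror map reduces to exponentiated-gradient updates that keep iterates on the probability simplex. The sketch below uses a constant stepsize for simplicity; the mSPS scheme instead adapts the stepsize from the gap f(x_t) − f*. The function and setup are illustrative, not the paper's experiments:

```python
import numpy as np

def mirror_descent_simplex(grad_f, x0, steps, eta):
    """Mirror descent with the entropy mirror map (exponentiated gradient).

    The multiplicative update x <- x * exp(-eta * grad) / Z keeps iterates
    on the probability simplex, which is why this mirror map is chosen.
    """
    x = x0.copy()
    for _ in range(steps):
        x = x * np.exp(-eta * grad_f(x))
        x = x / x.sum()
    return x

# Minimize a linear objective <c, x> over the simplex; the minimum sits
# at the vertex of the smallest cost coordinate.
c = np.array([3.0, 1.0, 2.0])
x0 = np.full(3, 1 / 3)
x = mirror_descent_simplex(lambda z: c, x0, steps=200, eta=0.5)
print(x.round(3))   # mass concentrates on coordinate 1
```

Low-rank OT solvers use exactly this kind of mirror-descent step, since their factor updates must also stay on (scaled) simplices.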
Faster Wasserstein Distance Estimation with the Sinkhorn Divergence
• Computer Science
NeurIPS
• 2020
An estimator based on Richardson extrapolation of the Sinkhorn divergence is proposed which enjoys improved statistical and computational efficiency guarantees, under a condition on the regularity of the approximation error, which is in particular satisfied for Gaussian densities.
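Richardson extrapolation cancels the leading O(ε) bias by combining evaluations at ε and ε/2. A toy numeric illustration, with a synthetic function standing in for the ε-regularized cost (the coefficients are made up for the demo):

```python
def f(eps):
    # Stand-in for an eps-regularized quantity with first-order bias:
    # f(eps) = f0 + c*eps + d*eps**2, converging to f0 = 1 as eps -> 0.
    return 1.0 + 0.7 * eps + 0.3 * eps**2

eps = 0.2
plain = f(eps)
richardson = 2 * f(eps / 2) - f(eps)   # cancels the O(eps) term

print(abs(plain - 1.0))       # first-order error, ~0.15
print(abs(richardson - 1.0))  # second-order error, ~0.006
```

Because 2·f(ε/2) − f(ε) = f₀ − d·ε²/2, the extrapolated estimate trades a first-order bias for a second-order one, which is the source of the improved efficiency guarantees.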