Corpus ID: 232170614

Semi-Discrete Optimal Transport: Hardness, Regularization and Numerical Solution

@article{Taskesen2021SemiDiscreteOT,
  title={Semi-Discrete Optimal Transport: Hardness, Regularization and Numerical Solution},
  author={Bahar Taskesen and Soroosh Shafieezadeh-Abadeh and Daniel Kuhn},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.06263}
}
Semi-discrete optimal transport problems, which evaluate the Wasserstein distance between a discrete and a generic (possibly non-discrete) probability measure, are believed to be computationally hard. Although such problems are ubiquitous in statistics, machine learning and computer vision, this perception has not yet received a theoretical justification. To fill this gap, we prove that computing the Wasserstein distance between a discrete probability measure supported on two points…
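The abstract concerns the Wasserstein distance between a discrete and a continuous measure. As a rough illustration only (not the paper's method), in one dimension the 1-Wasserstein distance equals the L1 distance between quantile functions, so a semi-discrete instance can be estimated by sampling; the two-atom discrete measure and the standard normal below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete measure: two atoms at -1 and +1 with equal weight 1/2.
atoms = np.array([-1.0, 1.0])

# Continuous measure: standard normal, approximated by n sorted samples.
n = 10_000
samples = np.sort(rng.standard_normal(n))

# In 1-D, W1 is the L1 distance between quantile functions: the discrete
# quantile function sends the lower half of the mass to -1, the rest to +1.
targets = np.where(np.arange(n) < n // 2, atoms[0], atoms[1])
w1 = np.mean(np.abs(samples - targets))
```

This Monte Carlo estimate works only in one dimension, where the optimal coupling is monotone; the hardness result in the paper concerns the general multivariate case.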


References

Showing 1–10 of 168 references
Stochastic Optimization for Large-scale Optimal Transport
A new class of stochastic optimization algorithms to cope with large-scale problems routinely encountered in machine learning applications, based on entropic regularization of the primal OT problem, which results in a smooth dual optimization problem that can be addressed with algorithms that have provably faster convergence.
Entropic regularization of continuous optimal transport problems
We analyze continuous optimal transport problems in the so-called Kantorovich form, where we seek a transport plan between two marginals that are probability measures on compact subsets of Euclidean space…
A Newton algorithm for semi-discrete optimal transport
The purpose of this article is to bridge the gap between theory and practice by introducing a damped Newton algorithm which is experimentally efficient, and by proving the global convergence of this algorithm with optimal rates.
Smooth and Sparse Optimal Transport
This paper explores regularizing the primal and dual OT formulations with a strongly convex term, which corresponds to relaxing the dual and primal constraints with smooth approximations, and shows how to incorporate squared 2-norm and group-lasso regularizations within that framework, leading to sparse and group-sparse transportation plans.
Regularized Discrete Optimal Transport
A generalization of discrete optimal transport that includes a regularity penalty and a relaxation of the bijectivity constraint is introduced, and an illustrative application of this discrete regularized transport to color transfer between images is shown.
Regularized Optimal Transport and the Rot Mover's Distance
A unified framework for smooth convex regularization of discrete optimal transport problems, which naturally generalizes a previously proposed regularization based on the Boltzmann-Shannon entropy related to the Kullback-Leibler divergence, and which can be solved with the Sinkhorn-Knopp algorithm.
Iterative Bregman Projections for Regularized Transportation Problems
It is shown that for many problems related to optimal transport, the set of linear constraints can be split into an intersection of a few simple constraints, for which the projections can be computed in closed form.
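Several of the references above concern entropy-regularized discrete OT solved by Sinkhorn-Knopp matrix scaling, which is exactly an alternating Bregman projection onto the two marginal constraints. A minimal NumPy sketch of those iterations (the marginals, cost matrix, and regularization strength `eps` are illustrative assumptions, and no log-domain stabilization is included):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iter=500):
    """Entropy-regularized OT via Sinkhorn-Knopp scaling iterations.

    a, b : source/target marginal weights (sum to 1)
    C    : cost matrix of shape (len(a), len(b))
    eps  : entropic regularization strength
    """
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)             # project onto the b-marginal constraint
        u = a / (K @ v)               # project onto the a-marginal constraint
    return u[:, None] * K * v[None, :]  # regularized transport plan

# Toy instance: three source atoms, two target atoms.
a = np.array([0.2, 0.3, 0.5])
b = np.array([0.4, 0.6])
C = np.abs(np.array([0.0, 1.0, 2.0])[:, None] - np.array([0.0, 2.0])[None, :])
P = sinkhorn(a, b, C)
# Row and column sums of P recover a and b up to numerical tolerance.
```

Smaller `eps` approximates unregularized OT more closely but slows convergence and risks numerical underflow in `K`, which is why log-domain variants are used in practice.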
Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations
It is demonstrated that distributionally robust optimization problems over Wasserstein balls can in fact be reformulated as finite convex programs, in many interesting cases even as tractable linear programs.
Entropic Approximation of Wasserstein Gradient Flows
  • G. Peyré
  • Computer Science, Mathematics
  • SIAM J. Imaging Sci.
  • 2015
A novel numerical scheme to approximate gradient flows for optimal transport using an entropic regularization of the transportation coupling, which trades the initial Wasserstein fidelity term for a Kullback-Leibler divergence that is easier to deal with numerically.
Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n)
We consider the stochastic approximation problem where a convex function has to be minimized, given only the knowledge of unbiased estimates of its gradients at certain points, a framework which…