Corpus ID: 238253041

On the Convergence of Projected Alternating Maximization for Equitable and Optimal Transport

@article{Huang2021OnTC,
  title={On the Convergence of Projected Alternating Maximization for Equitable and Optimal Transport},
  author={Minhui Huang and Shiqian Ma and Lifeng Lai},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.15030}
}
This paper studies the equitable and optimal transport (EOT) problem, which has many applications, such as fair division and optimal transport with multiple agents. For discrete distributions, the EOT problem can be formulated as a linear program (LP). Since this LP is prohibitively large for general LP solvers, Scetbon et al. [21] suggest perturbing the problem by adding an entropy regularization. They proposed a projected alternating maximization algorithm (PAM) to solve…
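To make the size issue concrete, the following hypothetical builder assembles one natural epigraph LP for a min-max EOT formulation (the agents' plans pi_1, ..., pi_N must sum to a coupling of the marginals a and b, and the largest agent cost is minimized) and hands it to a generic solver. The function name and the exact formulation are illustrative assumptions rather than the paper's specification:

```python
import numpy as np
from scipy.optimize import linprog

def eot_lp(costs, a, b):
    # Hypothetical epigraph LP for a min-max EOT formulation (an assumption,
    # not necessarily the paper's exact one):
    #   min_{t, pi}  t
    #   s.t.  <C_k, pi_k> <= t  for every agent k,
    #         sum_k pi_k has row marginals a and column marginals b,
    #         pi_k >= 0.
    # Variable vector: [t, vec(pi_1), ..., vec(pi_N)].
    N = len(costs)
    n, m = costs[0].shape
    nv = 1 + N * n * m
    c_obj = np.zeros(nv)
    c_obj[0] = 1.0                                    # minimize t
    A_ub = np.zeros((N, nv))
    A_ub[:, 0] = -1.0                                 # encodes <C_k, pi_k> - t <= 0
    for k, Ck in enumerate(costs):
        A_ub[k, 1 + k*n*m : 1 + (k+1)*n*m] = Ck.ravel()
    A_eq = np.zeros((n + m, nv))
    b_eq = np.concatenate([a, b])
    for k in range(N):
        for i in range(n):                            # row marginals of sum_k pi_k
            A_eq[i, 1 + k*n*m + i*m : 1 + k*n*m + (i+1)*m] = 1.0
        for j in range(m):                            # column marginals of sum_k pi_k
            A_eq[n + j, 1 + k*n*m + j : 1 + (k+1)*n*m : m] = 1.0
    # bounds=(0, None) also forces t >= 0, which is harmless for nonnegative costs
    return linprog(c_obj, A_ub=A_ub, b_ub=np.zeros(N),
                   A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
```

Even this tiny builder makes the scaling visible: the LP has 1 + N·n·m variables and dense constraint matrices, so supports with a few thousand points are already out of reach for generic solvers, which is the motivation for the entropic perturbation.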


Efficiently Escaping Saddle Points in Bilevel Optimization

An inexact NEgative-curvature-Originated-from-Noise algorithm (iNEON) is proposed: a purely first-order method that can escape saddle points and local minima in stochastic bilevel optimization.

References


Iterative Bregman Projections for Regularized Transportation Problems

It is shown that for many problems related to optimal transport, the set of linear constraints can be split into an intersection of a few simple constraint sets, for each of which the projection can be computed in closed form.
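To make the closed forms concrete, here is a minimal sketch of entropic OT solved by alternating KL projections, assuming a cost matrix C and strictly positive marginals a, b (this is the classical Sinkhorn recursion, which the Bregman-projection view recovers):

```python
import numpy as np

def sinkhorn_bregman(C, a, b, eps=0.05, iters=500):
    # Entropic OT as alternating KL (Bregman) projections: projecting onto the
    # row-marginal set {P1 = a} or the column-marginal set {P^T 1 = b} has the
    # closed form "rescale rows/columns to match the target marginal".
    K = np.exp(-C / eps)          # Gibbs kernel; the projections act by scaling it
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)           # closed-form KL projection onto {P1 = a}
        v = b / (K.T @ u)         # closed-form KL projection onto {P^T 1 = b}
    return u[:, None] * K * v[None, :]
```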

Equitable and Optimal Transport with Multiple Agents

This work introduces an extension of the optimal transport problem to the setting where multiple costs are involved, and provides an entropic regularization of that problem which leads to an algorithm faster than solving the standard linear program.

A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion

This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables, and proposes a generalized block coordinate descent method.
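As a concrete instance of the block structure, nonnegative matrix factorization minimizes 0.5*||X - W H||_F^2, which is nonconvex jointly but convex in W and in H separately. A minimal sketch using one projected gradient step per block (a simplification of, not a transcription of, the paper's prox-linear updates):

```python
import numpy as np

def bcd_nmf(X, rank, iters=200, seed=0):
    # Alternate over the two blocks (W, H): the objective 0.5*||X - W H||_F^2
    # is nonconvex jointly but convex in each block with the other held fixed.
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        # Block W: one projected gradient step with the block Lipschitz constant.
        G_W = (W @ H - X) @ H.T
        L_W = np.linalg.norm(H @ H.T, 2) + 1e-12   # spectral norm of the block Hessian
        W = np.maximum(0.0, W - G_W / L_W)
        # Block H: the symmetric update.
        G_H = W.T @ (W @ H - X)
        L_H = np.linalg.norm(W.T @ W, 2) + 1e-12
        H = np.maximum(0.0, H - G_H / L_H)
    return W, H
```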

A Riemannian Block Coordinate Descent Method for Computing the Projection Robust Wasserstein Distance

A Riemannian block coordinate descent (RBCD) method is proposed, based on a novel reformulation of the regularized max-min problem over the Stiefel manifold; it has very low per-iteration complexity and is hence suitable for large-scale problems.
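The manifold ingredient can be sketched in isolation. A hypothetical helper for one Riemannian ascent step on the Stiefel manifold St(d, k), assuming the metric inherited from the embedding and a QR retraction (this is only the per-iteration manifold step, not the full RBCD method):

```python
import numpy as np

def stiefel_step(U, euclid_grad, step):
    # Project the Euclidean gradient onto the tangent space of St(d, k) at U,
    # then retract the updated point back onto the manifold via QR.
    sym = 0.5 * (U.T @ euclid_grad + euclid_grad.T @ U)
    riem_grad = euclid_grad - U @ sym            # tangent-space projection
    Q, R = np.linalg.qr(U + step * riem_grad)    # QR retraction
    return Q * np.sign(np.diag(R))               # fix column signs for a canonical factor
```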

Iteration complexity analysis of block coordinate descent methods

This paper unifies these algorithms under the so-called block successive upper-bound minimization (BSUM) framework, and shows that for a broad class of multi-block nonsmooth convex problems, these algorithms achieve a global sublinear iteration complexity of O(1/r), where r is the iteration index.
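A minimal BSUM instance, assuming a convex quadratic f(x) = 0.5*x'Ax - b'x with A positive semidefinite: over each block, the algorithm minimizes a quadratic upper bound built from that block's curvature, which reduces to a scaled block gradient step. The problem choice and block partition are illustrative:

```python
import numpy as np

def bsum_quadratic(A, b, blocks, iters=500):
    # f(x) = 0.5*x'Ax - b'x; `blocks` is a list of index arrays partitioning [0, n).
    x = np.zeros(len(b))
    # Per-block curvature bounds: spectral norm of each diagonal block of A.
    L = [max(np.linalg.norm(A[np.ix_(blk, blk)], 2), 1e-12) for blk in blocks]
    for _ in range(iters):
        for blk, Li in zip(blocks, L):
            g = A[blk] @ x - b[blk]   # block gradient at the current point
            x[blk] -= g / Li          # exact minimizer of the block's quadratic upper bound
    return x
```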

Projection Robust Wasserstein Distance and Riemannian Optimization

A first step toward a computational theory of the PRW distance is provided, along with links between optimal transport and Riemannian optimization.

Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent

To the best of the authors' knowledge, this is the first Hessian-free algorithm to find a second-order stationary point faster than GD, and also the first single-loop algorithm with a faster rate than GD even in the setting of finding a first-order stationary point.
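A drastically simplified sketch of the perturbation mechanism, assuming access to a gradient oracle: run AGD and, whenever the gradient norm is small (a candidate saddle point), inject a small random perturbation. The actual algorithm adds a negative-curvature-exploitation step and careful parameter schedules that are omitted here:

```python
import numpy as np

def perturbed_agd(grad, x0, eta=1e-3, theta=0.9, g_thresh=1e-3, radius=1e-2,
                  iters=10_000, seed=0):
    # Simplified sketch: standard AGD momentum, plus a small random perturbation
    # whenever the gradient is small (i.e., near a possible saddle point).
    rng = np.random.default_rng(seed)
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + theta * (x - x_prev)       # momentum (look-ahead) point
        g = grad(y)
        if np.linalg.norm(g) < g_thresh:   # near-stationary: perturb to escape
            y = y + radius * rng.standard_normal(y.shape)
            g = grad(y)
        x_prev, x = x, y - eta * g         # gradient step from the look-ahead point
    return x
```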

Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration

This paper demonstrates that general optimal transport distances can be approximated in near-linear time by Cuturi's Sinkhorn Distances, and directly suggests a new greedy coordinate descent algorithm, Greenkhorn, with the same theoretical guarantees.
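A minimal sketch of the greedy rule, assuming strictly positive marginals a, b: score every row and column by the Bregman divergence between its target and current marginal, then rescale only the worst violator. This unoptimized version recomputes the full plan each iteration for clarity:

```python
import numpy as np

def greenkhorn(C, a, b, eps=0.05, iters=5000):
    # Greedy Sinkhorn: instead of rescaling all rows and then all columns,
    # rescale only the single row or column whose marginal is furthest from target.
    K = np.exp(-C / eps)
    u = np.ones(len(a))
    v = np.ones(len(b))
    def rho(p, q):                      # divergence used to score violations
        return q - p + p * np.log(p / q)
    for _ in range(iters):
        P = u[:, None] * K * v[None, :]
        r, c = P.sum(1), P.sum(0)
        i, j = np.argmax(rho(a, r)), np.argmax(rho(b, c))
        if rho(a, r)[i] >= rho(b, c)[j]:
            u[i] = a[i] / (K[i] @ v)    # fix row i's marginal exactly
        else:
            v[j] = b[j] / (K[:, j] @ u) # fix column j's marginal exactly
    return u[:, None] * K * v[None, :]
```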

Introductory Lectures on Convex Optimization - A Basic Course

It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization, that it became more and more common for new methods to be provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments.

A Distributional Perspective on Reinforcement Learning

This paper argues for the fundamental importance of the value distribution (the distribution of the random return received by a reinforcement learning agent) and designs a new algorithm that applies Bellman's equation to the learning of approximate value distributions.
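The core computational step behind such algorithms can be sketched as a categorical Bellman backup on a fixed support, in the style of C51: shift the support by r + gamma*z, then project back onto the grid by splitting each atom's mass between neighboring grid points. Names and the single-transition setting are illustrative:

```python
import numpy as np

def categorical_bellman_update(probs, reward, gamma, z, done=False):
    # One distributional Bellman backup on a fixed support z: apply r + gamma*z
    # to the atoms, clip to the support's range, then project the shifted
    # categorical distribution back onto z by linear mass splitting.
    v_min, v_max = z[0], z[-1]
    dz = z[1] - z[0]
    tz = np.clip(reward + (0.0 if done else gamma) * z, v_min, v_max)
    out = np.zeros_like(probs)
    pos = (tz - v_min) / dz                       # fractional grid index of each atom
    lo = np.floor(pos).astype(int)
    hi = np.ceil(pos).astype(int)
    for p, l, h, f in zip(probs, lo, hi, pos):
        if l == h:                                # atom lands exactly on a grid point
            out[l] += p
        else:                                     # split mass between the two neighbors
            out[l] += p * (h - f)
            out[h] += p * (f - l)
    return out
```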