The first algorithm analyzed has a better dependence on $\varepsilon$ in the complexity bound; it is also not specific to entropic regularization and can solve the OT problem with other regularizers.
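
For context, the entropic-regularized OT problem that such complexity bounds refer to is typically stated as follows (a standard formulation, not quoted from the paper):

$$\min_{\pi \in \Pi(p,q)} \langle C, \pi \rangle + \gamma \sum_{i,j} \pi_{ij} \ln \pi_{ij}, \qquad \Pi(p,q) = \{\pi \in \mathbb{R}_{+}^{n \times n} : \pi \mathbf{1} = p,\ \pi^{\top} \mathbf{1} = q\},$$

where $C$ is the ground cost matrix, $p, q$ are the marginal histograms, and $\gamma > 0$ is the regularization parameter; replacing the entropy term with another strongly convex regularizer gives the more general setting the first algorithm covers.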

This work studies dual-based algorithms for distributed convex optimization problems over networks, and proposes distributed algorithms that achieve the same optimal rates as their centralized counterparts (up to constant and logarithmic factors), with an additional optimal cost related to the spectral properties of the network.
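
As a point of reference for this "additional optimal cost" (a bound standard in the decentralized optimization literature, stated here as an illustration rather than the paper's exact result): for $L$-smooth, $\mu$-strongly convex objectives, the optimal number of communication rounds of decentralized methods scales as

$$O\!\left(\sqrt{\chi}\,\sqrt{\frac{L}{\mu}}\,\ln\frac{1}{\varepsilon}\right),$$

where $\chi$ is the condition number (ratio of the extreme nonzero eigenvalues) of the gossip matrix encoding the network, so the extra price of decentralization is the $\sqrt{\chi}$ factor.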

A novel accelerated primal-dual stochastic gradient method is developed and applied to the decentralized distributed optimization setting to obtain a new algorithm for the distributed semi-discrete regularized Wasserstein barycenter problem.

The first method extends the Intermediate Gradient Method of Devolder, Glineur, and Nesterov for problems with a deterministic inexact oracle; it can be applied to problems with a composite objective function, handles both deterministic and stochastic inexactness of the oracle, and allows a non-Euclidean setup.
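
For reference, the Devolder–Glineur–Nesterov $(\delta, L)$-inexact oracle (as standardly defined) returns at each point $x$ a pair $(f_\delta(x), g_\delta(x))$ such that, for all $y$,

$$0 \le f(y) - f_\delta(x) - \langle g_\delta(x), y - x \rangle \le \frac{L}{2}\|y - x\|^2 + \delta;$$

roughly speaking, the stochastic setting mentioned above further replaces $g_\delta(x)$ with an unbiased random estimate of this inexact gradient.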

The complexity of approximating the Wasserstein barycenter of $m$ discrete measures, or histograms of size $n$, is studied by contrasting two alternative approaches that use entropic regularization, and a novel proximal-IBP algorithm is proposed, which can be interpreted as a proximal gradient method.
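
Below is a minimal NumPy sketch of the plain Iterative Bregman Projections (IBP) baseline to which proximal-IBP adds proximal steps; the uniform weights and variable names are illustrative, the histograms are assumed strictly positive, and the proximal modification itself is omitted:

```python
import numpy as np

def ibp_barycenter(q, C, gamma, n_iter=1000):
    """Plain IBP for the entropic Wasserstein barycenter of m histograms.

    q     : (m, n) array of strictly positive histograms
    C     : (n, n) ground cost matrix shared by all measures
    gamma : entropic regularization parameter
    """
    m, n = q.shape
    K = np.exp(-C / gamma)            # Gibbs kernel
    v = np.ones((m, n))
    w = np.full(m, 1.0 / m)           # uniform barycenter weights (illustrative)
    for _ in range(n_iter):
        # Bregman projection: the k-th plan diag(u[k]) K diag(v[k]) gets marginal q[k]
        u = q / (v @ K.T)             # u[k] = q[k] / (K v[k])
        # Weighted geometric mean enforces a common second marginal: the barycenter b
        b = np.exp(np.sum(w[:, None] * np.log(v * (u @ K)), axis=0))
        v = b[None, :] / (u @ K)      # v[k] = b / (K^T u[k])
    return b
```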

The aim of this paper is to derive the convergence rate of the proposed methods and to determine a noise level that does not significantly affect the convergence rate.

One of its focuses is to propose a Mirror Descent method with adaptive stepsizes and an adaptive stopping rule for problems whose objective function is not Lipschitz, e.g., is quadratic.
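
A minimal sketch of entropic mirror descent on the probability simplex with a stepsize that adapts to the observed gradient norm, so no global Lipschitz constant is required; the stepsize rule and stopping test here are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def adaptive_mirror_descent(grad, x0, n_iter=1000, tol=1e-6):
    """Entropic mirror descent on the simplex with norm-adaptive stepsizes.

    grad : callable returning a (sub)gradient of the objective at x
    x0   : starting point on the probability simplex
    """
    x = x0.copy()
    x_avg, weight_sum = np.zeros_like(x), 0.0
    for t in range(1, n_iter + 1):
        g = grad(x)
        g_norm = np.max(np.abs(g))         # dual (sup) norm for the simplex setup
        if g_norm < tol:                   # illustrative stopping rule
            break
        eta = 1.0 / (g_norm * np.sqrt(t))  # adaptive step: no Lipschitz constant used
        x = x * np.exp(-eta * g)           # multiplicative (entropic) update
        x /= x.sum()
        x_avg += eta * x                   # stepsize-weighted averaging
        weight_sum += eta
    return x_avg / weight_sum if weight_sum > 0 else x
```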

A new tensor method is proposed that closes the gap between the lower and upper iteration complexity bounds for convex optimization problems whose objective has a Lipschitz-continuous $p$-th order derivative, and it is shown to be faster in practice than the best previously known accelerated tensor method.
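
For orientation (these are the bounds standardly cited in this line of work, stated here as background since the summary omits them): for such problems, Nesterov-type accelerated tensor methods need $O(\varepsilon^{-1/(p+1)})$ iterations, while the lower bound is $\Omega(\varepsilon^{-2/(3p+1)})$; closing the gap means attaining

$$\tilde{O}\!\left(\varepsilon^{-\frac{2}{3p+1}}\right)$$

iterations, i.e., matching the lower bound up to logarithmic factors.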

Motivated by the alternating-minimization nature of Sinkhorn's algorithm and the theoretically faster convergence of accelerated gradient methods, in this paper we propose a way to combine…
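
For reference, the alternating-minimization structure mentioned here is that of the standard Sinkhorn iterations, sketched below in NumPy (the combination with acceleration is the paper's contribution and is not reproduced here):

```python
import numpy as np

def sinkhorn(C, p, q, gamma, n_iter=1000):
    """Standard Sinkhorn iterations for entropic OT between histograms p and q.

    Each half-step exactly minimizes the dual objective over one scaling
    variable, which is the alternating-minimization structure referred to above.
    """
    K = np.exp(-C / gamma)                # Gibbs kernel
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iter):
        u = p / (K @ v)                   # exact step in u: row marginals -> p
        v = q / (K.T @ u)                 # exact step in v: column marginals -> q
    return u[:, None] * K * v[None, :]    # transport plan diag(u) K diag(v)
```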

By using a mini-batching technique, it is shown that the proposed methods with a stochastic oracle can be further parallelized at each node, which makes them applicable to many data science and inverse problems.
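
A minimal sketch of the mini-batching idea at a single node: the stochastic gradient is averaged over a batch, and the batch elements can be computed in parallel; the thread-pool parallelism and function names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def minibatch_gradient(stoch_grad, x, batch_size, rng, n_workers=4):
    """Average batch_size i.i.d. stochastic gradients, computed in parallel.

    stoch_grad : callable (x, seed) -> unbiased gradient estimate at x
    rng        : numpy.random.Generator supplying independent seeds
    """
    seeds = rng.integers(0, 2**31, size=batch_size)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        grads = list(pool.map(lambda s: stoch_grad(x, s), seeds))
    # Averaging reduces the estimator's variance by a factor of batch_size
    return np.mean(grads, axis=0)
```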