Publications
Computational Optimal Transport: Complexity by Accelerated Gradient Descent Is Better Than by Sinkhorn's Algorithm
TLDR: The first algorithm analyzed has better dependence on $\varepsilon$ in the complexity bound; it is also not specific to entropic regularization and can solve the OT problem with different regularizers.
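For context, here is a minimal NumPy sketch of the classical Sinkhorn iteration for entropically regularized OT, i.e., the baseline against which the accelerated-gradient approach is compared; the names `C`, `a`, `b`, and `gamma` are illustrative placeholders rather than the paper's notation.

```python
import numpy as np

def sinkhorn(C, a, b, gamma=0.1, n_iter=1000, tol=1e-9):
    """Sinkhorn iterations for entropy-regularized optimal transport.

    C     : (n, m) cost matrix
    a, b  : source / target marginals (probability vectors)
    gamma : entropic regularization strength
    """
    K = np.exp(-C / gamma)              # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u_prev = u
        u = a / (K @ v)                 # alternating diagonal scalings
        v = b / (K.T @ u)
        if np.max(np.abs(u - u_prev)) < tol:
            break
    return u[:, None] * K * v[None, :]  # transport plan diag(u) K diag(v)
```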
A dual approach for optimal algorithms in distributed optimization over networks
TLDR: This work studies dual-based algorithms for distributed convex optimization problems over networks, and proposes distributed algorithms that achieve the same optimal rates as their centralized counterparts (up to constant and logarithmic factors), with an additional optimal cost related to the spectral properties of the network.
Decentralize and Randomize: Faster Algorithm for Wasserstein Barycenters
TLDR: A novel accelerated primal-dual stochastic gradient method is developed and applied to the decentralized distributed optimization setting to obtain a new algorithm for the distributed semi-discrete regularized Wasserstein barycenter problem.
Stochastic Intermediate Gradient Method for Convex Problems with Stochastic Inexact Oracle
TLDR: The first method is an extension of the Intermediate Gradient Method proposed by Devolder, Glineur and Nesterov for problems with a deterministic inexact oracle; it can be applied to problems with a composite objective function, handles both deterministic and stochastic inexactness of the oracle, and allows a non-Euclidean setup.
On the Complexity of Approximating Wasserstein Barycenters
TLDR: The complexity of approximating the Wasserstein barycenter of $m$ discrete measures, or histograms of size $n$, is studied by contrasting two alternative approaches that use entropic regularization, and a novel proximal-IBP algorithm is proposed, which can be seen as a proximal gradient method.
Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
TLDR: The aim of this paper is to derive the convergence rate of the proposed methods and to determine a noise level which does not significantly affect the convergence rate.
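As a rough illustration of the multi-point (two-point) zeroth-order setting such bandit methods work in, below is a minimal sketch of a symmetric two-point gradient estimator built from noisy function values; the smoothing radius `tau` and the random unit direction are generic assumptions, not the paper's exact construction.

```python
import numpy as np

def two_point_grad_estimate(f, x, tau=1e-3, rng=None):
    """Symmetric two-point zeroth-order gradient estimate of f at x.

    Uses only two (possibly noisy) function evaluations per call,
    as in multi-point bandit / derivative-free schemes.
    """
    rng = np.random.default_rng() if rng is None else rng
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)          # random direction on the unit sphere
    n = x.size
    return n * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e
```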
Mirror Descent and Convex Optimization Problems with Non-smooth Inequality Constraints
TLDR: One of its focuses is to propose a Mirror Descent method with adaptive stepsizes and an adaptive stopping rule for problems whose objective function is not Lipschitz, e.g., a quadratic function.
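For reference, a minimal sketch of one mirror descent step with the entropy mirror map on the probability simplex; the norm-based adaptive stepsize shown here is only an illustrative choice and does not reproduce the paper's specific adaptive rule or stopping criterion.

```python
import numpy as np

def mirror_descent_step(x, g, step=None):
    """One mirror descent step on the probability simplex.

    Uses the entropy mirror map (exponentiated-gradient update).
    x    : current point on the simplex
    g    : subgradient of the objective at x
    step : stepsize; defaults to an adaptive 1/||g|| choice (illustrative)
    """
    if step is None:
        step = 1.0 / (np.linalg.norm(g) + 1e-12)
    w = x * np.exp(-step * g)       # multiplicative-weights update
    return w / w.sum()              # renormalize back onto the simplex
```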
Optimal Tensor Methods in Smooth Convex and Uniformly Convex Optimization
TLDR: A new tensor method is proposed, which closes the gap between the lower and upper iteration complexity bounds for convex optimization problems with the objective function having Lipschitz-continuous $p$-th order derivative, and it is shown that in practice it is faster than the best known accelerated tensor method.
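To make the setting concrete, the basic (unaccelerated) step of a $p$-th order tensor method minimizes a regularized $p$-th Taylor model of the objective; the following standard formulation is only a sketch of the framework, not the accelerated scheme proposed in the paper:
$$x_{k+1} \in \arg\min_{y} \Big\{ \sum_{i=0}^{p} \frac{1}{i!}\, D^{i} f(x_k)[y-x_k]^{i} + \frac{M_p}{(p+1)!}\,\|y-x_k\|^{p+1} \Big\},$$
where $M_p$ is proportional to the Lipschitz constant of the $p$-th derivative of $f$.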
Accelerated Alternating Minimization, Accelerated Sinkhorn's Algorithm and Accelerated Iterative Bregman Projections.
Motivated by the alternating minimization nature of Sinkhorn's algorithm and the theoretically faster convergence of accelerated gradient methods, in this paper we propose a way to combine …
Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
TLDR: By using a mini-batching technique, it is shown that the proposed methods with a stochastic oracle can be additionally parallelized at each node; the methods can be applied to many data science problems and inverse problems.