• Publications
Geometrically convergent distributed optimization with uncoordinated step-sizes
TLDR
It is shown that the ATC variant of the DIGing algorithm converges geometrically fast even if the step-sizes differ among the agents, which implies that the ATC structure can accelerate convergence compared to the distributed gradient descent (DGD) structure.
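As a rough illustration of this mechanism (not the paper's implementation), the following NumPy sketch runs an ATC-style gradient-tracking update in which each agent keeps its own step-size; the ring topology, the quadratic local objectives, and the step-size values are assumptions chosen for the example.

    import numpy as np

    # Illustrative setup: each agent i holds a quadratic f_i(x) = 0.5 * ||A_i x - b_i||^2.
    rng = np.random.default_rng(0)
    n_agents, d = 5, 3
    A = rng.standard_normal((n_agents, 4, d))
    b = rng.standard_normal((n_agents, 4))

    def grad(i, x):
        # Local gradient of f_i at x.
        return A[i].T @ (A[i] @ x - b[i])

    # Doubly stochastic mixing matrix for an assumed ring graph.
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        W[i, i] = 0.5
        W[i, (i - 1) % n_agents] = 0.25
        W[i, (i + 1) % n_agents] = 0.25

    alpha = rng.uniform(0.005, 0.02, n_agents)  # uncoordinated, agent-specific step-sizes

    x = np.zeros((n_agents, d))                                # local iterates
    y = np.array([grad(i, x[i]) for i in range(n_agents)])     # gradient trackers

    for _ in range(2000):
        g_old = np.array([grad(i, x[i]) for i in range(n_agents)])
        # Adapt-then-combine: take the local gradient-tracking step, then mix with neighbors.
        x = W @ (x - alpha[:, None] * y)
        g_new = np.array([grad(i, x[i]) for i in range(n_agents)])
        y = W @ (y + g_new - g_old)

    x_opt = np.linalg.lstsq(A.reshape(-1, d), b.reshape(-1), rcond=None)[0]
    print("max distance to the global minimizer:", np.linalg.norm(x - x_opt, axis=1).max())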
Fast Convergence Rates for Distributed Non-Bayesian Learning
TLDR
This work proposes a distributed algorithm and establishes consistency, as well as a nonasymptotic, explicit, and geometric convergence rate for the concentration of the beliefs around the set of optimal hypotheses.
A Dual Approach for Optimal Algorithms in Distributed Optimization over Networks
TLDR
This work proposes distributed algorithms that achieve the same optimal rates as their centralized counterparts (up to constant and logarithmic factors), with an additional optimal cost related to the spectral properties of the network.
Nonasymptotic convergence rates for cooperative learning over time-varying directed graphs
TLDR
This work proposes local learning dynamics that combine Bayesian updates at each node with a local aggregation rule for the agents' private signals; these dynamics drive all agents to the set of hypotheses that best explain the data collected at all nodes, as long as the sequence of interconnection graphs is uniformly strongly connected.
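For intuition only, a common form of such dynamics (geometric averaging of neighbors' beliefs in log space followed by a Bayesian update with each agent's private signal) can be sketched as below; the likelihood models, mixing weights, and signal alphabet are illustrative assumptions, and the graph here is fixed rather than time-varying.

    import numpy as np

    rng = np.random.default_rng(1)
    n_agents, n_hyp = 4, 3

    # Assumed likelihood models: lik[i, theta, s] is the probability agent i assigns
    # to observing signal s (binary alphabet) under hypothesis theta.
    lik = rng.uniform(0.2, 0.8, (n_agents, n_hyp, 2))
    lik /= lik.sum(axis=2, keepdims=True)

    true_theta = 0
    A = np.full((n_agents, n_agents), 1.0 / n_agents)  # assumed mixing weights (complete graph)
    beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)  # uniform priors

    for _ in range(150):
        signals = [rng.choice(2, p=lik[i, true_theta]) for i in range(n_agents)]
        # Aggregate neighbors' beliefs geometrically (average in log space), then
        # perform a Bayesian update with the likelihood of the private signal.
        log_agg = A @ np.log(np.clip(beliefs, 1e-300, None))
        beliefs = np.exp(log_agg) * np.array([lik[i, :, signals[i]] for i in range(n_agents)])
        beliefs /= beliefs.sum(axis=1, keepdims=True)

    print(np.round(beliefs, 3))  # mass concentrates on the hypotheses that best explain the data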
On the Complexity of Approximating Wasserstein Barycenters
TLDR
The complexity of approximating the Wasserstein barycenter of m discrete measures, or histograms of size n, is studied by contrasting two alternative approaches that use entropic regularization, and a novel proximal-IBP algorithm, which can be seen as a proximal gradient method, is proposed.
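As background for the entropic-regularization approach, a plain iterative-Bregman-projections (IBP) loop for the regularized barycenter can be sketched as below; this is the generic IBP scheme, not the proximal-IBP method proposed in the paper, and the grid, cost, regularization value, and weights are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    m, n = 3, 50                                   # m histograms of size n (assumed sizes)
    p = rng.random((m, n))
    p /= p.sum(axis=1, keepdims=True)
    w = np.full(m, 1.0 / m)                        # barycenter weights

    # Entropic regularization of a squared-distance cost on an assumed 1-D grid.
    grid = np.linspace(0.0, 1.0, n)
    C = (grid[:, None] - grid[None, :]) ** 2
    gamma = 0.01                                   # regularization parameter
    K = np.exp(-C / gamma)

    v = np.ones((m, n))
    for _ in range(300):
        u = p / (K @ v.T).T                        # enforce the fixed marginals p_i
        Ktu = (K.T @ u.T).T
        q = np.exp(w @ np.log(Ktu))                # weighted geometric mean -> barycenter iterate
        v = q[None, :] / Ktu                       # enforce the shared barycenter marginal

    print("barycenter mass:", q.sum())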
Optimal Tensor Methods in Smooth Convex and Uniformly Convex Optimization
TLDR
A new tensor method is proposed, which closes the gap between the lower and upper iteration complexity bounds for convex optimization problems whose objective function has a Lipschitz-continuous $p$-th order derivative, and it is shown that in practice it is faster than the best known accelerated tensor method.
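Schematically, and in the standard notation for such methods rather than the paper's own statement, one iteration of a $p$-th order tensor method minimizes a regularized Taylor model of the objective:

    x_{k+1} \in \arg\min_{y} \Big\{ \sum_{i=0}^{p} \tfrac{1}{i!}\, D^{i} f(x_k)[y - x_k]^{i} + \tfrac{H}{(p+1)!}\, \| y - x_k \|^{p+1} \Big\},

where $H$ is a regularization parameter tied to the Lipschitz constant of the $p$-th derivative; for $p = 1$ this reduces to an ordinary gradient step.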
Decentralize and Randomize: Faster Algorithm for Wasserstein Barycenters
TLDR
A novel accelerated primal-dual stochastic gradient method is developed and applied to the decentralized distributed optimization setting to obtain a new algorithm for the distributed semi-discrete regularized Wasserstein barycenter problem.
Optimal Algorithms for Distributed Optimization
TLDR
The results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors) with an additional cost related to the spectral gap of the interaction matrix.
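A minimal sketch of this dual mechanism (assuming quadratic local objectives so that the inner minimization has a closed form; the ring Laplacian, step-size rule, and data are illustrative, and this is not the paper's exact algorithm) is:

    import numpy as np

    rng = np.random.default_rng(3)
    n_agents, d = 5, 2
    b = rng.standard_normal((n_agents, d))     # local data: f_i(x) = 0.5 * ||x - b_i||^2

    # Graph Laplacian of an assumed ring; its null space is the consensus subspace.
    L = 2 * np.eye(n_agents)
    for i in range(n_agents):
        L[i, (i - 1) % n_agents] -= 1
        L[i, (i + 1) % n_agents] -= 1

    def x_star(lam):
        # Closed-form Lagrangian minimizer for quadratic f_i: x_i = b_i - (L lam)_i.
        return b - L @ lam

    step = 1.0 / np.linalg.eigvalsh(L).max() ** 2    # 1 / Lipschitz constant of the dual gradient
    lam = mu = np.zeros((n_agents, d))
    for k in range(1000):
        grad_q = L @ x_star(mu)                      # one round of neighbor communication
        lam_new = mu + step * grad_q                 # gradient ascent on the concave dual
        mu = lam_new + (k / (k + 3)) * (lam_new - lam)   # Nesterov momentum
        lam = lam_new

    print("deviation from the network average:",
          np.linalg.norm(x_star(lam) - b.mean(axis=0)))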
Distributed Computation of Wasserstein Barycenters Over Networks
TLDR
An estimate for the minimum number of communication rounds required for the proposed method to achieve arbitrary relative precision both in the optimality of the solution and the consensus among all agents for undirected fixed networks is provided.
...
...