A fast distributed proximal-gradient method

@inproceedings{chen2012fast,
  title={A fast distributed proximal-gradient method},
  author={Annie I. Chen and Asuman E. Ozdaglar},
  booktitle={2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)},
  year={2012}
}
  • Annie I. Chen, A. Ozdaglar
  • Published 1 October 2012
  • Computer Science, Mathematics
We present a distributed proximal-gradient method for optimizing the average of convex functions, each of which is the private local objective of an agent in a network with time-varying topology. The local objectives have distinct differentiable components, but they share a common nondifferentiable component, which has a favorable structure suitable for effective computation of the proximal operator. In our method, each agent iteratively updates its estimate of the global minimum by optimizing… 
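The proximal-gradient update the abstract describes can be illustrated with a minimal single-agent sketch, assuming an ℓ1 regularizer (one common "favorable structure" whose proximal operator is the closed-form soft-thresholding map) and a toy quadratic smooth part; none of these choices are the paper's exact setting.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_step(x, grad_f, step, lam):
    # One proximal-gradient update: a gradient step on the smooth part,
    # followed by the prox of the nonsmooth part.
    return soft_threshold(x - step * grad_f(x), step * lam)

# Toy smooth part f(x) = 0.5 * ||x - b||^2, whose gradient is x - b.
b = np.array([3.0, -0.2, 0.0])
x = np.zeros(3)
for _ in range(100):
    x = proximal_gradient_step(x, lambda x: x - b, step=1.0, lam=0.5)
# x converges to soft_threshold(b, 0.5) = [2.5, 0.0, 0.0]
```

In the distributed setting of the paper, each agent would interleave such prox-gradient steps with averaging of its neighbors' estimates; the sketch above shows only the local update.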

Figures from this paper

A Distributed Stochastic Proximal-Gradient Algorithm for Composite Optimization

This article develops a distributed stochastic proximal-gradient algorithm for composite optimization problems with a common non-smooth regularization term over an undirected, connected network, employing a local unbiased stochastic averaging gradient method.

Distributed and Inexact Proximal Gradient Method for Online Convex Optimization

It is shown that the tracking error of the online inexact DPGM is upper-bounded by a convergent linear system, guaranteeing convergence within a neighborhood of the optimal solution.

Fast Distributed Gradient Methods

This work proposes two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establishes their convergence rates in terms of the per-node communications K and the per-node gradient evaluations k.


This work first transforms the constrained optimization problem into an unconstrained one using the exact penalty function method, then proposes a distributed proximal-gradient algorithm over a time-varying connectivity network, and establishes a convergence rate that depends on the number of iterations, the network topology, and the number of agents.

Distributed proximal gradient algorithm for non-smooth non-convex optimization over time-varying networks

This paper presents a distributed proximal gradient algorithm for the non-smooth non-convex optimization problem over time-varying multi-agent networks and proves that the generated local variables achieve consensus and converge to the set of critical points with convergence rate O(1/T).

On the Convergence of Nested Decentralized Gradient Methods With Multiple Consensus and Gradient Steps

This paper extends and generalizes the analysis for a class of nested gradient-based distributed algorithms (NEAR-DGD) to account for multiple gradient steps at every iteration, and proves R-Linear convergence to the exact solution with a fixed number of gradient steps and increasing number of consensus steps.

An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization

This work shows that any limit point of DFAL iterates is optimal, and that for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(1/ε)) DFAL iterations, which require O(ψ_max^1.5/(d_min ε)) proximal-gradient computations and communications per node in total.

A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates

This paper proposes a novel proximal-gradient algorithm for a decentralized optimization problem with a composite objective containing smooth and nonsmooth terms, whose convergence rate separates into two components that match the typical rates of general gradient descent and consensus averaging, respectively.

Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient

This paper shows that, by choosing suitable primal variable copies, the dual problem is itself separable when written in terms of conjugate functions, and the dual variables can be stacked into non-overlapping blocks associated to the computing nodes.

Distributed Alternating Direction Method of Multipliers

  • Ermin Wei, A. Ozdaglar
  • Computer Science, Mathematics
  • 2012 IEEE 51st IEEE Conference on Decision and Control (CDC)
  • 2012
This paper introduces a new distributed optimization algorithm based on the Alternating Direction Method of Multipliers (ADMM), a classical method for sequentially decomposing optimization problems with coupled constraints, and shows that the algorithm converges at the rate O(1/k).

Fast Distributed Gradient Methods

This work proposes two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establishes their convergence rates in terms of the per-node communications K and the per-node gradient evaluations k.

Distributed Subgradient Methods for Multi-Agent Optimization

The authors' convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.

Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling

This work develops and analyzes distributed algorithms based on dual subgradient averaging, provides sharp bounds on their convergence rates as a function of the network size and topology, and shows that the number of iterations required by the algorithm scales inversely in the spectral gap of the network.

Distributed Subgradient Methods for Convex Optimization Over Random Networks

This work proposes a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents, in order to cooperatively minimize a sum of convex functions representing the agents' local objectives.

Asynchronous gossip algorithms for stochastic optimization

An asynchronous algorithm motivated by random gossip schemes, in which each agent has a local Poisson clock; it is proved that the gradients converge to zero with probability 1 and that the iterates converge to an optimal solution almost surely.

Constrained Consensus and Optimization in Multi-Agent Networks

A distributed "projected subgradient algorithm" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set, and it is shown that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution.
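The three steps described here (local averaging, a subgradient step on the agent's own objective, and projection onto its constraint set) can be sketched in a few lines; the two-agent network, equal averaging weights, objectives, and box constraint below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def projected_subgradient_update(neighbor_xs, weights, subgrad, step, project):
    # 1) local averaging of estimates received from neighbors (consensus step)
    v = sum(w * x for w, x in zip(weights, neighbor_xs))
    # 2) subgradient step on this agent's own objective
    v = v - step * subgrad(v)
    # 3) projection onto this agent's constraint set
    return project(v)

# Toy run: two agents jointly minimizing |x - 1| + |x + 1| over the box [-0.5, 0.5].
project = lambda v: np.clip(v, -0.5, 0.5)
subgrads = [lambda v: np.sign(v - 1.0),   # subgradient of |x - 1|
            lambda v: np.sign(v + 1.0)]   # subgradient of |x + 1|
xs = [np.array(2.0), np.array(-2.0)]
for k in range(1, 200):
    w = [0.5, 0.5]  # doubly stochastic averaging weights
    xs = [projected_subgradient_update(xs, w, subgrads[i], 1.0 / k, project)
          for i in range(2)]
# Both estimates stay feasible and approach a common point, reaching consensus.
```

The diminishing stepsize 1/k is one "appropriately selected stepsize rule" under which the agents' estimates converge to the same optimal solution.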

Subgradient methods and consensus algorithms for solving convex optimization problems

This paper proposes a subgradient method for solving coupled optimization problems in a distributed way given restrictions on the communication topology and studies convergence properties of the proposed scheme using results from consensus theory and approximate subgradient methods.

Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization

This work shows that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates.
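This robustness claim can be demonstrated with a noisy-gradient variant of the basic proximal-gradient iteration; the quadratic objective, ℓ1 regularizer, and 1/k² error schedule below are illustrative assumptions chosen so that the errors are summable.

```python
import numpy as np

rng = np.random.default_rng(0)
b = np.array([3.0, -0.2, 0.0])
lam, step = 0.5, 1.0
# Proximal operator of t * ||.||_1 (soft-thresholding).
prox = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(3)
for k in range(1, 500):
    exact_grad = x - b                 # gradient of 0.5 * ||x - b||^2
    error = rng.normal(size=3) / k**2  # inexact gradient; errors decay summably
    x = prox(x - step * (exact_grad + error), step * lam)
# Despite the gradient errors, x approaches the exact minimizer [2.5, 0.0, 0.0]
```

With a slowly decaying (non-summable) error sequence, the iterates would instead stall at a noise-dominated neighborhood, which is the distinction the rate conditions capture.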

Coordination of groups of mobile autonomous agents using nearest neighbor rules

Simulation results are provided which demonstrate that the nearest neighbor rule under study can cause all agents to eventually move in the same direction despite the absence of centralized coordination, and despite the fact that each agent's set of nearest neighbors changes with time as the system evolves.