A fast distributed proximal-gradient method
```bibtex
@inproceedings{Chen2012AFD,
  title     = {A fast distributed proximal-gradient method},
  author    = {Annie I. Chen and Asuman E. Ozdaglar},
  booktitle = {2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)},
  year      = {2012},
  pages     = {601--608}
}
```
We present a distributed proximal-gradient method for optimizing the average of convex functions, each of which is the private local objective of an agent in a network with time-varying topology. The local objectives have distinct differentiable components, but they share a common nondifferentiable component, which has a favorable structure suitable for effective computation of the proximal operator. In our method, each agent iteratively updates its estimate of the global minimum by optimizing…
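The abstract describes the iteration only in words. As a rough illustration, here is a minimal sketch of a generic distributed proximal-gradient update of this kind, assuming quadratic local losses f_i(x) = 0.5·||A_i x − b_i||², a shared ℓ1 term h(x) = λ||x||₁ (whose proximal operator is soft-thresholding), a fixed doubly stochastic mixing matrix W, and a constant step size. The function names, the choice of h, and the point at which each local gradient is evaluated are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_prox_grad(A_list, b_list, W, lam=0.1, alpha=0.01, iters=500):
    """One plausible form of a distributed proximal-gradient iteration:
    each agent mixes its estimate with its neighbors' (consensus step),
    takes a gradient step on its private smooth loss, then applies the
    proximal operator of the shared nonsmooth term."""
    n, d = len(A_list), A_list[0].shape[1]
    X = np.zeros((n, d))                      # row i = agent i's estimate
    for _ in range(iters):
        mixed = W @ X                         # consensus averaging
        grads = np.vstack([A.T @ (A @ x - b)  # grad of 0.5*||A_i x - b_i||^2
                           for A, b, x in zip(A_list, b_list, mixed)])
        X = soft_threshold(mixed - alpha * grads, alpha * lam)
    return X

# Toy run: 4 agents on a ring, each with a random least-squares loss.
rng = np.random.default_rng(0)
n, m, d = 4, 20, 5
A_list = [rng.standard_normal((m, d)) for _ in range(n)]
b_list = [rng.standard_normal(m) for _ in range(n)]
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=0)
                              + np.roll(np.eye(n), -1, axis=0))
X = distributed_prox_grad(A_list, b_list, W)
```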
126 Citations
A Distributed Stochastic Proximal-Gradient Algorithm for Composite Optimization
- Computer Science, Mathematics · IEEE Transactions on Control of Network Systems
- 2021
This article develops a distributed stochastic proximal-gradient algorithm that tackles distributed composite optimization problems with a common non-smooth regularization term over an undirected, connected network, employing a local unbiased stochastic averaging gradient method.
Distributed and Inexact Proximal Gradient Method for Online Convex Optimization
- Computer Science · 2021 European Control Conference (ECC)
- 2021
It is shown that the tracking error of the online inexact DPGM is upper-bounded by a convergent linear system, guaranteeing convergence within a neighborhood of the optimal solution.
Fast Distributed Gradient Methods
- Computer Science · IEEE Transactions on Automatic Control
- 2014
This work proposes two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establishes their convergence rates in terms of the per-node communications K and the per-node gradient evaluations k.
Distributed Proximal-Gradient Method for Convex Optimization with Inequality Constraints
- Computer Science, Mathematics · The ANZIAM Journal
- 2014
This work first transforms the constrained optimization problem into an unconstrained one using the exact penalty function method, proposes a distributed proximal-gradient algorithm over a time-varying connectivity network, and establishes a convergence rate depending on the number of iterations, the network topology, and the number of agents.
Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Mathematics, Computer Science · J. Frankl. Inst.
- 2019
Distributed proximal gradient algorithm for non-smooth non-convex optimization over time-varying networks
- Computer Science, Mathematics
- 2021
This paper presents a distributed proximal gradient algorithm for the non-smooth non-convex optimization problem over time-varying multi-agent networks and proves that the generated local variables achieve consensus and converge to the set of critical points with convergence rate O(1/T).
On the Convergence of Nested Decentralized Gradient Methods With Multiple Consensus and Gradient Steps
- Mathematics, Computer Science · IEEE Transactions on Signal Processing
- 2021
This paper extends and generalizes the analysis of a class of nested gradient-based distributed algorithms (NEAR-DGD) to account for multiple gradient steps at every iteration, and proves R-linear convergence to the exact solution with a fixed number of gradient steps and an increasing number of consensus steps.
An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization
- Computer Science · ICML
- 2015
This work shows that any limit point of DFAL iterates is optimal, and that for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(1/ε)) DFAL iterations, which require O(ψ_max^{1.5}/(d_min ε)) proximal-gradient computations and communications per node in total.
A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
- Computer Science · IEEE Transactions on Signal Processing
- 2019
This paper proposes a novel proximal-gradient algorithm for a decentralized optimization problem with a composite objective containing smooth and nonsmooth terms, with separated convergence rates that match the typical rates for general gradient descent and for consensus averaging.
Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient
- Computer Science, Mathematics · IEEE Transactions on Automatic Control
- 2017
This paper shows that, by choosing suitable primal variable copies, the dual problem is itself separable when written in terms of conjugate functions, and the dual variables can be stacked into non-overlapping blocks associated with the computing nodes.
25 References
Distributed Alternating Direction Method of Multipliers
- Computer Science, Mathematics · 2012 IEEE 51st IEEE Conference on Decision and Control (CDC)
- 2012
This paper introduces a new distributed optimization algorithm based on the Alternating Direction Method of Multipliers (ADMM), a classical method for sequentially decomposing optimization problems with coupled constraints, and shows that the algorithm converges at rate O(1/k).
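For reference, here is a minimal sketch of the standard global-variable-consensus form of ADMM on which such schemes build; the cited paper develops a decentralized network variant, which this sketch does not reproduce, and the quadratic local losses and all names below are illustrative assumptions.

```python
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, iters=200):
    """Global-variable-consensus ADMM for min_x sum_i 0.5*||A_i x - b_i||^2:
    each node solves a local x-update, a z-update averages the copies,
    and scaled dual variables u_i are updated by dual ascent."""
    n, d = len(A_list), A_list[0].shape[1]
    X, U, z = np.zeros((n, d)), np.zeros((n, d)), np.zeros(d)
    # Pre-factor each local quadratic subproblem (A_i^T A_i + rho*I) x = rhs.
    facs = [np.linalg.inv(A.T @ A + rho * np.eye(d)) for A in A_list]
    for _ in range(iters):
        for i, (A, b) in enumerate(zip(A_list, b_list)):
            X[i] = facs[i] @ (A.T @ b + rho * (z - U[i]))  # local x-update
        z = (X + U).mean(axis=0)                           # consensus z-update
        U += X - z                                         # scaled dual update
    return z
```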
Fast Distributed Gradient Methods
- Computer Science · IEEE Transactions on Automatic Control
- 2014
This work proposes two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establishes their convergence rates in terms of the per-node communications K and the per-node gradient evaluations k.
Distributed Subgradient Methods for Multi-Agent Optimization
- Mathematics, Computer Science · IEEE Transactions on Automatic Control
- 2009
The authors' convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Computer Science · IEEE Transactions on Automatic Control
- 2012
This work develops and analyzes distributed algorithms based on dual subgradient averaging, provides sharp bounds on their convergence rates as a function of the network size and topology, and shows that the number of iterations required by the algorithm scales inversely in the spectral gap of the network.
Distributed Subgradient Methods for Convex Optimization Over Random Networks
- Computer Science, Mathematics · IEEE Transactions on Automatic Control
- 2011
This work proposes a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents for cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents.
Asynchronous gossip algorithms for stochastic optimization
- Mathematics, Computer Science · Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with the 2009 28th Chinese Control Conference
- 2009
An asynchronous algorithm motivated by random gossip schemes in which each agent has a local Poisson clock; it is proved that the gradients converge to zero with probability 1 and that the iterates converge to an optimal solution almost surely.
Constrained Consensus and Optimization in Multi-Agent Networks
- Mathematics · IEEE Transactions on Automatic Control
- 2010
A distributed "projected subgradient algorithm" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set, and it is shown that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution.
Subgradient methods and consensus algorithms for solving convex optimization problems
- Mathematics · 2008 47th IEEE Conference on Decision and Control
- 2008
This paper proposes a subgradient method for solving coupled optimization problems in a distributed way, given restrictions on the communication topology, and studies convergence properties of the proposed scheme using results from consensus theory and approximate subgradient methods.
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
- Computer Science · NIPS
- 2011
This work shows that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates.
Coordination of groups of mobile autonomous agents using nearest neighbor rules
- Computer Science · Proceedings of the 41st IEEE Conference on Decision and Control, 2002
- 2002
Simulation results are provided which demonstrate that the nearest neighbor rule they are studying can cause all agents to eventually move in the same direction despite the absence of centralized coordination and despite the fact that each agent's set of nearest neighbors changes with time as the system evolves.