Distributed Subgradient Methods for Multi-Agent Optimization
- A. Nedić, A. Ozdaglar
- Mathematics, Computer Science · IEEE Transactions on Automatic Control
- 13 January 2009
The authors' convergence rate results explicitly characterize the tradeoff between the desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve it.
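A minimal sketch of the consensus-plus-subgradient update these methods use, to make the accuracy/stepsize tradeoff concrete: each agent mixes neighbors' estimates with doubly stochastic weights and then steps along a subgradient of its own objective. The objectives, mixing matrix, and stepsize below are illustrative, not taken from the paper.

```python
import numpy as np

np.random.seed(0)
n = 5                                   # number of agents
c = np.random.randn(n)                  # agent i minimizes f_i(x) = |x - c[i]|
W = np.full((n, n), 1.0 / n)            # doubly stochastic mixing matrix (complete graph)
x = np.zeros(n)                         # one scalar estimate per agent
alpha = 0.05                            # constant stepsize: final accuracy ~ O(alpha)

for k in range(2000):
    g = np.sign(x - c)                  # subgradient of |x - c_i| at x_i
    x = W @ x - alpha * g               # consensus step, then local subgradient step

print("agent estimates:", x)            # all within ~alpha of median(c), a minimizer of sum_i |x - c_i|
```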
Constrained Consensus and Optimization in Multi-Agent Networks
A distributed "projected subgradient algorithm" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set, and it is shown that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution.
Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- A. Nedić, Alexander Olshevsky, Wei Shi
- Mathematics, Computer Science · SIAM Journal on Optimization
- 12 July 2016
This paper introduces a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient tracking technique that converges to a global and consensual minimizer over time-varying graphs.
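A hypothetical sketch of the two DIGing-style recursions: a consensus step on the iterates driven by a tracked gradient variable y, and a consensus step on y that tracks the network-average gradient via gradient differences. A static graph, quadratic objectives, and the stepsize are illustrative choices, not the paper's time-varying setting.

```python
import numpy as np

n, d = 4, 2
np.random.seed(1)
A = [np.eye(d) * (i + 1) for i in range(n)]        # f_i(x) = 0.5 x^T A_i x - b_i^T x
b = [np.random.randn(d) for i in range(n)]
grad = lambda i, x: A[i] @ x - b[i]

W = np.full((n, n), 1.0 / n)                       # doubly stochastic mixing matrix
x = np.zeros((n, d))
y = np.array([grad(i, x[i]) for i in range(n)])    # y_i(0) = grad f_i(x_i(0))
alpha = 0.05

for k in range(500):
    x_new = W @ x - alpha * y                      # inexact gradient step using the tracker y
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x = x_new                                      # y_i now tracks the average of the local gradients

print(x)   # every row converges (geometrically, for small enough alpha) to the minimizer of sum_i f_i
```

The gradient-tracking correction is what allows a constant stepsize and geometric convergence, in contrast to the diminishing stepsizes of plain distributed subgradient methods.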
Distributed optimization over time-varying directed graphs
- A. Nedić, Alexander Olshevsky
- Computer Science, Mathematics · IEEE Conference on Decision and Control
- 9 March 2013
This work develops a broadcast-based algorithm, termed the subgradient-push, which steers every node to an optimal value under a standard assumption of subgradient boundedness. The algorithm converges at a rate of O(ln t/√t), where the constant depends on the initial values at the nodes, the subgradient norms, and, more interestingly, on both the consensus speed and the imbalances of influence among the nodes.
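A hypothetical sketch of a push-sum (subgradient-push) iteration on a fixed directed graph: each node splits its values over its out-neighbors (a column-stochastic mixing matrix), rescales by push-sum weights to undo the directed imbalance, and takes a diminishing subgradient step. The graph, objectives, and stepsize are illustrative, not the paper's time-varying setup.

```python
import numpy as np

n = 3
c = np.array([0.0, 2.0, 7.0])                 # node i minimizes f_i(x) = |x - c[i]|
# directed graph with self-loops: 0 -> {0,1,2}, 1 -> {1,2}, 2 -> {0,2};
# column j holds 1/outdegree(j), so each column sums to 1 (column stochastic, not doubly stochastic)
A = np.array([[1/3, 0.0, 0.5],
              [1/3, 0.5, 0.0],
              [1/3, 0.5, 0.5]])
x = np.zeros(n)                               # push-sum numerators
y = np.ones(n)                                # push-sum weights

for t in range(1, 20001):
    w = A @ x                                 # receive shares of neighbors' numerators
    y = A @ y                                 # receive shares of neighbors' weights
    z = w / y                                 # de-biased estimates despite unequal influence
    g = np.sign(z - c)                        # subgradient of |z - c_i| at z_i
    x = w - (1.0 / np.sqrt(t)) * g            # diminishing stepsize ~ 1/sqrt(t)

print(z)   # settles near median(c) = 2, a minimizer of the unweighted sum of the f_i
```

The y-rescaling is what removes the bias that directed (non-doubly-stochastic) communication would otherwise introduce.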
Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization
- S. Ram, A. Nedić, V. Veeravalli
- Mathematics, Computer Science · Journal of Optimization Theory and Applications
- 16 November 2008
This paper considers a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set, and investigates the effects of stochastic subgradient errors on the convergence of the proposed algorithm.
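A hypothetical sketch of the setting described above: the consensus-based projected step is driven by a subgradient observed with zero-mean noise, and every iterate is projected back onto the common constraint set. All numerical choices (objectives, noise level, stepsize) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
c = np.array([1.0, 2.0, 3.0, 4.0])            # f_i(x) = (x - c[i])**2, common set X = [0, 2]
W = np.full((n, n), 1.0 / n)                  # doubly stochastic averaging weights
x = np.zeros(n)

for k in range(1, 20001):
    v = W @ x                                          # averaging with neighbors
    g = 2.0 * (v - c) + rng.normal(0.0, 0.5, size=n)   # subgradient plus zero-mean stochastic error
    x = np.clip(v - g / k, 0.0, 2.0)                   # diminishing stepsize, projection onto X

print(x)   # concentrates near 2.0, the minimizer of sum_i f_i over [0, 2] (the unconstrained minimum is 2.5)
```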
Incremental subgradient methods for nondifferentiable optimization
- A. Nedić, D. Bertsekas
- Mathematics, Computer Science · Proceedings of the 38th IEEE Conference on…
- 7 December 1999
The convergence properties of a number of variants of incremental subgradient methods, including stochastic ones, are established; these methods appear very promising and effective for important classes of large problems.
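A hypothetical sketch of one incremental sweep: a single iterate is updated by stepping through the component functions one at a time (a fixed cyclic order here; drawing the component at random gives a stochastic variant). The components and stepsize are illustrative.

```python
import numpy as np

c = np.array([0.0, 1.0, 5.0, 6.0, 10.0])     # minimize sum_i f_i(x) with f_i(x) = |x - c[i]|
x = 0.0
alpha = 0.01                                 # constant stepsize: hovers within O(alpha) of the optimum

for sweep in range(2000):
    for ci in c:                             # one incremental subgradient step per component
        x -= alpha * np.sign(x - ci)         # subgradient of |x - ci| at the current iterate

print(x)   # near median(c) = 5, a minimizer of sum_i |x - c_i|
```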
Subgradient Methods for Saddle-Point Problems
- A. Nedić, A. Ozdaglar
- Computer Science, Mathematics · Journal of Optimization Theory and Applications
- 5 March 2009
This work presents a subgradient algorithm for generating approximate saddle points and provides per-iteration convergence rate estimates for the constructed solutions. The focus is on Lagrangian duality, where the algorithm is shown to be particularly well-suited to problems in which a subgradient of the dual function cannot be evaluated easily.
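A hypothetical sketch of a primal-descent / dual-ascent subgradient iteration with iterate averaging, the general mechanism for generating approximate saddle points, applied to the Lagrangian L(x, lam) = x**2 + lam*(1 - x) of "minimize x^2 subject to x >= 1". The problem and stepsize are illustrative, not the paper's examples.

```python
x, lam = 0.0, 0.0
alpha = 0.01
x_avg, lam_avg = 0.0, 0.0
T = 20000

for t in range(1, T + 1):
    gx = 2.0 * x - lam                   # subgradient of L(., lam) in x
    glam = 1.0 - x                       # supergradient of L(x, .) in lam (the constraint violation)
    x = x - alpha * gx                   # descend in the primal variable
    lam = max(0.0, lam + alpha * glam)   # ascend in the dual variable, project onto lam >= 0
    x_avg += (x - x_avg) / t             # running averages form the approximate saddle point
    lam_avg += (lam - lam_avg) / t

print(x_avg, lam_avg)                    # close to the saddle point (x*, lam*) = (1, 2)
```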
Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods
This work gives estimates of the primal infeasibility and primal suboptimality of the generated approximate primal solutions, providing a basis for analyzing the trade-off between the desired level of error and the choice of stepsize value.
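A hypothetical sketch of a dual subgradient step with primal averaging, the kind of construction whose infeasibility and suboptimality such analyses bound: at each dual iterate the Lagrangian is minimized, the resulting constraint violation serves as a dual subgradient, and the Lagrangian minimizers are averaged into an approximate primal solution. The problem data and constant stepsize below are illustrative.

```python
import numpy as np

# minimize x1^2 + x2^2 subject to x1 + x2 >= 2; optimal primal (1, 1), optimal dual mu* = 2
mu = 0.0
alpha = 0.05
x_avg = np.zeros(2)

for k in range(1, 2001):
    x = np.array([mu / 2.0, mu / 2.0])   # Lagrangian minimizer for the current multiplier mu
    g = 2.0 - x.sum()                    # constraint violation = a subgradient of the dual function
    mu = max(0.0, mu + alpha * g)        # projected dual subgradient (ascent) step
    x_avg += (x - x_avg) / k             # running average: the approximate primal solution

print(x_avg, "violation:", max(0.0, 2.0 - x_avg.sum()))   # near (1, 1) with small residual infeasibility
```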
Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization
This paper presents an overview of recent work in decentralized optimization and surveys the state-of-the-art algorithms and their analyses tailored to different problem and network scenarios, highlighting the role of the network topology.