Corpus ID: 244478368

A Distributed Parallel Optimization Algorithm via Alternating Direction Method of Multipliers

@article{Liu2021ADP,
  title={A Distributed Parallel Optimization Algorithm via Alternating Direction Method of Multipliers},
  author={Ziye Liu and Fanghong Guo and W. Wang and Xiaoqun Wu},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.10494}
}
The Alternating Direction Method of Multipliers (ADMM) has been widely adopted for solving distributed optimization problems (DOPs). In this paper, a new distributed parallel ADMM algorithm is proposed that allows the agents to update their local states and dual variables in a completely distributed and parallel manner, obtained by modifying an existing distributed sequential ADMM. Moreover, the updating rules and the storage method for the variables are illustrated. It is shown that all the agents… 
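The paper's exact update rules are not given in this abstract; as background, a minimal sketch of standard parallel consensus ADMM, the family of methods the proposed algorithm belongs to, is shown below. The toy objective (scalar quadratics `f_i(x) = 0.5*(x - a[i])**2`) and all variable names are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def parallel_consensus_admm(a, rho=1.0, iters=200):
    """Parallel consensus ADMM for min_x sum_i 0.5*(x - a[i])**2.

    Each agent i holds a[i]; the global minimizer is mean(a).  The x-
    and u-updates below are independent across agents, so they could
    run in parallel on separate nodes; only the z-update aggregates.
    """
    n = len(a)
    x = np.zeros(n)   # local primal variables, one per agent
    u = np.zeros(n)   # scaled dual variables, one per agent
    z = 0.0           # shared consensus variable
    for _ in range(iters):
        # Agent-local x-update (closed form for the quadratic f_i).
        x = (a + rho * (z - u)) / (1.0 + rho)
        # Consensus step: the only coordination across agents.
        z = np.mean(x + u)
        # Agent-local dual ascent.
        u = u + x - z
    return z

a = np.array([1.0, 2.0, 6.0])
print(parallel_consensus_admm(a))  # converges to mean(a) = 3.0
```

In a truly distributed setting, the averaging in the z-update would itself be replaced by neighbor-to-neighbor communication over the network graph, which is the kind of modification the paper addresses.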

