# DC-DistADMM: ADMM Algorithm for Constrained Distributed Optimization over Directed Graphs

@inproceedings{Khatana2020DCDistADMMAA, title={DC-DistADMM: ADMM Algorithm for Constrained Distributed Optimization over Directed Graphs}, author={Vivek Khatana and M. Salapaka}, year={2020} }

We present a distributed algorithm to solve a multi-agent optimization problem, where the global objective function is the sum of $n$ convex objective functions. Our focus is on constrained problems where the agents' estimates are restricted to be in different convex sets. The interconnection topology among the $n$ agents has directed links, and each agent $i$ can only communicate with agents in its neighborhood determined by a directed graph. In this article, we propose an algorithm called…
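The abstract describes an ADMM method for constrained multi-agent optimization. As a rough illustration of the general idea, here is a minimal sketch of *generic* global-variable consensus ADMM with per-agent interval constraints; all problem data, the penalty $\rho$, and the iteration count are hypothetical, and the exact averaging step stands in for the distributed consensus procedure that DC-DistADMM actually uses over a directed graph:

```python
import numpy as np

# Hypothetical problem: minimize sum_i (x - a_i)^2 subject to x lying in
# the intersection of per-agent intervals X_i = [lo_i, hi_i].
a = np.array([0.0, 1.0, 4.0])     # local cost centers (made up)
lo = np.array([-1.0, 0.0, 0.0])
hi = np.array([3.0, 2.0, 1.5])    # intersection of the X_i is [0, 1.5]
rho = 1.0                         # ADMM penalty parameter (arbitrary)
n = len(a)

x = np.zeros(n)   # local primal copies
u = np.zeros(n)   # scaled dual variables
z = 0.0           # shared consensus variable
for _ in range(500):
    # x-update: prox of f_i plus the indicator of X_i; for a 1-D convex
    # quadratic the constrained minimizer is the clipped unconstrained one.
    v = z - u
    x = np.clip((2.0 * a + rho * v) / (2.0 + rho), lo, hi)
    # z-update: an exact network average here; a distributed method would
    # approximate this with a consensus protocol over the directed graph.
    z = np.mean(x + u)
    # dual ascent step
    u = u + x - z

# the unconstrained minimizer mean(a) = 5/3 is infeasible, so the
# constrained optimum is the boundary point 1.5, which z approaches
print(z)
```

At convergence every $x_i$ agrees with $z$ and lies in its own set $X_i$, so $z$ lands in the intersection of the constraint sets, which is the feature the paper's constrained setting requires.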


#### One Citation

Fast Quantized Average Consensus over Static and Dynamic Directed Graphs

- Engineering, Computer Science
- ArXiv
- 2021

This paper presents and analyzes a distributed averaging algorithm that operates exclusively on quantized values, and extends the algorithm to achieve finite-time convergence over a dynamic directed communication topology subject to certain connectivity conditions.
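The cited paper quantizes the classic average-consensus iteration and targets finite-time convergence. As background only, here is a minimal *unquantized* sketch of that baseline iteration $x^{k+1} = W x^k$ on a hypothetical 4-node path graph with Metropolis weights (graph, weights, and initial values are illustrative choices, not from the paper):

```python
import numpy as np

# Path graph 0-1-2-3 with Metropolis weights w_ij = 1/(1 + max(d_i, d_j));
# W is symmetric and doubly stochastic, so W x converges to the mean.
W = np.array([
    [2/3, 1/3, 0.0, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 1/3, 1/3, 1/3],
    [0.0, 0.0, 1/3, 2/3],
])
x = np.array([8.0, 0.0, 4.0, 0.0])   # initial local values, mean = 3.0

for _ in range(500):
    x = W @ x                        # each node averages with its neighbors

# all entries of x approach mean of the initial values, i.e. 3.0
```

The cited work replaces the real-valued exchanges above with quantized messages, which is what makes finite-time termination with bounded communication possible.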

#### References

Showing 1–10 of 75 references

D-DistADMM: A O(1/k) Distributed ADMM for Distributed Optimization in Directed Graph Topologies

- Mathematics, Computer Science
- 2020 59th IEEE Conference on Decision and Control (CDC)
- 2020

It is shown that, for convex and not necessarily differentiable objective functions, the proposed D-DistADMM method converges at a rate of $O(1/k)$, where $k$ is the iteration counter, in terms of the difference between the Lagrangian function evaluated at any iteration $k$ of the D-DistADMM algorithm and its value at the optimal solution.

Gradient-Consensus: Linearly Convergent Distributed Optimization Algorithm over Directed Graphs

- Engineering, Computer Science
- 2019

An "optimize then agree" framework is proposed to decouple the gradient-descent step from the consensus step in distributed optimization algorithms, and a novel distributed algorithm is developed to solve a multi-agent convex optimization problem.

Gradient-Consensus Method for Distributed Optimization in Directed Multi-Agent Networks

- Computer Science
- 2020 American Control Conference (ACC)
- 2020

It is shown that the estimate of the optimal solution at any local agent $i$ converges geometrically to the optimal solution within an $O(\rho)$ neighborhood, where $\rho$ can be chosen arbitrarily small.

EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization

- Computer Science, Mathematics
- SIAM J. Optim.
- 2015

A novel decentralized exact first-order algorithm (abbreviated EXTRA) is proposed to solve the consensus optimization problem; it uses a fixed, large step size that can be determined independently of the network size or topology.
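The EXTRA update is simple enough to sketch concretely. Below is an entirely illustrative instance on a toy 3-agent problem with quadratic costs; the complete-graph mixing matrix, step size, and cost data are hypothetical choices, not taken from the paper:

```python
import numpy as np

# Toy costs: f_i(x) = (x - a_i)^2, so grad_i(x) = 2 (x - a_i); the
# network-wide optimum of sum_i f_i is mean(a) = 3.0. (Values made up.)
a = np.array([1.0, 2.0, 6.0])

def grad(x):
    return 2.0 * (x - a)

n = 3
W = np.full((n, n), 1.0 / n)      # complete graph, uniform averaging
Wt = (np.eye(n) + W) / 2.0        # W~ = (I + W)/2, as EXTRA prescribes
alpha = 0.2                       # fixed step size, chosen small enough here

# EXTRA: first step is plain decentralized gradient descent; afterwards
# each step corrects with the previous iterate and gradient difference:
#   x^{k+1} = (I + W) x^k - W~ x^{k-1} - alpha (grad(x^k) - grad(x^{k-1}))
x_prev = np.zeros(n)
x = W @ x_prev - alpha * grad(x_prev)
for _ in range(500):
    x_next = (np.eye(n) + W) @ x - Wt @ x_prev - alpha * (grad(x) - grad(x_prev))
    x_prev, x = x, x_next

# all agents agree on the exact optimizer mean(a) = 3.0
```

The correction term is what makes EXTRA "exact": unlike plain decentralized gradient descent with a fixed step, the iterates converge to the true optimizer rather than to an $O(\alpha)$ neighborhood of it.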

D-ADMM: A Communication-Efficient Distributed Algorithm for Separable Optimization

- Computer Science, Mathematics
- IEEE Transactions on Signal Processing
- 2013

D-ADMM is proven to converge when the network is bipartite or when all the functions are strongly convex, although in practice, convergence is observed even when these conditions are not met.

FlexPD: A Flexible Framework of First-Order Primal-Dual Algorithms for Distributed Optimization

- Computer Science, Mathematics
- IEEE Transactions on Signal Processing
- 2021

A flexible framework of first-order primal-dual algorithms (FlexPD), which allows for an arbitrary number of primal steps per iteration, and establishes linear convergence of the proposed framework to the optimal solution for strongly convex and Lipschitz gradient objective functions.

Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms With Directed Gossip Communication

- Mathematics, Computer Science
- IEEE Transactions on Signal Processing
- 2011

This work solves the problem for generic connected network topologies with asymmetric random link failures using a novel distributed, decentralized algorithm, and proposes a novel randomized algorithm of Gauss-Seidel type that operates at a fast time scale.

Linear Time Average Consensus and Distributed Optimization on Fixed Graphs

- Computer Science, Mathematics
- SIAM J. Control. Optim.
- 2017

A protocol for the average consensus problem on any fixed undirected graph whose convergence time scales linearly in the total number of nodes $n$ and whose error is $O(L \sqrt{n/T})$.

Push–Pull Gradient Methods for Distributed Optimization in Networks

- Computer Science
- IEEE Transactions on Automatic Control
- 2021

The proposed "push–pull" method is the first class of algorithms for distributed optimization over directed graphs with strongly convex and smooth objective functions, and it outperforms other existing linearly convergent schemes, especially for ill-conditioned problems and networks that are not well balanced.
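The push–pull structure, with a row-stochastic matrix for the decision variables and a column-stochastic matrix for the gradient trackers, is directly relevant to the directed-graph setting of this paper. A minimal sketch on a hypothetical directed 3-cycle follows; the weights, step size, and quadratic costs are illustrative assumptions, not values from the cited work:

```python
import numpy as np

# Hypothetical directed cycle 0 -> 1 -> 2 -> 0 with self-loops.
# R is row-stochastic (each agent "pulls" decision variables from its
# in-neighbor); C is column-stochastic (each agent "pushes" its gradient
# tracker to its out-neighbor). Costs f_i(x) = (x - a_i)^2 are made up.
R = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])       # rows sum to 1
C = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])       # columns sum to 1
a = np.array([1.0, 2.0, 6.0])

def grad(x):
    return 2.0 * (x - a)

alpha = 0.05                          # small fixed step size

x = np.zeros(3)
y = grad(x)                           # tracker initialized at local gradients
for _ in range(3000):
    x_new = R @ (x - alpha * y)       # "pull" step on decision variables
    y = C @ y + grad(x_new) - grad(x) # "push" step: sum(y) tracks sum of gradients
    x = x_new

# every agent's x approaches the global optimizer mean(a) = 3.0
```

Splitting the roles of the two matrices is what removes the need for a doubly stochastic matrix, which is generally unavailable on directed graphs.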

Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs

- Mathematics, Computer Science
- SIAM J. Optim.
- 2017

This paper introduces a distributed algorithm, referred to as DIGing, that combines a distributed inexact gradient method with a gradient-tracking technique and converges geometrically to a global and consensual minimizer over time-varying graphs.
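The gradient-tracking idea behind DIGing can be sketched in a few lines. The example below uses a static doubly stochastic matrix for brevity (DIGing also covers time-varying graphs); the weights, step size, and quadratic costs are hypothetical choices for illustration:

```python
import numpy as np

# Hypothetical 3-agent network with doubly stochastic weights; local
# costs f_i(x) = (x - a_i)^2, so grad_i(x) = 2 (x - a_i) and the global
# optimizer of sum_i f_i is mean(a) = 3.0.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
a = np.array([1.0, 2.0, 6.0])

def grad(x):
    return 2.0 * (x - a)

alpha = 0.1

x = np.zeros(3)
y = grad(x)                            # tracker: sum(y) equals sum of local gradients
for _ in range(1000):
    x_new = W @ x - alpha * y          # inexact distributed gradient step
    y = W @ y + grad(x_new) - grad(x)  # tracking update preserves the invariant
    x = x_new

# all agents converge to the exact optimizer mean(a) = 3.0
```

The invariant $\sum_i y_i^k = \sum_i \nabla f_i(x_i^k)$ lets each agent descend along an estimate of the *global* gradient, which is what yields geometric convergence to the exact minimizer with a fixed step size.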