Corpus ID: 220714137

DC-DistADMM: ADMM Algorithm for Constrained Distributed Optimization over Directed Graphs

@inproceedings{Khatana2020DCDistADMMAA,
  title={DC-DistADMM: ADMM Algorithm for Constrained Distributed Optimization over Directed Graphs},
  author={Vivek Khatana and M. Salapaka},
  year={2020}
}
We present a distributed algorithm to solve a multi-agent optimization problem, where the global objective function is the sum of $n$ convex objective functions. Our focus is on constrained problems where the agents' estimates are restricted to lie in different convex sets. The interconnection topology among the $n$ agents has directed links, and each agent $i$ can only communicate with agents in its neighborhood determined by a directed graph. In this article, we propose an algorithm called…
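The problem class in the abstract can be illustrated with a tiny numerical instance. The sketch below is a generic projected distributed-gradient iteration over a directed cycle with a doubly stochastic mixing matrix, not the paper's DC-DistADMM method; all agent data (the quadratics `a`, the constraint intervals `lo`/`hi`, and the weights `W`) are invented for the example.

```python
# Three agents minimize sum_i f_i(x) with f_i(x) = 0.5 * (x - a[i])**2,
# each agent i restricted to its own convex set X_i = [lo[i], hi[i]],
# over the directed cycle 0 -> 1 -> 2 -> 0 (with self-loops).
n = 3
a = [0.0, 2.0, 4.0]          # f_i minimizers; the global optimum is their mean
lo = [1.0, 0.0, 1.5]         # lower endpoints of the X_i
hi = [5.0, 3.0, 6.0]         # upper endpoints of the X_i
# The intersection of the X_i is [1.5, 3]; the unconstrained minimizer
# 2.0 lies inside it, so x* = 2.0.

# Each agent mixes its value with its single in-neighbor's; this W is
# row- and column-stochastic (doubly stochastic) on the directed cycle.
W = [[0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5]]

x = [5.0, 0.0, 6.0]          # arbitrary starting estimates
for k in range(10000):
    step = 0.5 / (k + 1)     # diminishing step size
    # consensus step: weighted average of in-neighbors' estimates
    mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
    # local gradient step for f_i, then projection onto X_i
    x = [min(max(mixed[i] - step * (x[i] - a[i]), lo[i]), hi[i])
         for i in range(n)]

print(x)  # every agent's estimate approaches the optimum x* = 2.0
```

The doubly stochastic `W` keeps this sketch simple; the point of DC-DistADMM and the push-sum-style methods cited below is precisely to handle directed graphs where such balanced weights are not available.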
Fast Quantized Average Consensus over Static and Dynamic Directed Graphs
This paper presents and analyzes a distributed averaging algorithm that operates exclusively with quantized values, and extends the algorithm to achieve finite-time convergence in the presence of a dynamic directed communication topology subject to some connectivity conditions.

References

Showing 1-10 of 75 references
D-DistADMM: A O(1/k) Distributed ADMM for Distributed Optimization in Directed Graph Topologies
It is shown that for convex and not-necessarily differentiable objective functions the proposed D-DistADMM method converges at a rate O(1/k), where k is the iteration counter, in terms of the difference between the Lagrangian function evaluated at any iteration k of the D-DistADMM algorithm and at the optimal solution.
Gradient-Consensus: Linearly Convergent Distributed Optimization Algorithm over Directed Graphs
An "optimize then agree" framework is proposed to decouple the gradient-descent step and the consensus step in distributed optimization algorithms, and a novel distributed algorithm is developed to solve a multi-agent convex optimization problem.
Gradient-Consensus Method for Distributed Optimization in Directed Multi-Agent Networks
It is shown that the estimate of the optimal solution at any local agent i converges geometrically to within an O(ρ) neighborhood of the optimal solution, where ρ can be chosen to be arbitrarily small.
EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
A novel decentralized exact first-order algorithm (abbreviated as EXTRA) to solve the consensus optimization problem; it uses a fixed, large step size, which can be determined independently of the network size or topology.
D-ADMM: A Communication-Efficient Distributed Algorithm for Separable Optimization
D-ADMM is proven to converge when the network is bipartite or when all the functions are strongly convex, although in practice convergence is observed even when these conditions are not met.
FlexPD: A Flexible Framework of First-Order Primal-Dual Algorithms for Distributed Optimization
A flexible framework of first-order primal-dual algorithms (FlexPD) is proposed, which allows for an arbitrary number of primal steps per iteration and establishes linear convergence of the framework to the optimal solution for strongly convex objective functions with Lipschitz gradients.
Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms With Directed Gossip Communication
This work solves the problem for generic connected network topologies with asymmetric random link failures using a novel distributed, decentralized algorithm, and proposes a Gauss-Seidel-type randomized algorithm operating at a fast time scale.
Linear Time Average Consensus and Distributed Optimization on Fixed Graphs
  • A. Olshevsky
  • Computer Science, Mathematics
  • SIAM J. Control. Optim.
  • 2017
A protocol for the average consensus problem on any fixed undirected graph whose convergence time scales linearly in the total number of nodes $n$ and whose error is $O(L \sqrt{n/T})$.
Push-Pull Gradient Methods for Distributed Optimization in Networks
"Push-pull" is the first class of algorithms for distributed optimization over directed graphs with strongly convex and smooth objective functions; these methods outperform other existing linearly convergent schemes, especially for ill-conditioned problems and networks that are not well balanced.
Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
This paper introduces a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient-tracking technique, which converges to a global and consensual minimizer over time-varying graphs.