• Corpus ID: 59222801

# Asynchronous Decentralized Optimization in Directed Networks

```bibtex
@article{Zhang2019AsynchronousDO,
  title={Asynchronous Decentralized Optimization in Directed Networks},
  author={Jiaqi Zhang and Keyou You},
  journal={ArXiv},
  year={2019},
  volume={abs/1901.08215}
}
```
• Published 24 January 2019 on arXiv
A popular asynchronous protocol for decentralized optimization is randomized gossip, where a pair of neighbors concurrently update via pairwise averaging. In practice, this creates deadlocks and is vulnerable to information delays. It is also problematic if a node is unable to respond or has access only to its privately preserved local dataset. To address these issues simultaneously, this paper proposes an asynchronous decentralized algorithm, named APPG, with *directed* communication…
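To make the pairwise-averaging protocol in the abstract concrete, here is a minimal simulation of randomized gossip; the ring topology, initial values, and function names are illustrative, not taken from the paper:

```python
import numpy as np

def randomized_gossip(x0, edges, num_rounds=2000, seed=0):
    """Randomized gossip: each round activates one random edge (i, j)
    and both endpoints replace their values by the pairwise average.
    All values converge to the network-wide mean."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(num_rounds):
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

# 4-node ring with initial values whose mean is 4.0
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
x = randomized_gossip([1.0, 3.0, 5.0, 7.0], edges)
# every entry ends up close to the mean 4.0
```

Note that each activation requires both endpoints to exchange and average simultaneously, which is exactly the blocking, bidirectional coordination that the paper identifies as a source of deadlocks and delay sensitivity.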

## Citations

• *IEEE Transactions on Automatic Control*, 2020: This paper proposes a novel exact asynchronous subgradient-push algorithm (AsySPA) to solve an additive cost optimization problem over digraphs, where each node only has access to a local convex function.
• *IEEE Journal on Selected Areas in Information Theory*, 2021: Observing that communication and synchronization costs are the major bottlenecks in decentralized optimization, this paper attempts to reduce these costs through algorithmic design and reduces the number of agents involved in each round of updates via randomization.
• *IEEE Transactions on Signal Processing*, 2020: This paper proposes a distributed dual gradient tracking algorithm (DDGT) to solve resource allocation problems over an unbalanced network, where each node in the network holds a private cost function.
• *ArXiv*, 2019: This paper proposes a decentralized stochastic gradient tracking algorithm over peer-to-peer networks for empirical risk minimization problems, and explicitly evaluates its convergence rate in terms of key parameters of the problem, e.g., algebraic connectivity of the communication network, mini-batch size, and gradient variance.
• *ArXiv*, 2020: A tight lower bound on the iteration complexity of such methods in a stochastic non-convex setting is provided, and DeFacto, a class of algorithms that converge at the optimal rate without additional theoretical assumptions, is proposed.
• *ICML*, 2021: DeTAG is proposed, a practical gossip-style decentralized algorithm that achieves the lower bound with only a logarithmic gap, and DeTAG is shown to enjoy faster convergence compared to baselines, especially on unshuffled data and in sparse networks.
• *ICML*, 2020: It is proved in theory that Moniqua communicates a provably bounded number of bits per iteration while converging at the same asymptotic rate as the original algorithm does with full-precision communication, and is faster with respect to wall-clock time than other quantized decentralized algorithms.

## References

Showing 1–10 of 34 references.

• *IEEE Transactions on Signal and Information Processing over Networks*, 2016: An asynchronous, decentralized algorithm for consensus optimization that involves both primal and dual variables, uses fixed step-size parameters, and provably converges to the exact solution under a random-agent assumption and both bounded and unbounded delay assumptions.
• *IEEE Transactions on Automatic Control*, 2020: This paper proposes a novel exact asynchronous subgradient-push algorithm (AsySPA) to solve an additive cost optimization problem over digraphs, where each node only has access to a local convex function.
• *52nd IEEE Conference on Decision and Control*, 2013: This work develops a broadcast-based algorithm, termed subgradient-push, which steers every node to an optimal value under a standard assumption of subgradient boundedness; it converges at a rate of O(ln t/√t), where the constant depends on the initial values at the nodes, the subgradient norms, and, more interestingly, on both the consensus speed and the imbalances of influence among the nodes.
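The push-sum mechanism behind subgradient-push can be sketched in a few lines: nodes push scaled values and weights along out-edges, and the ratio z = w / y de-biases the directed averaging before each (sub)gradient step. The digraph, column-stochastic weights, and step-size schedule below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def subgradient_push(A, grads, x0, num_iters=50000):
    """Subgradient-push over a digraph: push-sum ratios z = w / y
    de-bias the directed averaging, and each node takes a diminishing
    (sub)gradient step on its local objective."""
    x = np.asarray(x0, dtype=float).copy()
    y = np.ones(len(x))
    z = x / y
    for k in range(1, num_iters + 1):
        w = A @ x                      # push values along out-edges
        y = A @ y                      # push weights along out-edges
        z = w / y                      # de-biased local estimates
        alpha = 1.0 / np.sqrt(k)       # diminishing step size
        g = np.array([grads[i](z[i]) for i in range(len(z))])
        x = w - alpha * g
    return z

# three nodes minimizing f_i(x) = (x - b_i)^2; the sum is minimized at 3.0
b = [0.0, 3.0, 6.0]
grads = [lambda v, bi=bi: 2.0 * (v - bi) for bi in b]
# column-stochastic weights for the directed ring 0->1->2->0 with self-loops
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
z = subgradient_push(A, grads, x0=np.zeros(3))
# every z entry approaches the global minimizer 3.0
```

Because the weight matrix only needs to be column-stochastic, each node must know just its out-degree rather than a doubly stochastic weight assignment, which is what makes this family of methods suitable for directed networks.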
• *Proceedings of the IEEE*, 2018: This paper presents an overview of recent work in decentralized optimization and surveys the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.
• *2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)*, 2012: The experiments illustrate the benefits of using asynchronous consensus-based distributed optimization when some nodes are unreliable and may fail, or when messages experience time-varying delays.
• *ICML*, 2017: The efficiency of MSDA is verified against state-of-the-art methods on two problems: least-squares regression and classification by logistic regression.
• *2018 IEEE Conference on Decision and Control (CDC)*, 2018: The method unifies algorithms with different types of distributed architecture, including decentralized (peer-to-peer), centralized (master-slave), and semi-centralized (leader-follower) architectures, and converges linearly for strongly convex and smooth objective functions over a directed static network.
• *SIAM J. Optim.*, 2015: A novel decentralized exact first-order algorithm (abbreviated as EXTRA) to solve the consensus optimization problem; it uses a fixed, large step size that can be determined independently of the network size or topology.
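EXTRA achieves exact convergence with a fixed step size via the two-step recursion x^{k+2} = (I+W)x^{k+1} − W̃x^k − α[∇f(x^{k+1}) − ∇f(x^k)], with W̃ = (I+W)/2. A minimal sketch under illustrative assumptions (quadratic local objectives, a small symmetric doubly stochastic mixing matrix, hand-picked step size):

```python
import numpy as np

def extra(W, grad, x0, alpha, num_iters=1000):
    """EXTRA: decentralized exact first-order method with a fixed
    step size. W is a symmetric doubly stochastic mixing matrix and
    grad(x) stacks the local gradients evaluated at each node's iterate."""
    n = len(x0)
    I = np.eye(n)
    W_tilde = 0.5 * (I + W)
    x_prev = np.asarray(x0, dtype=float)
    x = W @ x_prev - alpha * grad(x_prev)          # first EXTRA step
    for _ in range(num_iters):
        x_next = (I + W) @ x - W_tilde @ x_prev \
                 - alpha * (grad(x) - grad(x_prev))
        x_prev, x = x, x_next
    return x

# three nodes with f_i(x) = (x - b_i)^2; consensual optimum is mean(b) = 3.0
b = np.array([0.0, 3.0, 6.0])
grad = lambda x: 2.0 * (x - b)     # stacked local gradients
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x = extra(W, grad, x0=np.zeros(3), alpha=0.1)
# all entries converge to the consensual optimum 3.0
```

Unlike plain decentralized gradient descent, which with a fixed step size stalls at a neighborhood of the optimum, the correction term −W̃x^k cancels the steady-state consensus error, which is why the step size need not diminish.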
• *IEEE Transactions on Automatic Control*, 2021: Numerical experiments demonstrate that asynchronous gradient-push can minimize the global objective faster than the state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size.
• *SIAM J. Optim.*, 2017: This paper introduces a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient tracking technique, that converges to a global and consensual minimizer over time-varying graphs.
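The gradient-tracking idea in DIGing can be sketched compactly: an auxiliary variable y tracks the network-average gradient, so each node steps along an estimate of the global descent direction with a fixed step size. The network, objectives, and step size below are illustrative, not from the paper:

```python
import numpy as np

def diging(W, grad, x0, alpha, num_iters=500):
    """DIGing: distributed inexact gradient descent plus gradient
    tracking. The tracker y follows the network-average gradient, so
    a fixed step size gives exact convergence (linear when each f_i
    is smooth and the objective is strongly convex)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    y = g.copy()                   # initialize tracker at local gradients
    for _ in range(num_iters):
        x = W @ x - alpha * y      # mix with neighbors, step along tracker
        g_new = grad(x)
        y = W @ y + g_new - g      # update the average-gradient tracker
        g = g_new
    return x

b = np.array([0.0, 3.0, 6.0])
grad = lambda x: 2.0 * (x - b)     # stacked local gradients of (x - b_i)^2
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x = diging(W, grad, x0=np.zeros(3), alpha=0.1)
# entries converge to the consensual minimizer 3.0
```

Because W is doubly stochastic, the mean of y is invariant and always equals the current average gradient, which is the key property that the gradient-tracking analysis exploits.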