Asynchronous Decentralized Optimization in Directed Networks
@article{Zhang2019AsynchronousDO,
  title   = {Asynchronous Decentralized Optimization in Directed Networks},
  author  = {Jiaqi Zhang and Keyou You},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1901.08215}
}
A popular asynchronous protocol for decentralized optimization is randomized gossip, where a pair of neighbors concurrently update via pairwise averaging. In practice, this can create deadlocks and is vulnerable to information delays. It is also problematic if a node is unable to respond or has access only to its privacy-preserved local dataset. To address these issues simultaneously, this paper proposes an asynchronous decentralized algorithm, APPG, with directed communication…
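For concreteness, the pairwise-averaging gossip protocol described above can be sketched as follows (a minimal illustration, not the APPG algorithm itself; the node values and ring topology are hypothetical):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def gossip_round(x, neighbors):
    """One randomized-gossip round: pick a random node i, then a random
    neighbor j, and replace both values with their pairwise average."""
    i = random.randrange(len(x))
    j = random.choice(neighbors[i])
    avg = (x[i] + x[j]) / 2.0
    x[i] = x[j] = avg
    return x

# Four nodes on a ring; repeated rounds drive every value toward the mean.
values = [1.0, 3.0, 5.0, 7.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
for _ in range(200):
    gossip_round(values, ring)
```

Because each update replaces two values with their average, the sum (and hence the mean, here 4.0) is preserved at every round; this conservation is what makes plain gossip converge to the average, and it is exactly what breaks under the delays and non-responding nodes the abstract points to.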
13 Citations
Asynchronous Policy Evaluation in Distributed Reinforcement Learning over Networks
- Computer Science · Autom.
- 2022
AsySPA: An Exact Asynchronous Algorithm for Convex Optimization Over Digraphs
- Computer Science · IEEE Transactions on Automatic Control
- 2020
This paper proposes a novel exact asynchronous subgradient-push algorithm (AsySPA) to solve an additive cost optimization problem over digraphs where each node only has access to a local convex…
Asynchronous Decentralized Accelerated Stochastic Gradient Descent
- Computer Science · IEEE Journal on Selected Areas in Information Theory
- 2021
Observing that communication and synchronization costs are the major bottlenecks in decentralized optimization, this paper reduces these costs at the algorithmic-design level, using randomization to limit the number of agents involved in each round of update.
Distributed Adaptive Newton Methods with Globally Superlinear Convergence
- Computer Science, Mathematics · Autom.
- 2022
Distributed Dual Gradient Tracking for Resource Allocation in Unbalanced Networks
- Computer Science · IEEE Transactions on Signal Processing
- 2020
This paper proposes a distributed dual gradient tracking algorithm (DDGT) to solve resource allocation problems over an unbalanced network, where each node in the network holds a private cost…
Decentralized Stochastic Gradient Tracking for Empirical Risk Minimization
- Computer Science · ArXiv
- 2019
This paper proposes a decentralized stochastic gradient tracking algorithm over peer-to-peer networks for empirical risk minimization problems, and explicitly evaluates its convergence rate in terms of key parameters of the problem, e.g., algebraic connectivity of the communication network, mini-batch size, and gradient variance.
Moniqua: Modulo Quantized Communication
- Computer Science
- 2019
Moniqua is proved to communicate a bounded number of bits per iteration while converging at the same asymptotic rate as the original algorithm with full-precision communication, and to be faster with respect to wall-clock time than other quantized decentralized algorithms.
Towards Optimal Convergence Rate in Decentralized Stochastic Training
- Computer Science · ArXiv
- 2020
This paper provides a tight lower bound on the iteration complexity of such methods in a stochastic non-convex setting and proposes DeFacto, a class of algorithms that converges at the optimal rate without additional theoretical assumptions.
Optimal Complexity in Decentralized Training
- Computer Science · ICML
- 2021
DeTAG is proposed, a practical gossip-style decentralized algorithm that achieves the lower bound up to a logarithmic gap, and it is shown that DeTAG enjoys faster convergence than baselines, especially on unshuffled data and in sparse networks.
Moniqua: Modulo Quantized Communication in Decentralized SGD
- Computer Science · ICML
- 2020
Moniqua is proved to communicate a bounded number of bits per iteration while converging at the same asymptotic rate as the original algorithm with full-precision communication.
References
Showing 1–10 of 34 references
Decentralized Consensus Optimization With Asynchrony and Delays
- Computer Science · IEEE Transactions on Signal and Information Processing over Networks
- 2016
An asynchronous, decentralized algorithm for consensus optimization that involves both primal and dual variables, uses fixed step-size parameters, and provably converges to the exact solution under a random agent assumption and both bounded and unbounded delay assumptions.
AsySPA: An Exact Asynchronous Algorithm for Convex Optimization Over Digraphs
- Computer Science · IEEE Transactions on Automatic Control
- 2020
This paper proposes a novel exact asynchronous subgradient-push algorithm (AsySPA) to solve an additive cost optimization problem over digraphs where each node only has access to a local convex…
Distributed optimization over time-varying directed graphs
- Computer Science, Mathematics · 52nd IEEE Conference on Decision and Control
- 2013
This work develops a broadcast-based algorithm, termed subgradient-push, which steers every node to an optimal value under a standard assumption of subgradient boundedness; it converges at a rate of O(ln t/√t), where the constant depends on the initial values at the nodes, the subgradient norms, and, more interestingly, on both the consensus speed and the imbalances of influence among the nodes.
Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization
- Computer Science · Proceedings of the IEEE
- 2018
This paper presents an overview of recent work in decentralized optimization and surveys the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.
Consensus-based distributed optimization: Practical issues and applications in large-scale machine learning
- Computer Science · 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
- 2012
The experiments illustrate the benefits of using asynchronous consensus-based distributed optimization when some nodes are unreliable and may fail or when messages experience time-varying delays.
Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks
- Computer Science · ICML
- 2017
The efficiency of MSDA is verified against state-of-the-art methods on two problems: least-squares regression and classification by logistic regression.
A Push-Pull Gradient Method for Distributed Optimization in Networks
- Computer Science · 2018 IEEE Conference on Decision and Control (CDC)
- 2018
The method unifies algorithms across different types of distributed architecture, including decentralized (peer-to-peer), centralized (master-slave), and semi-centralized (leader-follower), and converges linearly for strongly convex and smooth objective functions over a static directed network.
EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Computer Science · SIAM J. Optim.
- 2015
This paper develops a novel decentralized exact first-order algorithm (abbreviated as EXTRA) to solve the consensus optimization problem; it uses a fixed, large step size that can be determined independently of the network size or topology.
Asynchronous Gradient Push
- Computer Science, Mathematics · IEEE Transactions on Automatic Control
- 2021
Numerical experiments demonstrate that asynchronous gradient push can minimize the global objective faster than the state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size.
Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Mathematics, Computer Science · SIAM J. Optim.
- 2017
This paper introduces a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient tracking technique that converges to a global and consensual minimizer over time-varying graphs.
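The gradient-tracking update behind DIGing can be sketched in a few lines. The two-agent setup below is an illustrative assumption (quadratic local objectives, a doubly stochastic mixing matrix, and a hand-picked step size), not an example from the paper: each agent mixes its iterate with its neighbors', then descends along a tracker `y` that estimates the average gradient.

```python
import numpy as np

# Two agents, each holding a private quadratic f_i(x) = 0.5 * (x - b_i)^2,
# so the global minimizer is the average of b_1 and b_2, i.e. 3.0.
b = np.array([1.0, 5.0])
grad = lambda x: x - b             # stacked local gradients

W = np.array([[0.5, 0.5],          # doubly stochastic mixing matrix
              [0.5, 0.5]])
alpha = 0.1                        # fixed step size (assumed small enough)

x = np.zeros(2)                    # local iterates, one per agent
y = grad(x)                        # trackers, initialized to local gradients

for _ in range(100):
    x_new = W @ x - alpha * y              # mix, then step along the tracker
    y = W @ y + grad(x_new) - grad(x)      # track the average gradient
    x = x_new
# both entries of x approach the global minimizer 3.0
```

Since W is doubly stochastic, the tracker update preserves the invariant that the trackers sum to the sum of the current local gradients; at a consensual fixed point the trackers vanish, which forces the common iterate to the global minimizer.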