# Distributed gradient-based optimization in the presence of dependent aperiodic communication

@article{Redder2022DistributedGO, title={Distributed gradient-based optimization in the presence of dependent aperiodic communication}, author={Adrian Redder and Arunselvan Ramaswamy and Holger Karl}, journal={ArXiv}, year={2022}, volume={abs/2201.11343} }

Iterative distributed optimization algorithms involve multiple agents that communicate with each other, over time, in order to minimize/maximize a global objective. In the presence of unreliable communication networks, the Age-of-Information (AoI), which measures the freshness of data received, may be large and hence hinder algorithmic convergence. In this paper, we study the convergence of general distributed gradient-based optimization algorithms in the presence of communication that neither…
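The setting described above can be illustrated with a minimal sketch (hypothetical, not the paper's algorithm): agents run consensus-plus-gradient steps, but each agent only holds a possibly stale copy of its neighbors' iterates, because messages arrive over an unreliable, aperiodic channel. The quadratic losses, delivery probability, and step-size schedule below are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: n agents minimize f(x) = sum_i f_i(x) with
# f_i(x) = 0.5 * ||x - targets[i]||^2, whose global minimizer is the
# mean of the targets. Each link delivers a message only with some
# probability per round, so agents work with aged (stale) iterates.
rng = np.random.default_rng(0)
n, d = 4, 2                        # number of agents, problem dimension
targets = rng.normal(size=(n, d))

def grad(i, x):                    # local gradient of f_i
    return x - targets[i]

x = np.zeros((n, d))               # current iterates
copies = np.zeros((n, n, d))       # copies[i, j]: agent i's view of agent j

for t in range(2000):
    step = 1.0 / (t + 5)           # diminishing step size
    for i in range(n):
        copies[i, i] = x[i]        # own iterate is always fresh
        for j in range(n):
            if i != j and rng.random() < 0.5:   # unreliable link
                copies[i, j] = x[j]             # else reuse aged copy
    # Consensus averaging over (possibly outdated) copies + local gradient step.
    x = np.stack([copies[i].mean(axis=0) - step * grad(i, x[i])
                  for i in range(n)])

# Distance of each agent's iterate from the global minimizer.
err = np.linalg.norm(x - targets.mean(axis=0), axis=1).max()
print(err)
```

Despite the random, aperiodic communication, the diminishing step size lets the consensus error and the optimization error vanish together, so `err` ends up small; this mirrors the intuition that convergence survives as long as the AoI stays well behaved.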

## References

Showing 1–10 of 37 references

Distributed optimization over time-varying networks with stochastic information delays

- Computer Science, Mathematics · IEEE Transactions on Automatic Control
- 2021

This note presents a simple distributed gradient-based optimization framework, an associated algorithm, and an analysis wherein the objective function's sample-gradient is merely locally Lipschitz continuous.

Optimization over time-varying networks with unbounded delays

- Computer Science, Mathematics
- 2019

An analysis wherein the objective function's sample-gradient is merely locally Lipschitz continuous; this is the first analysis under such weak, general network conditions and makes a significant technical contribution in terms of the allowed class of objective functions.

Distributed Stochastic Optimization over Time-Varying Noisy Network

- Computer Science · ArXiv
- 2020

This is the first work to analyze and derive convergence rates for optimization algorithms over noisy networks; it shows that the proposed methods attain the optimal rate of $O(1/\sqrt{T})$ for nonsmooth convex optimization under an appropriate communication-noise condition.

Asynchronous Distributed Optimization Over Lossy Networks via Relaxed ADMM: Stability and Linear Convergence

- Computer Science, Mathematics · IEEE Transactions on Automatic Control
- 2021

This article addresses the problem by proposing a modified version of the relaxed alternating direction method of multipliers, which corresponds to the Peaceman–Rachford splitting method applied to the dual, and proves almost sure convergence of the proposed algorithm under general assumptions on the distribution of communication-loss and node-activation events.

Asymptotic Properties of Primal-Dual Algorithm for Distributed Stochastic Optimization over Random Networks with Imperfect Communications

- Mathematics · SIAM J. Control. Optim.
- 2018

This paper studies a distributed stochastic optimization problem over random networks with imperfect communications, subject to a global constraint given by the intersection of local constraint sets assigned to the agents; the problem is solved using the augmented Lagrangian technique combined with a projection method.

Distributed nonconvex constrained optimization over time-varying digraphs

- Computer Science, Mathematics · Math. Program.
- 2019

This paper considers nonconvex distributed constrained optimization over networks, modeled as directed (possibly time-varying) graphs. We introduce the first algorithmic framework for the…

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates

- Computer Science · ICML
- 2020

This paper introduces a unified convergence analysis covering a large variety of decentralized SGD methods which so far required different intuitions, have different applications, and have been developed separately in various communities.

ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks

- Computer Science · ICML
- 2021

ADOM uses a dual oracle, i.e., it assumes access to the gradient of the Fenchel conjugate of the individual loss functions, and its communication complexity is the same as that of accelerated Nesterov gradient method (Nesterov, 2003).

A Distributed Algorithm for Resource Allocation Over Dynamic Digraphs

- Mathematics · IEEE Transactions on Signal Processing
- 2017

It is shown that the proposed distributed algorithm converges to the global minimizer provided that the time-varying digraph is jointly strongly connected.

Asymptotic Convergence of Deep Multi-Agent Actor-Critic Algorithms

- Computer Science
- 2022

The analysis shows that multi-agent DDPG using neural networks to approximate the local policies and critics converges to limits with the following properties: the critic limits minimize the average squared Bellman loss; the actor limits parameterize a policy that maximizes the local critic's approximation of $Q_i$, where $i$ is the agent index.