# Asynchronous Distributed Optimization with Redundancy in Cost Functions

```bibtex
@article{Liu2021AsynchronousDO,
  title   = {Asynchronous Distributed Optimization with Redundancy in Cost Functions},
  author  = {Shuo Liu and Nirupam Gupta and Nitin H. Vaidya},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2106.03998}
}
```

This paper considers the problem of asynchronous distributed multi-agent optimization on a server-based system architecture. In this problem, each agent has a local cost, and the goal of the agents is to collectively find a minimum of their aggregate cost. A standard algorithm to solve this problem is the iterative distributed gradient-descent (DGD) method, implemented collaboratively by the server and the agents. In the synchronous setting, the algorithm proceeds from one iteration to the…
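The synchronous server-based DGD scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the step size, and the quadratic local costs are all assumptions made for the example. The key structural point is the barrier in each iteration, where the server waits for every agent's gradient before updating.

```python
def dgd(local_grads, x0=0.0, steps=200, eta=0.1):
    """Synchronous DGD sketch: the server broadcasts the current
    estimate x, waits for every agent's gradient of its local cost
    (the synchronization barrier), then applies the aggregated
    gradient before starting the next iteration."""
    x = x0
    for _ in range(steps):
        g = sum(grad(x) for grad in local_grads)  # barrier: all agents reply
        x -= eta * g
    return x

# Illustrative local costs f_i(x) = (x - a_i)^2; their aggregate
# sum_i f_i(x) is minimized at the mean of the a_i (here 3.0).
targets = [1.0, 2.0, 6.0]
local_grads = [lambda x, a=a: 2.0 * (x - a) for a in targets]
x_min = dgd(local_grads)
```

An asynchronous variant would update `x` as soon as any subset of (possibly stale) gradients arrives, removing the per-iteration barrier; analyzing that setting under cost-function redundancy is the subject of the paper.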

## 3 Citations

Utilizing Redundancy in Cost Functions for Resilience in Distributed Optimization and Learning

- Computer Science, ArXiv
- 2021

This work proposes a redundancy model for the agents' cost functions, the generic notion of (f, r; ε)-redundancy, where f and r are the parameters of Byzantine failures and asynchrony, respectively, and which characterizes the closeness between the agents' cost functions.

A Survey on Fault-tolerance in Distributed Optimization and Machine Learning

- Computer Science, ArXiv
- 2021

This survey investigates the current state of fault-tolerance research in distributed optimization, and aims to provide an overview of existing studies on both fault-tolerant distributed optimization theory and applicable algorithms.

Redundancy in cost functions for Byzantine fault-tolerant federated learning

- Computer Science, ResilientFL
- 2021

This paper summarizes recent results on server-based Byzantine fault-tolerant distributed optimization, with applicability to resilience in federated learning, and characterizes redundancies in the agents' cost functions that are necessary and sufficient for provable Byzantine resilience in distributed optimization.

## References

Showing 1-10 of 57 references

Fault-Tolerant Multi-Agent Optimization: Optimal Iterative Distributed Algorithms

- Computer Science, PODC
- 2016

This paper presents an iterative distributed algorithm that achieves optimal fault-tolerance, and ensures that at least |N|−f agents have weights that are bounded away from 0 (in particular, lower bounded by 1/(2(|N|−f))).

Distributed Asynchronous Constrained Stochastic Optimization

- Mathematics, Computer Science, IEEE Journal of Selected Topics in Signal Processing
- 2011

This paper studies two problems which often occur in applications arising in wireless sensor networks, and provides a diminishing-step-size algorithm that guarantees asymptotic convergence for both the consensus problem and the problem of cooperatively solving a convex optimization problem.

Fault-Tolerance in Distributed Optimization: The Case of Redundancy

- Computer Science, PODC
- 2020

This paper considers the case when a certain number of agents may be Byzantine faulty, and proposes a distributed optimization algorithm that allows the non-faulty agents to obtain a minimum of their aggregate cost if the minimal redundancy property holds.

Distributed optimization for cooperative agents: application to formation flight

- Computer Science, Mathematics, 2004 43rd IEEE Conference on Decision and Control (CDC)
- 2004

This paper presents a simple decentralized algorithm for optimization problems involving cooperative agents; it solves the dual of an artificially decomposed version of the primal problem, replacing one large computationally intractable problem with many smaller tractable ones.

Advances in Asynchronous Parallel and Distributed Optimization

- Computer Science, Mathematics, Proceedings of the IEEE
- 2020

This article reviews recent developments in the design and analysis of asynchronous optimization methods, covering both centralized methods, where all processors update a master copy of the optimization variables, and decentralized methods, where each processor maintains a local copy of the variables. The analysis provides insights into how the degree of asynchrony impacts convergence rates, especially for stochastic optimization methods.

An asynchronous mini-batch algorithm for regularized stochastic optimization

- Computer Science, Mathematics, 2015 54th IEEE Conference on Decision and Control (CDC)
- 2015

This work proposes an asynchronous mini-batch algorithm for regularized stochastic optimization problems that eliminates idle waiting, allows workers to run at their maximal update rates, and enjoys near-linear speedup when the number of workers is O(1/√ϵ).

Asynchronous distributed optimization using a randomized alternating direction method of multipliers

- Computer Science, Mathematics, 52nd IEEE Conference on Decision and Control
- 2013

This paper introduces a new class of random asynchronous distributed optimization methods that generalize the standard Alternating Direction Method of Multipliers (ADMM) to an asynchronous setting in which isolated components of the network are activated in an uncoordinated fashion.

Distributed delayed stochastic optimization

- Computer Science, Mathematics, 2012 IEEE 51st IEEE Conference on Decision and Control (CDC)
- 2012

This work exhibits n-node architectures whose optimization error in stochastic problems, in spite of asynchronous delays, scales asymptotically as O(1/√(nT)) after T iterations, a rate known to be optimal for a distributed system with n nodes even in the absence of delays.

Asynchronous Distributed Learning From Constraints

- Computer Science, Mathematics, IEEE Transactions on Neural Networks and Learning Systems
- 2020

The study shows that the (distributed) asynchronous method of multipliers (ASYMM) supports scenarios where selected constraints can be stored locally at each computational node without being shared with the rest of the network, opening the road to further investigation of privacy-preserving learning from constraints (LfC).

Distributed optimization in sensor networks

- Computer Science, Third International Symposium on Information Processing in Sensor Networks, 2004. IPSN 2004
- 2004

This paper investigates a general class of distributed algorithms for "in-network" data processing, eliminating the need to transmit raw data to a central point, and shows that for a broad class of estimation problems the distributed algorithms converge to within an ε-ball around the globally optimal value.