• Corpus ID: 235368237

Asynchronous Distributed Optimization with Redundancy in Cost Functions

@article{Liu2021AsynchronousDO,
  title={Asynchronous Distributed Optimization with Redundancy in Cost Functions},
  author={Shuo Liu and Nirupam Gupta and Nitin H. Vaidya},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.03998}
}
This paper considers the problem of asynchronous distributed multi-agent optimization on a server-based system architecture. In this problem, each agent has a local cost function, and the goal for the agents is to collectively find a minimum of their aggregate cost. A standard algorithm for this problem is the iterative distributed gradient-descent (DGD) method, implemented collaboratively by the server and the agents. In the synchronous setting, the algorithm proceeds from one iteration to the…
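As a concrete illustration of the synchronous baseline described above, the sketch below simulates server-based DGD with simple quadratic local costs. It is a minimal sketch, not the paper's implementation: the quadratic costs Q_i(x) = ||x - b_i||^2, the step-size schedule, and all names (local_gradient, b, n_agents) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_agents, dim = 5, 3
    b = rng.normal(size=(n_agents, dim))  # agent i privately holds b[i]

    def local_gradient(i, x):
        # Gradient of the assumed local cost Q_i(x) = ||x - b[i]||^2.
        return 2.0 * (x - b[i])

    x = np.zeros(dim)  # the server's current estimate
    for t in range(200):
        step = 0.1 / (t + 1)  # diminishing step size
        # Synchronous round: the server waits for gradients from ALL agents
        # before updating. In the asynchronous setting the paper studies,
        # the server would proceed after hearing from only a subset.
        grads = [local_gradient(i, x) for i in range(n_agents)]
        x = x - step * np.sum(grads, axis=0)

    print(x, b.mean(axis=0))  # x approaches the aggregate minimum, the mean of the b[i]

The aggregate cost in this toy setup is minimized at the mean of the b[i], which makes it easy to check that the iterates converge to the right point.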

Citations

Utilizing Redundancy in Cost Functions for Resilience in Distributed Optimization and Learning
TLDR
This paper proposes a way to model the agents' cost functions via the generic notion of (f, r; ε)-redundancy, where f and r are the parameters for Byzantine failures and asynchrony, respectively, and ε characterizes the closeness between agents' cost functions (a formalization is sketched below).
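For context, here is a hedged paraphrase of how (f, r; ε)-redundancy is typically formalized in this line of work; the notation Q_i for agent i's cost and the exact quantifiers are assumptions, so consult the paper for the authoritative statement:

    \hat{S} \subseteq S \subseteq \{1,\dots,n\},\quad |S| = n-f,\quad |\hat{S}| \ge n-r-f
    \;\implies\;
    \mathrm{dist}\!\left(\arg\min_x \sum_{i\in\hat{S}} Q_i(x),\ \arg\min_x \sum_{i\in S} Q_i(x)\right) \le \epsilon,

where dist denotes the Hausdorff distance between the two solution sets.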
A Survey on Fault-tolerance in Distributed Optimization and Machine Learning
TLDR
This survey investigates the current state of fault-tolerance research in distributed optimization, and aims to provide an overview of existing studies on both the theory of fault-tolerant distributed optimization and applicable algorithms.
Redundancy in cost functions for Byzantine fault-tolerant federated learning
TLDR
This paper summarizes recent results on server-based Byzantine fault-tolerant distributed optimization, with applications to resilience in federated learning, and characterizes redundancies in agents' cost functions that are necessary and sufficient for provable Byzantine resilience in distributed optimization.

References

SHOWING 1-10 OF 57 REFERENCES
Fault-Tolerant Multi-Agent Optimization: Optimal Iterative Distributed Algorithms
TLDR
This paper presents an iterative distributed algorithm that achieves optimal fault-tolerance, and ensures that at least |N|−f agents have weights that are bounded away from 0 (in particular, lower bounded by 1/(2(|N|−f))).
Distributed Asynchronous Constrained Stochastic Optimization
  • K. Srivastava, A. Nedić
  • Mathematics, Computer Science
    IEEE Journal of Selected Topics in Signal Processing
  • 2011
TLDR
This paper studies two problems that often arise in applications of wireless sensor networks, a consensus problem and the cooperative solution of a convex optimization problem, and provides a diminishing step-size algorithm that guarantees asymptotic convergence for both; the standard step-size conditions are sketched below.
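The "diminishing step size" in such algorithms typically refers to the standard Robbins–Monro conditions, stated here as general context rather than as this paper's exact assumptions:

    \alpha_k > 0,\qquad \sum_{k=0}^{\infty} \alpha_k = \infty,\qquad \sum_{k=0}^{\infty} \alpha_k^2 < \infty,

which are satisfied, for example, by \alpha_k = 1/(k+1).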
Fault-Tolerance in Distributed Optimization: The Case of Redundancy
TLDR
This paper considers the case when a certain number of agents may be Byzantine faulty, and proposes a distributed optimization algorithm that allows the non-faulty agents to obtain a minimum of their aggregate cost if the minimal redundancy property holds.
Distributed optimization for cooperative agents: application to formation flight
TLDR
This paper presents a simple decentralized algorithm for optimization problems involving cooperative agents; it solves the dual of an artificially decomposed version of the primal problem, replacing one large, computationally intractable problem with many smaller tractable problems (a generic sketch of this dual-decomposition idea follows below).
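Dual decomposition, the technique this summary describes, can be sketched generically as follows (the notation f_i, x_i, z, \lambda_i is assumed here, not taken from the paper). The coupled problem is first decomposed by giving each agent its own copy of the variable:

    \min_{x_1,\dots,x_m,\;z} \sum_{i} f_i(x_i) \quad \text{s.t.}\quad x_i = z \ \ \forall i,
    \qquad
    L(x, z, \lambda) = \sum_{i} \left( f_i(x_i) + \lambda_i^{\top}(x_i - z) \right).

For fixed multipliers \lambda, the Lagrangian separates across agents, so each agent can minimize its own term independently; the multipliers are then updated by (sub)gradient ascent on the dual, which is how one large intractable problem becomes many small tractable ones.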
Advances in Asynchronous Parallel and Distributed Optimization
TLDR
This article reviews recent developments in the design and analysis of asynchronous optimization methods, covering both centralized methods, where all processors update a master copy of the optimization variables, and decentralized methods, where each processor maintains a local copy of the variables. The analysis provides insights into how the degree of asynchrony impacts convergence rates, especially for stochastic optimization methods.
An asynchronous mini-batch algorithm for regularized stochastic optimization
TLDR
This work proposes an asynchronous mini-batch algorithm for regularized stochastic optimization problems that eliminates idle waiting and allows workers to run at their maximal update rates; it enjoys near-linear speedup when the number of workers is O(1/√ϵ).
Asynchronous distributed optimization using a randomized alternating direction method of multipliers
TLDR
This paper introduces a new class of randomized asynchronous distributed optimization methods that generalize the standard Alternating Direction Method of Multipliers (ADMM) to an asynchronous setting in which isolated components of the network are activated in an uncoordinated fashion; the synchronous ADMM iteration being generalized is sketched below.
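For reference, here is the standard synchronous ADMM iteration that such methods randomize, written in scaled-dual form for the generic problem min f(x) + g(z) subject to Ax + Bz = c (textbook notation, not the paper's):

    x^{k+1} = \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\|Ax + Bz^{k} - c + u^{k}\|^{2},
    z^{k+1} = \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - c + u^{k}\|^{2},
    u^{k+1} = u^{k} + Ax^{k+1} + Bz^{k+1} - c.

The asynchronous variant activates only isolated parts of the network at each step instead of performing all updates in lockstep.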
Distributed delayed stochastic optimization
TLDR
This work exhibits n-node architectures whose optimization error on stochastic problems, in spite of asynchronous delays, scales asymptotically as O(1/√(nT)) after T iterations, a rate known to be optimal for a distributed system with n nodes even in the absence of delays (see the note below).
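Why this rate is optimal can be seen with a standard sample-counting argument (a general observation, not quoted from the paper): n nodes running for T iterations process N = nT stochastic gradients in total, and

    O\!\left(\frac{1}{\sqrt{nT}}\right) = O\!\left(\frac{1}{\sqrt{N}}\right)

matches the best rate any centralized method can achieve from N samples, so the asynchronous delays cost nothing asymptotically.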
Asynchronous Distributed Learning From Constraints
TLDR
The study shows that the (distributed) asynchronous method of multipliers (ASYMM) supports scenarios where selected constraints can be stored locally at each computational node without being shared with the rest of the network, opening the way to further investigation of privacy-preserving LfC.
Distributed optimization in sensor networks
  • M. Rabbat, R. Nowak
  • Computer Science
    Third International Symposium on Information Processing in Sensor Networks, 2004. IPSN 2004
  • 2004
TLDR
This paper investigates a general class of distributed algorithms for "in-network" data processing, eliminating the need to transmit raw data to a central point, and shows that for a broad class of estimation problems the distributed algorithms converge to within an ε-ball around the globally optimal value.