Corpus ID: 235446492

A Survey on Fault-tolerance in Distributed Optimization and Machine Learning

@article{Liu2021ASO,
  title={A Survey on Fault-tolerance in Distributed Optimization and Machine Learning},
  author={Shuo Liu},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.08545}
}
The robustness of distributed optimization is an emerging field of study, motivated by various applications of distributed optimization including distributed machine learning, distributed sensing, and swarm robotics. With the rapid expansion of the scale of distributed systems, resilient distributed algorithms for optimization are needed, in order to mitigate system failures, communication issues, or even malicious attacks. This survey investigates the current state of fault-tolerance research…


References

SHOWING 1-10 OF 137 REFERENCES
Fault-Tolerant Distributed Optimization (Part IV): Constrained Optimization with Arbitrary Directed Networks
This report generalizes previous results on fully connected networks and unconstrained optimization to arbitrary directed communication networks and constrained optimization, and provides a matrix representation for iterative approximate crash consensus.
Fault-Tolerant Multi-Agent Optimization: Optimal Iterative Distributed Algorithms
This paper presents an iterative distributed algorithm that achieves optimal fault-tolerance and ensures that at least |N|-f agents have weights that are bounded away from 0 (in particular, lower bounded by 1/(2(|N|-f))).
A survey of distributed optimization
This survey paper offers a detailed overview of existing distributed optimization algorithms and their applications in power systems, focusing on the optimal coordination of distributed energy resources.
A Case of Distributed Optimization in Adversarial Environment
This paper proposes a method to thwart data injection attacks on distributed optimization algorithms, based on the observation that malicious nodes tend to give themselves away when broadcasting messages intended to drive the consensus value away from the optimal point of the regular nodes in the network.
Detection of Insider Attacks in Distributed Projected Subgradient Algorithms
A general neural network is shown to be particularly suitable for detecting and localizing malicious agents, since it can effectively capture the nonlinear relationships underlying the collected data; a collaborative peer-to-peer machine learning protocol based on gossip exchanges, one of the state-of-the-art approaches in federated learning, is proposed to facilitate training the neural network models.
Consensus-based distributed optimization with malicious nodes
  • S. Sundaram, B. Gharesifard
  • Computer Science
  • 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)
  • 2015
A robust consensus-based distributed optimization algorithm is proposed that is guaranteed to converge to the convex hull of the set of minimizers of the non-adversarial nodes' functions; finding such sets of the largest size is shown to be NP-hard.
Resilient Distributed Optimization Algorithm Against Adversarial Attacks
This article proposes a novel resilient distributed optimization algorithm that exploits trusted agents, which cannot be compromised by adversarial attacks and form a connected dominating set in the original graph, to constrain the effects of adversarial attacks.
Convergence Rate of Distributed Averaging Dynamics and Optimization in Networks
  • A. Nedić
  • Computer Science
  • Found. Trends Syst. Control.
  • 2015
This tutorial provides an overview of the convergence rates of distributed algorithms for coordination, and their relevance to optimization, in a system of autonomous agents embedded in a communication network where each agent is aware of (and can communicate with) only its local neighbors.
Fault-Tolerance in Distributed Optimization: The Case of Redundancy
This paper considers the case when a certain number of agents may be Byzantine faulty, and proposes a distributed optimization algorithm that allows the non-faulty agents to obtain a minimum of their aggregate cost if the minimal redundancy property holds.
Byzantine Fault Tolerant Distributed Linear Regression
This paper considers the problem of Byzantine fault tolerance in distributed linear regression in a multi-agent system, and shows that the server can achieve this objective deterministically by robustifying the original distributed gradient descent method with norm-based filters, namely 'norm filtering' and 'norm-cap filtering'.