• Corpus ID: 208310035

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

@inproceedings{Fang2020LocalMP,
  title={Local Model Poisoning Attacks to Byzantine-Robust Federated Learning},
  author={Minghong Fang and Xiaoyu Cao and Jinyuan Jia and Neil Zhenqiang Gong},
  booktitle={USENIX Security Symposium},
  year={2020}
}
In federated learning, multiple client devices jointly learn a machine learning model: each client device maintains a local model for its local training dataset, while a master device maintains a global model via aggregating the local models from the client devices. The machine learning community recently proposed several federated learning methods that were claimed to be robust against Byzantine failures (e.g., system failures, adversarial manipulations) of certain client devices. In this work… 
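The aggregation step described in this abstract is exactly where Byzantine-robust federated learning methods differ from plain averaging: rules such as coordinate-wise median and trimmed mean replace the mean so that a few manipulated local models cannot drag the global model arbitrarily far. The NumPy sketch below contrasts the two behaviors; it is a minimal illustration of the general idea rather than the exact rules evaluated in the paper, and the function names (mean_aggregate, coordinate_median, trimmed_mean) are chosen here for illustration.

```python
import numpy as np

def mean_aggregate(updates):
    """Plain FedAvg-style aggregation: average all client updates."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """Byzantine-robust rule: take the coordinate-wise median of client updates."""
    return np.median(updates, axis=0)

def trimmed_mean(updates, trim_k):
    """Byzantine-robust rule: in each coordinate, drop the trim_k largest and
    trim_k smallest values, then average what remains."""
    sorted_updates = np.sort(updates, axis=0)            # sort each coordinate independently
    kept = sorted_updates[trim_k:len(updates) - trim_k]
    return np.mean(kept, axis=0)

# Toy round: 8 honest clients plus 2 clients submitting poisoned updates.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 5))     # updates near the honest direction
poisoned = np.full((2, 5), -50.0)                        # adversarial updates pushed far away
updates = np.vstack([honest, poisoned])

print("mean        :", mean_aggregate(updates))          # skewed badly by the two attackers
print("median      :", coordinate_median(updates))       # stays near the honest updates
print("trimmed mean:", trimmed_mean(updates, trim_k=2))  # also stays near the honest updates
```

With two poisoned updates out of ten, the plain mean is pulled to roughly -9 in every coordinate, while the median and the trimmed mean remain close to the honest value of about 1; this paper studies how an attacker can still manipulate such robust rules through carefully crafted local models.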
Dynamic Federated Learning Model for Identifying Adversarial Clients
TLDR
A dynamic federated learning model is proposed that dynamically discards adversarial clients, thereby preventing corruption of the global learning model.
Adversarially Robust Federated Learning for Neural Networks
  • 2020
In federated learning, data is distributed among local clients which collaboratively train a prediction model using secure aggregation. To preserve the privacy of the clients, the federated learning…
Mitigating Sybil Attacks on Differential Privacy based Federated Learning
TLDR
This work implements the first Sybil attacks on differential-privacy-based federated learning architectures and shows their impact on model convergence; it also proposes a defense that monitors the average loss of all participants in each round to detect convergence anomalies and counters the attacks using the prediction cost reported by each client.
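As a rough illustration of the loss-monitoring idea in the summary above, the sketch below flags a training round whose average reported loss deviates sharply from the recent trend. It is a simplified stand-in rather than the paper's algorithm; the function name flag_anomalous_round and the z-score threshold are assumptions made here.

```python
import numpy as np

def flag_anomalous_round(loss_history, current_losses, z_threshold=3.0):
    """Illustrative convergence-anomaly check (not the paper's exact method):
    compare this round's average reported loss against the recent trend and
    flag the round if it deviates by more than z_threshold standard deviations."""
    current_avg = float(np.mean(current_losses))
    if len(loss_history) < 3:                     # not enough history to judge yet
        return False, current_avg
    mu = np.mean(loss_history)
    sigma = np.std(loss_history) + 1e-12          # avoid division by zero
    is_anomalous = abs(current_avg - mu) > z_threshold * sigma
    return is_anomalous, current_avg

# Simulated rounds: losses shrink steadily, then a poisoned round spikes them.
history = []
rounds = [[0.9, 1.0, 0.95], [0.7, 0.75, 0.72], [0.55, 0.6, 0.58],
          [0.45, 0.5, 0.48], [2.4, 2.6, 2.5]]     # the last round looks poisoned
for t, losses in enumerate(rounds):
    anomalous, avg = flag_anomalous_round(history, losses)
    print(f"round {t}: average loss {avg:.2f}, anomalous={anomalous}")
    history.append(avg)
```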
CONTRA: Defending Against Poisoning Attacks in Federated Learning
TLDR
This paper proposes CONTRA, a defense scheme against poisoning attacks in FL systems (e.g., label-flipping and backdoor attacks), and shows that CONTRA significantly reduces the attack success rate while maintaining high global-model accuracy.
Provably Secure Federated Learning against Malicious Clients
TLDR
This work shows that the label predicted by the ensemble global model for a testing example is provably not affected by a bounded number of malicious clients, and demonstrates that the derived bound is tight.
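The certification intuition summarized above can be pictured with a plain majority vote over an ensemble of global models, each trained on a different subset of clients: the predicted label changes only if enough ensemble members are corrupted to overturn the vote margin. The sketch below shows that intuition only; it is not the paper's construction or its formal bound, and the names and numbers are hypothetical.

```python
import numpy as np

def majority_vote_with_margin(votes, num_labels):
    """Return the majority label and the vote gap between the top two labels.
    Intuitively, an attacker must flip more than half of that gap's worth of
    ensemble members' votes to change the prediction, which is a simplified
    version of bounding the effect of malicious clients."""
    counts = np.bincount(votes, minlength=num_labels)
    order = np.argsort(counts)[::-1]
    top, runner_up = order[0], order[1]
    return top, counts[top] - counts[runner_up]

# Hypothetical ensemble of 11 global models, each trained on a random subset of
# clients, voting on one test example with labels 0..2.
votes = np.array([1, 1, 1, 1, 1, 1, 1, 2, 2, 0, 0])
label, margin = majority_vote_with_margin(votes, num_labels=3)
print(f"predicted label {label}, vote margin {margin}")   # label 1 with margin 5
```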
Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning
TLDR
This work designs a defense against poisoning of FL, called divide-and-conquer (DnC), and demonstrates that DnC outperforms all existing Byzantine-robust FL algorithms in defeating model poisoning attacks; specifically, it is 2.5× to 12× more resilient in the authors' experiments with different datasets and models.
The Limitations of Federated Learning in Sybil Settings
TLDR
This work considers the susceptibility of federated learning to sybil attacks, proposes a taxonomy of sybil objectives and strategies, and introduces FoolsGold, a defense against targeted sybil-based poisoning that identifies sybils based on the diversity of client updates.
TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks
TLDR
TESSERACT is a defense against directed deviation attacks, a state-of-the-art class of model poisoning attacks; it builds on the intuition that, in a federated learning setting, certain patterns of gradient flips are indicative of an attack.
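One simple way to picture a "gradient flip" signal is the fraction of coordinates in which a client's update opposes the sign of a reference direction, such as the previous round's aggregate. The sketch below computes that fraction; it is an illustrative stand-in rather than TESSERACT's actual score, and sign_flip_fraction is a name chosen here.

```python
import numpy as np

def sign_flip_fraction(client_update, reference_update):
    """Fraction of coordinates where the client's update points in the opposite
    direction to a reference update (e.g., last round's aggregate). In this
    sketch, a persistently high fraction is treated as suspicious."""
    flips = np.sign(client_update) != np.sign(reference_update)
    return float(np.mean(flips))

rng = np.random.default_rng(1)
reference = rng.normal(size=100)                        # stand-in for last round's aggregate
honest = reference + rng.normal(scale=0.3, size=100)    # mostly agrees with the reference
attacker = -reference                                   # directed deviation: flip every coordinate

print("honest flip fraction  :", sign_flip_fraction(honest, reference))
print("attacker flip fraction:", sign_flip_fraction(attacker, reference))
```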
Towards Optimized Model Poisoning Attacks Against Federated Learning
Federated learning (FL) enables many data owners (e.g., mobile device owners) to train a joint ML model (e.g., a next-word prediction classifier) without sharing their private training data…
Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation
TLDR
AttestedFL is proposed, a defense mechanism that monitors the training of individual nodes through state persistence in order to detect malicious workers; it exposes an attacker's malicious behavior and removes unreliable nodes from the aggregation process so that the FL process converges faster.

References

Showing 1-10 of 71 references
Mitigating Sybils in Federated Learning Poisoning
TLDR
FoolsGold is described, a novel defense that identifies poisoning sybils based on the diversity of client updates in the distributed learning process and that exceeds the capabilities of existing state-of-the-art approaches to countering sybil-based label-flipping and backdoor poisoning attacks.
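A minimal sketch of the diversity idea behind FoolsGold is shown below: clients whose accumulated updates are nearly identical to another client's, as sybils pushing the same poisoning objective tend to be, receive a reduced aggregation weight. This is a simplification for illustration; the published algorithm additionally applies pardoning and logit-based rescaling, and the function name here is an assumption.

```python
import numpy as np

def similarity_based_weights(historical_updates):
    """Simplified FoolsGold-style reweighting: a client whose accumulated update
    is almost identical to some other client's gets a weight near zero, while
    diverse (honest-looking) clients keep most of their weight."""
    norms = np.linalg.norm(historical_updates, axis=1, keepdims=True) + 1e-12
    normed = historical_updates / norms
    cosine = normed @ normed.T
    np.fill_diagonal(cosine, -1.0)                 # ignore self-similarity
    max_sim = cosine.max(axis=1)                   # each client's closest match
    weights = np.clip(1.0 - max_sim, 0.0, 1.0)     # highly similar clients get tiny weight
    return weights / (weights.sum() + 1e-12)

rng = np.random.default_rng(2)
honest = rng.normal(size=(5, 20))                  # diverse honest clients
sybil = np.tile(rng.normal(size=(1, 20)), (3, 1))  # three sybils pushing the same update
weights = similarity_based_weights(np.vstack([honest, sybil]))
print(np.round(weights, 3))                        # the last three (sybil) weights are ~0
```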
Analyzing Federated Learning through an Adversarial Lens
TLDR
This work explores the threat of model poisoning attacks on federated learning initiated by a single, non-colluding malicious agent where the adversarial objective is to cause the model to misclassify a set of chosen inputs with high confidence.
How To Backdoor Federated Learning
TLDR
This work designs and evaluates a new model-poisoning methodology based on model replacement and demonstrates that any participant in federated learning can introduce hidden backdoor functionality into the joint global model, e.g., to ensure that an image classifier assigns an attacker-chosen label to images with certain features.
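The model-replacement trick summarized above can be sketched as scaling the attacker's deviation so that, after the server averages all client models, the global model lands approximately on the attacker's backdoored parameters. The sketch below assumes plain unweighted averaging and honest clients that barely move; in the paper, the boost factor also accounts for the server's learning rate.

```python
import numpy as np

def model_replacement_update(backdoored_model, global_model, num_clients):
    """Model-replacement sketch: boost the attacker's deviation so that equal-weight
    averaging over num_clients models lands (approximately) on the backdoored model,
    assuming the other clients submit models close to the current global model."""
    gamma = num_clients                                   # boost factor for plain averaging
    return gamma * (backdoored_model - global_model) + global_model

n = 10
global_model = np.zeros(4)
backdoored = np.array([0.5, -0.2, 0.1, 0.3])              # attacker's target parameters
honest_models = [global_model + 0.001 * np.random.default_rng(i).normal(size=4)
                 for i in range(n - 1)]                   # honest clients barely move
attacker_model = model_replacement_update(backdoored, global_model, n)

new_global = np.mean(honest_models + [attacker_model], axis=0)
print(np.round(new_global, 3))                            # approximately the backdoored model
```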
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
TLDR
This work considers a new type of attacks, called backdoor attacks, where the attacker's goal is to create a backdoor into a learning-based authentication system, so that he can easily circumvent the system by leveraging the backdoor.
Exploiting Unintended Feature Leakage in Collaborative Learning
TLDR
This work shows that an adversarial participant can infer the presence of exact data points -- for example, specific locations -- in others' training data and develops passive and active inference attacks to exploit this leakage.
When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
TLDR
StingRay, a broadly applicable targeted poisoning attack, is designed: it is practical against 4 machine learning applications that use 3 different learning algorithms, and it can bypass 2 existing defenses.
Distributed Statistical Machine Learning in Adversarial Settings: Byzantine Gradient Descent
  • Yudong Chen, Lili Su, Jiaming Xu
  • Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems
  • 2018
TLDR
This paper proposes a simple variant of the classical gradient descent method and proves that the aggregated gradient, as a function of the model parameter, converges uniformly to the true gradient function.
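Robust gradient aggregation of this kind is often built around a geometric median, which a minority of arbitrarily bad gradients cannot drag far from the honest cluster. The Weiszfeld-iteration sketch below illustrates that aggregation idea; it is not the paper's exact construction (which first groups gradients into batches before taking a median), and the function name is an assumption.

```python
import numpy as np

def geometric_median(points, iterations=100, eps=1e-8):
    """Weiszfeld iteration for the geometric median of a set of gradient vectors:
    a robust aggregate that a minority of outliers cannot pull arbitrarily far."""
    median = np.mean(points, axis=0)                  # start from the plain mean
    for _ in range(iterations):
        distances = np.maximum(np.linalg.norm(points - median, axis=1), eps)
        weights = 1.0 / distances
        new_median = (weights[:, None] * points).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_median - median) < eps:
            break
        median = new_median
    return median

rng = np.random.default_rng(3)
honest_grads = rng.normal(loc=1.0, scale=0.1, size=(8, 3))
byzantine_grads = np.full((2, 3), 100.0)              # arbitrarily bad gradients
grads = np.vstack([honest_grads, byzantine_grads])
print("mean            :", np.round(np.mean(grads, axis=0), 2))   # dragged toward 100
print("geometric median:", np.round(geometric_median(grads), 2))  # stays near the honest cluster
```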
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
TLDR
A theoretically grounded optimization framework specifically designed for linear regression is proposed; its effectiveness is demonstrated on a range of datasets and models, and formal guarantees are provided for its convergence, together with an upper bound on the effect of poisoning attacks when the defense is deployed.
Practical Black-Box Attacks against Machine Learning
TLDR
This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
Certified Defenses for Data Poisoning Attacks
TLDR
This work addresses the worst-case loss of a defense in the face of a determined attacker by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization.
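The "outlier removal followed by empirical risk minimization" pipeline can be pictured with a simple sphere-style sanitizer that drops training points lying far from their class centroid before a model is fit. The sketch below is an illustrative instance of such a defense rather than the exact defenses certified in the paper; the quantile threshold and the function name are choices made here.

```python
import numpy as np

def remove_outliers_by_centroid(X, y, radius_quantile=0.95):
    """Sphere-style sanitization sketch: drop training points that lie unusually far
    from their class centroid; ordinary empirical risk minimization would then be
    run on the points that remain."""
    keep = np.zeros(len(X), dtype=bool)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        centroid = X[idx].mean(axis=0)
        dists = np.linalg.norm(X[idx] - centroid, axis=1)
        threshold = np.quantile(dists, radius_quantile)
        keep[idx[dists <= threshold]] = True
    return X[keep], y[keep]

rng = np.random.default_rng(4)
X_clean = np.vstack([rng.normal(0, 1, size=(50, 2)), rng.normal(5, 1, size=(50, 2))])
y_clean = np.array([0] * 50 + [1] * 50)
X_poison = np.array([[50.0, 50.0], [60.0, -60.0]])     # far-away poisoned points
y_poison = np.array([0, 1])

X_filtered, y_filtered = remove_outliers_by_centroid(np.vstack([X_clean, X_poison]),
                                                     np.concatenate([y_clean, y_poison]))
print(f"{len(y_clean) + len(y_poison)} -> {len(y_filtered)} points kept after sanitization")
```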