Corpus ID: 220487047

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning

@article{Wang2020AttackOT,
  title={Attack of the Tails: Yes, You Really Can Backdoor Federated Learning},
  author={Hongyi Wang and Kartik K. Sreenivasan and Shashank Rajput and Harit Vishwakarma and Saurabh Agarwal and Jy-yong Sohn and Kangwook Lee and Dimitris Papailiopoulos},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.05084}
}
Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in the form of backdoors during training. The goal of a backdoor is to corrupt the performance of the trained model on specific sub-tasks (e.g., by classifying green cars as frogs). A range of FL backdoor attacks have been introduced in the literature, but also methods to defend against them, and it is currently an open question whether FL systems can be tailored to be robust against backdoors. In this… 
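
As a rough sketch of the attack surface described above (not the authors' actual pipeline), the snippet below shows how a malicious FL client could relabel an edge-case subset of its local data, e.g. green cars, to an attacker-chosen class before running otherwise ordinary local training. The names poison_local_dataset, is_edge_case, and local_train are illustrative, and the model is assumed to be a dict of numpy-style weight arrays.

import copy

def poison_local_dataset(dataset, is_edge_case, target_label):
    # Relabel edge-case samples (e.g. green cars -> "frog"); leave everything else intact.
    return [(x, target_label) if is_edge_case(x) else (x, y) for x, y in dataset]

def malicious_client_update(global_weights, dataset, is_edge_case, target_label, local_train):
    # Train a copy of the global model on the poisoned data and return the weight delta.
    weights = copy.deepcopy(global_weights)            # e.g. a dict of numpy arrays
    poisoned = poison_local_dataset(dataset, is_edge_case, target_label)
    local_train(weights, poisoned)                     # any ordinary local SGD routine
    return {name: weights[name] - global_weights[name] for name in global_weights}
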
PerDoor: Persistent Non-Uniform Backdoors in Federated Learning using Adversarial Perturbations
TLDR
PerDoor is proposed, a persistent-by-construction backdoor injection technique for FL, driven by adversarial perturbation and targeting parameters of the centralized model that deviate less in successive FL rounds and contribute the least to the main task accuracy.
BaFFLe: Backdoor Detection via Feedback-based Federated Learning
TLDR
This paper proposes a novel defense, dubbed BaFFLe (Backdoor detection via Feedback-based Federated Learning), to secure FL against backdoor attacks, and shows that it can achieve very high detection rates against state-of-the-art backdoor attacks even when relying on straightforward methods to validate the model.
Backdoor Learning: A Survey
TLDR
This article summarizes and categorizes existing backdoor attacks and defenses based on their characteristics, and provides a unified framework for analyzing poisoning-based backdoor attacks, and summarizes widely adopted benchmark datasets.
FLGUARD: Secure and Private Federated Learning
TLDR
This work introduces FLGUARD, a poisoning defense framework that defends FL against state-of-the-art backdoor attacks while maintaining the benign performance of the aggregated model, and augments it with state-of-the-art secure computation techniques that securely evaluate the FLGUARD algorithm.
FLAME: Taming Backdoors in Federated Learning
TLDR
Evaluation of FLAME on several datasets stemming from application areas including image classification, word prediction, and IoT intrusion detection demonstrates that FLAME removes backdoors effectively with a negligible impact on the benign performance of the models.
Neurotoxin: Durable Backdoors in Federated Learning
TLDR
Neurotoxin is proposed, a simple one-line modification to existing backdoor attacks that acts by attacking parameters that change less in magnitude during training, and is found to double the durability of state-of-the-art backdoors.
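
Read as code, that "one-line modification" amounts to masking the attacker's update onto coordinates that benign training barely moves. The numpy sketch below is a toy reconstruction of that idea, not the authors' implementation; keep_ratio and the bottom-k selection are illustrative choices.

import numpy as np

def mask_to_rarely_updated_coords(malicious_update, observed_benign_update, keep_ratio=0.1):
    # Keep only the coordinates whose benign update magnitude is smallest,
    # i.e. project the malicious update onto "rarely updated" parameters.
    k = max(1, int(keep_ratio * malicious_update.size))
    keep = np.argsort(np.abs(observed_benign_update))[:k]
    masked = np.zeros_like(malicious_update)
    masked[keep] = malicious_update[keep]
    return masked

# toy usage on flattened parameter vectors
rng = np.random.default_rng(0)
benign = rng.normal(size=1000)
malicious = rng.normal(size=1000)
print(np.count_nonzero(mask_to_rarely_updated_coords(malicious, benign)))   # 100 surviving coordinates
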
Excess Capacity and Backdoor Poisoning
TLDR
This work presents a formal theoretical framework within which one can discuss backdoor data poisoning attacks for classification problems, and identifies a parameter, which the authors call memorization capacity, that captures the intrinsic vulnerability of a learning problem to a backdoor attack.
Certified Robustness for Free in Differentially Private Federated Learning
TLDR
This paper investigates both user-level and instance-level privacy in DPFL, proposes novel randomization mechanisms and analyses to achieve improved differential privacy, and proves their certified robustness under a bounded number of adversarial users or instances.
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection
TLDR
The performance and effectiveness of DeepSight are evaluated, and it is shown that DeepSight can mitigate state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
DECK: Model Hardening for Defending Pervasive Backdoors
TLDR
This paper develops a general pervasive attack based on an encoder-decoder architecture enhanced with a special transformation layer, and uses it to harden models, reducing the attack success rate of six pervasive backdoor attacks from 99.06% to 1.94% and surpassing seven state-of-the-art backdoor removal techniques.
...

References

SHOWING 1-10 OF 105 REFERENCES
Can You Really Backdoor Federated Learning?
TLDR
This paper conducts a comprehensive study of backdoor attacks and defenses for the EMNIST dataset, a real-life, user-partitioned, and non-iid dataset, and shows that norm clipping and "weak" differential privacy mitigate the attacks without hurting the overall performance.
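
For concreteness, here is a minimal server-side sketch of the two mitigations named in this TLDR: clipping each client update to a norm bound and adding Gaussian noise before averaging. The numpy code and parameter values are illustrative, not the paper's implementation.

import numpy as np

def clip_update(update, clip_norm):
    # Scale the update down if its L2 norm exceeds the bound.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def aggregate_with_clipping_and_noise(updates, clip_norm=1.0, noise_std=0.01, seed=0):
    # Norm-clip every client update, average, then add Gaussian noise ("weak" DP).
    rng = np.random.default_rng(seed)
    clipped = [clip_update(u, clip_norm) for u in updates]
    mean = np.mean(clipped, axis=0)
    return mean + rng.normal(scale=noise_std, size=mean.shape)

# toy usage: the over-scaled (possibly malicious) third update is clipped back
updates = [np.ones(5), np.ones(5), 100.0 * np.ones(5)]
print(aggregate_with_clipping_and_noise(updates, clip_norm=2.0))
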
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
TLDR
This work considers a new type of attack, called a backdoor attack, where the attacker's goal is to create a backdoor into a learning-based authentication system so that the system can easily be circumvented by leveraging the backdoor.
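
A minimal sketch of the kind of trigger-based data poisoning this reference describes, assuming images are numpy arrays in [0, 1]; the 3x3 corner patch, poisoning fraction, and function name are illustrative choices, not the paper's setup.

import numpy as np

def stamp_trigger(images, labels, target_label, poison_frac=0.05, patch_value=1.0, seed=0):
    # images: (N, H, W) array in [0, 1]; returns poisoned copies of images and labels.
    rng = np.random.default_rng(seed)
    x, y = images.copy(), labels.copy()
    idx = rng.choice(len(x), size=int(poison_frac * len(x)), replace=False)
    x[idx, -3:, -3:] = patch_value      # stamp a 3x3 trigger patch in the corner
    y[idx] = target_label               # relabel with the attacker-chosen class
    return x, y

# toy usage: 5% of 100 blank images get the trigger and label 7
x, y = stamp_trigger(np.zeros((100, 28, 28)), np.zeros(100, dtype=int), target_label=7)
print(int((y == 7).sum()))              # 5 poisoned samples
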
A Little Is Enough: Circumventing Defenses For Distributed Learning
TLDR
It is shown that 20% corrupt workers are sufficient to degrade the accuracy of a CIFAR10 model by 50%, as well as to introduce backdoors into MNIST and CIFAR10 models without hurting their accuracy.
How To Backdoor Federated Learning
TLDR
This work designs and evaluates a new model-poisoning methodology based on model replacement and demonstrates that any participant in federated learning can introduce hidden backdoor functionality into the joint global model, e.g., to ensure that an image classifier assigns an attacker-chosen label to images with certain features.
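
The model-replacement trick can be illustrated in a few lines: if the server averages n updates, the attacker scales its desired weight delta by n (divided by the server learning rate) so the backdoored weights survive averaging. The toy numpy example below is a sketch of that arithmetic, not the authors' code.

import numpy as np

def model_replacement_update(global_weights, backdoored_weights, n_clients, server_lr=1.0):
    # Scale (X - G) by n / eta so that, after averaging, the global model lands near X.
    return (n_clients / server_lr) * (backdoored_weights - global_weights)

# toy usage: with 10 clients whose benign updates are ~0, the scaled malicious
# update pulls the averaged model onto the attacker's backdoored weights
G = np.zeros(4)                                   # current global weights
X = np.array([0.5, -0.2, 0.1, 0.3])               # attacker's desired (backdoored) weights
benign_updates = [np.zeros(4) for _ in range(9)]
malicious = model_replacement_update(G, X, n_clients=10)
new_global = G + (1.0 / 10) * sum(benign_updates + [malicious])
print(new_global)                                 # approximately equal to X
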
Analyzing Federated Learning through an Adversarial Lens
TLDR
This work explores the threat of model poisoning attacks on federated learning initiated by a single, non-colluding malicious agent where the adversarial objective is to cause the model to misclassify a set of chosen inputs with high confidence.
Poisoning Attack in Federated Learning using Generative Adversarial Nets
  • Jiale Zhang, Junjun Chen, Di Wu, Bing Chen, Shui Yu
  • Computer Science
    2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE)
  • 2019
TLDR
This work studies and evaluates a poisoning attack on federated learning systems based on generative adversarial nets (GANs), where an attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, which do not belong to the attacker.
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
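
The robust-optimization view in this reference is usually instantiated with a projected gradient descent (PGD) adversary. Below is a generic L-infinity PGD loop in PyTorch with illustrative hyperparameters; adversarial training then minimizes the loss on the examples this function returns.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # L-infinity PGD: random start, signed-gradient ascent, projection onto the eps-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                              # keep a valid pixel range
    return x_adv.detach()
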
Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent
TLDR
Krum is proposed, an aggregation rule that satisfies a resilience property capturing the basic requirements for guaranteeing convergence despite f Byzantine workers, and is argued to be the first provably Byzantine-resilient algorithm for distributed SGD.
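
A compact sketch of the Krum rule as usually stated: score each update by the summed squared distances to its n - f - 2 nearest other updates and return the update with the smallest score. The numpy code below is an illustrative reimplementation, not the authors' reference code.

import numpy as np

def krum(updates, f):
    # Return the single update with the smallest sum of squared distances
    # to its n - f - 2 nearest other updates.
    n = len(updates)
    m = n - f - 2
    assert m >= 1, "Krum needs n > f + 2"
    U = np.stack(updates)
    d2 = np.sum((U[:, None, :] - U[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    scores = [np.sort(np.delete(d2[i], i))[:m].sum() for i in range(n)]
    return updates[int(np.argmin(scores))]

# toy usage: the huge Byzantine update is never selected
rng = np.random.default_rng(1)
benign = [0.1 * rng.normal(size=3) for _ in range(5)]
byzantine = np.array([1e3, 1e3, 1e3])
print(krum(benign + [byzantine], f=1))
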
DRACO: Byzantine-resilient Distributed Training via Redundant Gradients
TLDR
DRACO is presented, a scalable framework for robust distributed training that uses ideas from coding theory and comes with problem-independent robustness guarantees, and is shown to be several times to orders of magnitude faster than median-based approaches.
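
The simplest coding-theoretic redundancy scheme in this spirit is an r-fold repetition code: every gradient is computed by r workers and the server keeps the value a majority agrees on. The sketch below illustrates only that toy instance, not DRACO's actual encoder and decoder.

import numpy as np
from collections import Counter

def majority_decode(replicas):
    # replicas: the r copies of one gradient, some possibly corrupted by Byzantine workers.
    # Keep the value a majority of replicas agree on exactly; otherwise fall back
    # to a coordinate-wise median.
    keys = [tuple(np.asarray(r).ravel()) for r in replicas]
    value, count = Counter(keys).most_common(1)[0]
    if count > len(replicas) // 2:
        return np.array(value).reshape(np.asarray(replicas[0]).shape)
    return np.median(np.stack(replicas), axis=0)

# toy usage: two honest replicas outvote one corrupted replica
g = np.array([1.0, -2.0, 0.5])
print(majority_decode([g.copy(), g.copy(), np.array([9e9, 9e9, 9e9])]))   # recovers g
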
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
TLDR
This work presents the first robust and generalizable detection and mitigation system for DNN backdoor attacks, and identifies multiple mitigation techniques via input filters, neuron pruning and unlearning.
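
Neural Cleanse's detection step reverse-engineers a minimal trigger per label and flags labels whose trigger is anomalously small. The sketch below shows only the outlier test on per-label trigger L1 norms via a median-absolute-deviation anomaly index (with the commonly used threshold of 2); the trigger-optimization and mitigation steps are omitted.

import numpy as np

def suspicious_labels(trigger_l1_norms, threshold=2.0):
    # Flag labels whose reverse-engineered trigger is abnormally small,
    # using the MAD-based anomaly index.
    norms = np.asarray(trigger_l1_norms, dtype=float)
    med = np.median(norms)
    mad = 1.4826 * np.median(np.abs(norms - med))    # consistency constant for Gaussian data
    index = np.abs(norms - med) / (mad + 1e-12)
    return np.where((index > threshold) & (norms < med))[0]

# toy usage: label 3 needs a much smaller trigger than the rest, so it is flagged
print(suspicious_labels([120, 130, 118, 12, 125, 122]))   # -> [3]
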
...