Corpus ID: 231786691

Provably Secure Federated Learning against Malicious Clients

@inproceedings{Cao2021ProvablySF,
  title={Provably Secure Federated Learning against Malicious Clients},
  author={Xiaoyu Cao and Jinyuan Jia and Neil Zhenqiang Gong},
  booktitle={AAAI},
  year={2021}
}
Federated learning enables clients to collaboratively learn a shared global model without sharing their local training data with a cloud server. However, malicious clients can corrupt the global model to predict incorrect labels for testing examples. Existing defenses against malicious clients leverage Byzantine-robust federated learning methods. However, these methods cannot provably guarantee that the predicted label for a testing example is not affected by malicious clients. We bridge this… 
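The abstract is truncated before the defense itself is described. As a rough, hypothetical sketch of the general direction the title suggests, the snippet below trains several global models on random client subsets and labels a test input by majority vote, so that a large vote margin means many clients would have to be malicious to flip the prediction. All names, model shapes, and numbers are invented for this illustration and are not the authors' implementation.

```python
# Illustrative sketch only: majority-vote ensemble over models trained on
# random client subsets. A large vote margin means flipping the label would
# require corrupting many sub-models, and hence many clients.
import numpy as np

rng = np.random.default_rng(0)

def train_global_model(client_ids):
    """Placeholder for one round of federated training over `client_ids`."""
    # Returns a dummy linear model; a real system would run FedAvg here.
    return rng.normal(size=(10, 3))  # 10 features -> 3 classes

def ensemble_predict(x, n_clients=20, n_models=7, clients_per_model=5):
    votes = np.zeros(3, dtype=int)
    for _ in range(n_models):
        subset = rng.choice(n_clients, size=clients_per_model, replace=False)
        w = train_global_model(subset)
        votes[int(np.argmax(x @ w))] += 1
    label = int(np.argmax(votes))
    margin = votes[label] - np.sort(votes)[-2]  # gap between top-2 vote counts
    return label, margin

print(ensemble_predict(rng.normal(size=10)))
```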

Citations

CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
TLDR
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors, and exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
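As a hedged illustration of the clip-then-smooth idea mentioned in the CRFL summary above (norm-clipping the global parameters and adding Gaussian noise), one might write something like the following; the function name and constants are invented for this sketch and are not the CRFL implementation.

```python
# Minimal sketch: clip the global parameter vector to a norm bound, then add
# Gaussian noise to smooth the model. Constants are arbitrary placeholders.
import numpy as np

def clip_and_smooth(global_params, clip_norm=1.0, sigma=0.01, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(global_params)
    clipped = global_params * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(scale=sigma, size=clipped.shape)

params = np.random.default_rng(1).normal(size=100)
print(np.linalg.norm(clip_and_smooth(params)))
```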
Certified Federated Adversarial Training
TLDR
This work models an attacker who poisons the model to insert a weakness into the adversarial training such that the model displays apparent adversarial robustness, while the attacker can exploit the inserted weakness to bypass the adversarial training and force the model to misclassify adversarial examples.
MANDERA: Malicious Node Detection in Federated Learning via Ranking
TLDR
It is proved, under mild conditions, that MANDERA is guaranteed to detect all malicious nodes under typical Byzantine attacks with no prior knowledge or history about the participating nodes.
SignGuard: Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering
TLDR
It is shown that the element-wise sign of the gradient vector can provide valuable insight for detecting model poisoning attacks, and a novel approach, SignGuard, is proposed to enable Byzantine-robust federated learning through collaborative malicious gradient filtering.
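A minimal sketch, assuming a simple sign-statistic filter in the spirit of the summary above: clients whose fraction of positive gradient entries deviates sharply from the median are dropped before averaging. The threshold and data are made up, and this is not the actual SignGuard algorithm.

```python
# Illustrative sign-statistic filtering: exclude clients whose share of
# positive gradient entries is far from the median share, then average.
import numpy as np

def sign_filter(client_grads, tol=0.15):
    grads = np.asarray(client_grads)            # shape: (n_clients, dim)
    pos_frac = (grads > 0).mean(axis=1)         # per-client sign statistic
    median = np.median(pos_frac)
    keep = np.abs(pos_frac - median) <= tol
    return grads[keep].mean(axis=0), keep

rng = np.random.default_rng(2)
benign = rng.normal(size=(8, 50))
malicious = -10 * np.abs(rng.normal(size=(2, 50)))   # all-negative poisoned grads
agg, kept = sign_filter(np.vstack([benign, malicious]))
print(kept)   # malicious clients are filtered out
```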
RVFR: Robust Vertical Federated Learning
  • 2021
Vertical Federated Learning (VFL) is a distributed learning paradigm that allows multiple agents to jointly train a global model when each agent holds a different subset of features for the same set of samples.
Faithful Edge Federated Learning: Scalability and Privacy
  • Meng Zhang, Ermin Wei, R. Berry
  • Computer Science
    IEEE Journal on Selected Areas in Communications
  • 2021
TLDR
This study designs a Faithful Federated Learning (FFL) mechanism that approximates the Vickrey–Clarke–Groves payments via an incremental computation, as well as a scalable, differentially private FFL mechanism, the first differentially private faithful mechanism that maintains the desired economic properties.
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection
TLDR
The performance and effectiveness of DeepSight are evaluated, and it is shown that it can mitigate state-of-the-art backdoor attacks with a negligible impact on the model’s performance on benign data.
Deep Model Poisoning Attack on Federated Learning
TLDR
This paper performs a systematic investigation of threats in federated learning and proposes a novel optimization-based model poisoning attack, which not only achieves a high attack success rate but is also stealthy enough to bypass two existing defense methods.
Improving Security and Fairness in Federated Learning Systems
The ever-increasing use of Artificial Intelligence applications has made it apparent that the quality of the training datasets affects the performance of the models. To this end, Federated Learning aims…
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Federated Learning
TLDR
Contrary to the established belief, it is shown that FL, even without any defenses, is highly robust in practice, especially when simple defense mechanisms are used.

References

SHOWING 1-10 OF 49 REFERENCES
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
TLDR
This work performs the first systematic study on local model poisoning attacks to federated learning, assuming an attacker has compromised some client devices, and the attacker manipulates the local model parameters on the compromised client devices during the learning process such that the global model has a large testing error rate.
Robust Federated Learning via Collaborative Machine Teaching
TLDR
This study uses a few trusted instances provided by teachers as benign examples in the teaching process, proposing a collaborative and privacy-preserving machine teaching method that directly produces a robust prediction model despite extremely pervasive systematic data corruption.
How To Backdoor Federated Learning
TLDR
This work designs and evaluates a new model-poisoning methodology based on model replacement and demonstrates that any participant in federated learning can introduce hidden backdoor functionality into the joint global model, e.g., to ensure that an image classifier assigns an attacker-chosen label to images with certain features.
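The model-replacement idea in this summary can be illustrated with a toy calculation: the attacker scales its update so that, after the server averages all updates, the global model lands approximately on the attacker's backdoored weights. The sketch below is purely illustrative; all values and the averaging rule are assumptions.

```python
# Toy model-replacement sketch: the attacker's scaled update dominates the
# average, so the new global model is approximately the backdoored model X.
import numpy as np

def fedavg(global_w, client_ws, lr=1.0):
    mean_update = np.mean([w - global_w for w in client_ws], axis=0)
    return global_w + lr * mean_update

rng = np.random.default_rng(3)
G = rng.normal(size=20)                    # current global model
X = rng.normal(size=20)                    # attacker's backdoored model
n, lr = 10, 1.0
benign = [G + 0.01 * rng.normal(size=20) for _ in range(n - 1)]
malicious = (n / lr) * (X - G) + G         # scaled replacement update
new_G = fedavg(G, benign + [malicious], lr=lr)
print(np.linalg.norm(new_G - X))           # close to 0: global model ~= X
```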
DBA: Distributed Backdoor Attacks against Federated Learning
TLDR
The distributed backdoor attack (DBA) is proposed, a novel threat assessment framework that fully exploits the distributed nature of FL and can evade two state-of-the-art robust FL algorithms that defend against centralized backdoors.
Analyzing Federated Learning through an Adversarial Lens
TLDR
This work explores the threat of model poisoning attacks on federated learning initiated by a single, non-colluding malicious agent where the adversarial objective is to cause the model to misclassify a set of chosen inputs with high confidence.
Differentially Private Federated Learning: A Client Level Perspective
TLDR
The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance, and empirical studies suggest that given a sufficiently large number of participating clients, this procedure can maintain client-level differential privacy at only a minor cost in model performance.
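A minimal sketch of client-level differentially private aggregation as described above, assuming the common clip-then-add-Gaussian-noise recipe; the clipping bound and noise multiplier are arbitrary placeholders, not the paper's settings.

```python
# Sketch: norm-clip each client update, average, and add Gaussian noise
# calibrated to the clip bound. Parameters are illustrative only.
import numpy as np

def dp_aggregate(updates, clip=1.0, noise_mult=1.0, rng=None):
    rng = rng or np.random.default_rng()
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(scale=noise_mult * clip / len(updates), size=avg.shape)
    return avg + noise

ups = [np.random.default_rng(i).normal(size=30) for i in range(50)]
print(np.linalg.norm(dp_aggregate(ups)))
```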
Agnostic Federated Learning
TLDR
This work proposes a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions, and shows that this framework naturally yields a notion of fairness.
Exploiting Unintended Feature Leakage in Collaborative Learning
TLDR
This work shows that an adversarial participant can infer the presence of exact data points -- for example, specific locations -- in others' training data and develops passive and active inference attacks to exploit this leakage.
Practical Secure Aggregation for Privacy-Preserving Machine Learning
TLDR
This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
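The core cancellation trick behind such secure aggregation protocols can be sketched as follows: every pair of clients shares a random mask that one adds and the other subtracts, so the server only sees masked vectors yet recovers the exact sum. The real protocol additionally handles dropouts and derives masks via key agreement; this toy version omits all of that.

```python
# Toy pairwise-masking demo: individual vectors are hidden by masks, but the
# masks cancel in the sum, so the server can still compute the total.
import numpy as np

rng = np.random.default_rng(4)
n, dim = 4, 6
x = rng.integers(0, 10, size=(n, dim))      # private client vectors
masked = x.astype(float)
for i in range(n):
    for j in range(i + 1, n):
        m = rng.normal(size=dim)             # mask shared by clients i and j
        masked[i] += m
        masked[j] -= m
print(masked.sum(axis=0) - x.sum(axis=0))    # ~0: masks cancel in the sum
```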
Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent
TLDR
Krum is proposed, an aggregation rule that satisfies a resilience property capturing the basic requirements to guarantee convergence despite f Byzantine workers, and it is argued to be the first provably Byzantine-resilient algorithm for distributed SGD.
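A small sketch of the Krum selection rule as summarized above: each update is scored by the sum of squared distances to its n - f - 2 nearest peers, and the update with the lowest score is returned. The parameters and data below are illustrative.

```python
# Illustrative Krum: pick the update closest (in aggregate) to its nearest
# n - f - 2 neighbours, which tends to exclude outlying Byzantine updates.
import numpy as np

def krum(updates, f):
    g = np.asarray(updates)
    n = len(g)
    d2 = ((g[:, None, :] - g[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    scores = []
    for i in range(n):
        others = np.delete(d2[i], i)
        scores.append(np.sort(others)[: n - f - 2].sum())
    return g[int(np.argmin(scores))]

rng = np.random.default_rng(5)
benign = rng.normal(size=(8, 10))
byzantine = 100 + rng.normal(size=(2, 10))   # far-away malicious updates
print(krum(np.vstack([benign, byzantine]), f=2)[:3])
```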