Gradient-Leakage Resilient Federated Learning

@article{Wei2021GradientLeakageRF,
  title={Gradient-Leakage Resilient Federated Learning},
  author={Wenqi Wei and Ling Liu and Yanzhao Wu and Gong Su and Arun Iyengar},
  journal={2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)},
  year={2021},
  pages={797-807}
}
  • Wenqi Wei, Ling Liu, Yanzhao Wu, Gong Su, Arun Iyengar
  • Published 1 July 2021
  • Computer Science
  • 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)
Federated learning (FL) is an emerging distributed learning paradigm with default client privacy because clients can keep sensitive data on their devices and only share local training parameter updates with the federated server. However, recent studies reveal that gradient leakage in FL may compromise the privacy of client training data. This paper presents a gradient-leakage-resilient approach to privacy-preserving federated learning with per-training-example client differential privacy… 
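The abstract describes per-training-example client differential privacy applied to local updates before they are shared. Below is a minimal, DP-SGD-style sketch of per-example gradient clipping plus Gaussian noising; it is an illustration under assumed parameter names (clip_norm, noise_multiplier), not the paper's exact algorithm or noise calibration.

```python
import numpy as np

def privatize_batch_gradients(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                              rng=np.random.default_rng(0)):
    """Clip each example's gradient to clip_norm, average, and add Gaussian noise.

    per_example_grads: array of shape (batch_size, num_params).
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale down any gradient whose L2 norm exceeds the clipping bound.
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    avg = clipped.mean(axis=0)
    # Gaussian noise calibrated to the clipping bound (sensitivity) and batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return avg + noise

# Example: 8 training examples, 10 model parameters.
grads = np.random.default_rng(1).normal(size=(8, 10))
update = privatize_batch_gradients(grads)
```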

Citations

Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage

This work validates that private training data can still be leaked under certain defense settings with a new type of leakage, i.e., Generative Gradient Leakage (GGL), which leverages the latent space of generative adversarial networks learned from public image datasets as a prior to compensate for the informational loss during gradient degradation.

Defense against Privacy Leakage in Federated Learning

This paper presents a straightforward yet effective defense strategy based on obfuscating the gradients of sensitive data with those of concealing data using a gradient projection technique, which offers the highest level of protection while preserving FL performance.

Gradient Obfuscation Gives a False Sense of Security in Federated Learning

It is shown how commonly adopted gradient postprocessing procedures, such as gradient quantization, gradient sparsification, and gradient perturbation, may give a false sense of security in federated learning.
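For reference, the three postprocessing procedures named above can be written as simple transformations of a flattened gradient vector. This is an illustrative sketch of generic uniform quantization, top-k sparsification, and Gaussian perturbation, not the specific schemes evaluated in that paper.

```python
import numpy as np

def quantize(grad, num_bits=8):
    """Uniformly quantize gradient values to 2**num_bits levels."""
    lo, hi = grad.min(), grad.max()
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((grad - lo) / scale) * scale + lo

def sparsify(grad, keep_ratio=0.1):
    """Keep only the top-k entries by magnitude, zero out the rest."""
    k = max(1, int(keep_ratio * grad.size))
    thresh = np.sort(np.abs(grad))[-k]
    return np.where(np.abs(grad) >= thresh, grad, 0.0)

def perturb(grad, sigma=0.01, rng=np.random.default_rng(0)):
    """Add zero-mean Gaussian noise to every entry."""
    return grad + rng.normal(0.0, sigma, size=grad.shape)
```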

Security and Privacy Threats to Federated Learning: Issues, Methods, and Challenges

A unique classification of attacks for federated learning is constructed from the perspective of malicious threats based on different computing parties, highlighting the Deep Gradients Leakage attacks and Generative Adversarial Networks attacks.

Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey

This survey aims to provide a systematic and comprehensive review of security and privacy researches in collaborative learning and provides the system overview of collaborative learning, followed by a brief introduction of integrity and privacy threats.

Assessing Anonymous and Selfish Free-rider Attacks in Federated Learning

This paper explores and defines two free-rider attack scenarios, anonymous and selfish free-rider attacks, and proposes two methods, namely a novel and an advanced method, to construct these two attacks.

Bayesian Framework for Gradient Leakage

This work proposes a theoretical framework that enables, for the first time, analysis of the Bayes optimal adversary phrased as an optimization problem and demonstrates that existing leakage attacks can be seen as approximations of this optimal adversary with different assumptions on the probability distributions of the input data and gradients.
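Phrased as an optimization problem, the Bayes optimal adversary described above reconstructs the input that maximizes the posterior of the data given the observed gradient. The formulation below is a paraphrase of that objective, not a verbatim statement from the paper:

```latex
% Bayes optimal reconstruction of a training input x from an observed gradient g:
\[
  x^{*} \;=\; \arg\max_{x} \; p(x \mid g)
        \;=\; \arg\max_{x} \; p(g \mid x)\, p(x)
\]
% Existing leakage attacks approximate this adversary by assuming specific forms
% for the gradient likelihood p(g | x) and the input prior p(x).
```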

A Survey on Gradient Inversion: Attacks, Defenses and Future Directions

A comprehensive survey on GradInv is presented, aiming to summarize the cutting-edge research and broaden the horizons for different domains, and proposes a taxonomy of GradInv attacks by characterizing existing attacks into two paradigms: iteration- and recursion-based attacks.

Mixed Precision Quantization to Tackle Gradient Leakage Attacks in Federated Learning

This paper proposes a mixed-precision quantized FL scheme that improves robustness by quantizing different layers of the deep model with different precision and quantization modes, and finds only a minimal accuracy drop in the global model after applying quantization.
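As a rough illustration of mixed-precision quantization, different layers of the model update can be quantized at different bit widths. The layer names and bit assignments below are hypothetical, not the paper's configuration.

```python
import numpy as np

def quantize_layer(weights, num_bits):
    """Uniform quantization of one layer's update to 2**num_bits levels."""
    lo, hi = weights.min(), weights.max()
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((weights - lo) / scale) * scale + lo

# Hypothetical per-layer precision assignment.
layer_bits = {"conv1": 4, "conv2": 6, "fc": 8}
update = {name: np.random.default_rng(7).normal(size=32) for name in layer_bits}
quantized_update = {name: quantize_layer(w, layer_bits[name]) for name, w in update.items()}
```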

References

SHOWING 1-10 OF 41 REFERENCES

A Framework for Evaluating Client Privacy Leakages in Federated Learning

This paper provides formal and experimental analysis to show how adversaries can reconstruct private local training data by simply analyzing the shared parameter updates from local training, and it measures, evaluates, and analyzes the effectiveness of client privacy leakage attacks under different gradient compression ratios when using communication-efficient FL protocols.
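The attack evaluated by this framework reconstructs training data by matching gradients. The toy PyTorch sketch below shows the generic gradient-matching recipe on a single linear layer; the model, soft-label parameterization, and optimizer settings are illustrative assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 4)          # stand-in for the shared model
x_true = torch.randn(1, 16)             # the client's private example
y_true = torch.tensor([2])
loss_true = F.cross_entropy(model(x_true), y_true)
g_true = torch.autograd.grad(loss_true, model.parameters())  # the "leaked" update

# The attacker optimizes a dummy example so its gradient matches g_true.
x_dummy = torch.randn(1, 16, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)               # soft-label logits
opt = torch.optim.LBFGS([x_dummy, y_dummy], max_iter=200)

def closure():
    opt.zero_grad()
    log_probs = F.log_softmax(model(x_dummy), dim=-1)
    loss = -(F.softmax(y_dummy, dim=-1) * log_probs).sum()
    g_dummy = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    match = sum(((gd - gt) ** 2).sum() for gd, gt in zip(g_dummy, g_true))
    match.backward()
    return match

opt.step(closure)
print(torch.norm(x_dummy.detach() - x_true))  # small distance indicates leakage
```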

Differentially Private Federated Learning: A Client Level Perspective

The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance, and empirical studies suggest that given a sufficiently large number of participating clients, this procedure can maintain client-level differential privacy at only a minor cost in model performance.
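A minimal sketch of client-level noising, assuming the server clips each client's whole update to a fixed bound and adds Gaussian noise to the average; the variable names and noise scale are illustrative, not the paper's calibration.

```python
import numpy as np

def dp_aggregate(client_updates, clip_bound=1.0, noise_multiplier=1.0,
                 rng=np.random.default_rng(0)):
    """Clip each client's whole update, average, and add Gaussian noise (client-level DP)."""
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_bound / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_bound / len(client_updates),
                       size=avg.shape)
    return avg + noise

updates = [np.random.default_rng(i).normal(size=100) for i in range(20)]
global_step = dp_aggregate(updates)
```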

Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning

This paper gives the first attempt to explore user-level privacy leakage against federated learning via an attack from a malicious server, using a framework that incorporates a GAN with a multi-task discriminator, which simultaneously discriminates the category, reality, and client identity of input samples.

Inverting Gradients - How easy is it to break privacy in federated learning?

It is shown that it is actually possible to faithfully reconstruct images at high resolution from knowledge of their parameter gradients, and it is demonstrated that such a break of privacy is possible even for trained deep networks.

Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks

It is shown that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset.
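For orientation only, the simplest membership-inference baseline thresholds the model's loss on a candidate record; the paper's white-box attacks go further and exploit gradients and intermediate activations, which this sketch does not.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict 'member' when the model's loss on a record falls below the threshold."""
    return losses < threshold

member_losses = np.array([0.05, 0.10, 0.30])      # records seen in training
nonmember_losses = np.array([0.90, 1.20, 0.70])   # held-out records
preds = loss_threshold_mia(np.concatenate([member_losses, nonmember_losses]), 0.5)
```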

Practical Secure Aggregation for Privacy-Preserving Machine Learning

This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
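The core idea of the protocol is that clients add pairwise random masks that cancel in the sum, so the server learns only the aggregate. The sketch below shows only this cancellation trick and omits the key agreement, secret sharing, and dropout handling that the actual protocol needs for security.

```python
import numpy as np

def masked_updates(updates, pair_seeds):
    """Each client i adds +PRG(seed_ij) for j > i and subtracts it for j < i."""
    n, dim = updates.shape
    masked = updates.astype(float)
    for i in range(n):
        for j in range(i + 1, n):
            mask = np.random.default_rng(pair_seeds[(i, j)]).normal(size=dim)
            masked[i] += mask   # client i adds the shared mask
            masked[j] -= mask   # client j subtracts it, so the pair cancels
    return masked

updates = np.arange(12, dtype=float).reshape(3, 4)          # 3 clients, 4 parameters
seeds = {(i, j): 100 * i + j for i in range(3) for j in range(i + 1, 3)}
masked = masked_updates(updates, seeds)
assert np.allclose(masked.sum(axis=0), updates.sum(axis=0))  # server recovers only the sum
```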

Exploiting Unintended Feature Leakage in Collaborative Learning

This work shows that an adversarial participant can infer the presence of exact data points -- for example, specific locations -- in others' training data and develops passive and active inference attacks to exploit this leakage.

Privacy-Preserving Deep Learning via Additively Homomorphic Encryption

This work revisits the previous work by Shokri and Shmatikov (ACM CCS 2015) and builds an enhanced system with the following properties: no information is leaked to the server, and accuracy is kept intact compared with that of an ordinary deep learning system trained over the combined dataset.

Federated Learning: Strategies for Improving Communication Efficiency

Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
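As a rough sketch of the "sketched updates" idea, a client can randomly subsample and then uniformly quantize its full update before upload; the rates below are arbitrary and the random-rotation step is omitted.

```python
import numpy as np

def sketch_update(update, keep_ratio=0.25, num_bits=4, rng=np.random.default_rng(0)):
    """Randomly subsample entries of the update, then uniformly quantize the kept values."""
    mask = rng.random(update.shape) < keep_ratio          # random subsampling mask
    out = np.zeros_like(update)
    vals = update[mask]
    if vals.size:
        lo, hi = vals.min(), vals.max()
        levels = 2 ** num_bits - 1
        scale = (hi - lo) / levels if hi > lo else 1.0
        out[mask] = np.round((vals - lo) / scale) * scale + lo
    return out

update = np.random.default_rng(1).normal(size=1000)
compressed = sketch_update(update)
```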

Scalable Private Learning with PATE

This work shows how PATE can scale to learning tasks with large numbers of output classes and uncurated, imbalanced training data with errors, introduces new noisy aggregation mechanisms for teacher ensembles that are more selective and add less noise, and proves their tighter differential-privacy guarantees.
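The noisy aggregation at the core of PATE can be sketched as adding noise to the teachers' vote counts before taking the argmax. The Gaussian scale below is arbitrary, and the paper's more selective mechanism adds a confidence check that this sketch omits.

```python
import numpy as np

def noisy_max_aggregate(teacher_votes, num_classes, sigma=40.0,
                        rng=np.random.default_rng(0)):
    """Return the label with the highest noisy vote count across the teacher ensemble."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.normal(0.0, sigma, size=num_classes)   # Gaussian noise on vote counts
    return int(np.argmax(counts))

votes = np.array([3] * 150 + [7] * 40 + [1] * 60)   # predicted labels from 250 teachers
label = noisy_max_aggregate(votes, num_classes=10)
```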