User-Level Label Leakage from Gradients in Federated Learning

@article{Wainakh2021UserLevelLL,
  title={User-Level Label Leakage from Gradients in Federated Learning},
  author={Aidmar Wainakh and Fabrizio G. Ventola and Till M{\"u}{\ss}ig and Jens Keim and Carlos Garcia Cordero and Ephraim Zimmer and Tim Grube and Kristian Kersting and M. M{\"u}hlh{\"a}user},
  journal={Proceedings on Privacy Enhancing Technologies},
  year={2021},
  volume={2022},
  pages={227--244}
}
Abstract

Federated learning enables multiple users to build a joint model by sharing their model updates (gradients), while their raw data remains local on their devices. In contrast to the common belief that this provides privacy benefits, we here add to the very recent results on privacy risks when sharing gradients. Specifically, we investigate Label Leakage from Gradients (LLG), a novel attack to extract the labels of the users’ training data from their shared gradients. The attack exploits…
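The abstract is truncated, but the signal LLG reads from a shared update is the direction and magnitude of the gradients of the last layer's weights. As a rough illustration (our sketch, not the authors' code), the PyTorch snippet below shows the sign part of that signal: with a non-negative (ReLU) activation feeding the final layer and an untrained model, the gradient rows whose entries sum to a negative value tend to correspond to labels present in the client's batch. The toy model, batch size, and the negative-row-sum heuristic are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

num_classes, batch_size = 10, 4
# Toy client model: non-negative (ReLU) features feeding the classification layer.
model = nn.Sequential(nn.Linear(100, 128), nn.ReLU(),
                      nn.Linear(128, num_classes))

x = torch.randn(batch_size, 100)
y = torch.randint(0, num_classes, (batch_size,))   # the client's private labels

# The "shared gradient" an honest-but-curious server would observe.
loss = nn.CrossEntropyLoss()(model(x), y)
grads = torch.autograd.grad(loss, model.parameters())

# Gradient of the last Linear layer's weight matrix: shape (num_classes, 128).
last_weight_grad = grads[-2]

# Heuristic: rows whose entries sum to a negative value point to labels
# that occur in the batch (most reliable for untrained models).
row_sums = last_weight_grad.sum(dim=1)
inferred = (row_sums < 0).nonzero(as_tuple=True)[0]

print("labels in the batch:", sorted(set(y.tolist())))
print("inferred labels    :", sorted(inferred.tolist()))
```

LLG additionally uses the gradient magnitudes to estimate how many times each label occurs; the snippet only demonstrates the presence/absence signal.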

Citations

Inferring Class-Label Distribution in Federated Learning

This paper studies the problem of class-label distribution inference from an adversarial perspective, based on model parameter updates sent to the parameter server, and introduces four new methods to estimate class-label distribution in the general FL setting.

MpFedcon: Model-Contrastive Personalized Federated Learning with the Class Center

This paper proposes a new personalized federated learning method named MpFedcon, which addresses the data heterogeneity and privacy leakage problems from global and local perspectives and effectively resists the label leakage problem.

Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups

This analysis is the first of its kind to reveal several research gaps regarding the types and architectures of target models and to identify fallacies in the evaluation of attacks that raise questions about the generalizability of the conclusions.

Using Highly Compressed Gradients in Federated Learning for Data Reconstruction Attacks

This study proposes an effective data reconstruction attack against highly compressed gradients, called highly compressed gradient leakage attack (HCGLA), and designs a novel dummy data initialization method, Init-Generation, to compensate for information loss caused by gradient compression.

OLIVE: Oblivious and Differentially Private Federated Learning on Trusted Execution Environment

Olive is proposed, a system that combines the merits of CDP-FL and LDP-FL by leveraging a Trusted Execution Environment (TEE) and works efficiently even when training a model with hundreds of thousands of parameters while ensuring full obliviousness, which brings secure FL closer to realization.

Privacy by Projection: Federated Population Density Estimation by Projecting on Random Features

A Federated KDE framework for estimating the user population density is proposed, which not only keeps location data on the devices but also provides probabilistic privacy guarantees against a malicious server that tries to infer users' location.

Privacy-Preserving Federated Recurrent Neural Networks

RHODE is the first system that provides the building blocks for training RNNs and their variants under encryption in a federated learning setting, and it proposes a novel packing scheme, multi-dimensional packing, for better utilization of Single Instruction, Multiple Data (SIMD) operations under encryption.

References

Showing 1-10 of 45 references

Layer-wise Characterization of Latent Information Leakage in Federated Learning

Two new metrics are proposed that can localize the private information in each layer of a DNN and quantify the leakage depending on how sensitive the gradients are with respect to the latent information, and LatenTZ is designed: a federated learning framework that lets the most sensitive layers run in the clients' Trusted Execution Environments (TEEs).

Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning

This paper makes the first attempt to explore user-level privacy leakage in federated learning through an attack from a malicious server, using a framework that incorporates a GAN with a multi-task discriminator which simultaneously discriminates the category, reality, and client identity of input samples.

iDLG: Improved Deep Leakage from Gradients

This paper finds that sharing gradients definitely leaks the ground-truth labels and proposes a simple but reliable approach to extract accurate data from the gradients, which is valid for any differentiable model trained with cross-entropy loss over one-hot labels and is named Improved DLG (iDLG).
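For a batch of a single sample, the leakage iDLG describes follows from a one-line derivation (our notation): with last-layer logits z = Wh + b, softmax output p, and ground-truth label y, the cross-entropy loss satisfies

```latex
\frac{\partial \mathcal{L}}{\partial z_c} = p_c - \mathbb{1}[c = y],
\qquad
\nabla_{W_{c,:}} \mathcal{L} = \bigl(p_c - \mathbb{1}[c = y]\bigr)\, h^{\top}.
```

Only the row for the ground-truth class is a negative multiple of h (since p_y < 1 and p_c > 0 for c ≠ y), so its sign, or equivalently its negative inner product with the other rows, identifies the label exactly.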

A Framework for Evaluating Client Privacy Leakages in Federated Learning

This paper provides formal and experimental analysis to show how adversaries can reconstruct private local training data by simply analyzing the shared parameter updates from local training, and it measures, evaluates, and analyzes the effectiveness of client privacy leakage attacks under different gradient compression ratios when using communication-efficient FL protocols.

R-GAP: Recursive Gradient Attack on Privacy

This research provides a closed-form recursive procedure to recover data from gradients in deep neural networks and proposes a rank analysis method, which can be used to estimate a network architecture's risk of a gradient attack.

Inverting Gradients - How easy is it to break privacy in federated learning?

It is shown that it is actually possible to faithfully reconstruct images at high resolution from the knowledge of their parameter gradients, and it is demonstrated that such a break of privacy is possible even for trained deep networks.
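As a minimal sketch of the gradient-inversion idea behind this line of work (our toy setup, not the paper's cosine-similarity attack on image CNNs), an attacker who observes a client's gradient can optimize a dummy input until its gradient matches the observed one:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 5))
loss_fn = nn.CrossEntropyLoss()

# Client side: gradient on one private sample (this is all the attacker sees).
x_true = torch.randn(1, 20)
y_true = torch.tensor([3])
true_grads = [g.detach() for g in
              torch.autograd.grad(loss_fn(model(x_true), y_true),
                                  model.parameters())]

# Attacker side: optimize a dummy input until its gradient matches.
x_dummy = torch.randn(1, 20, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)

for step in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                      model.parameters(), create_graph=True)
    # Gradient-matching objective: squared distance between gradients.
    match = sum(((dg - tg) ** 2).sum()
                for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    opt.step()

print("reconstruction error:", (x_dummy.detach() - x_true).norm().item())
```

The label is taken as known here, which the iDLG-style argument above justifies for single samples; the paper itself uses a cosine-distance objective and image regularizers on much larger models.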

Fidel: Reconstructing Private Training Samples from Weight Updates in Federated Learning

It is shown how to recover, on average, twenty out of thirty private data samples from a client’s model update for a fully connected neural network with very little computational effort, and that over thirteen out of twenty samples can be recovered from a convolutional neural network update.

Label Leakage from Gradients in Distributed Machine Learning

This paper proposes LLG, an algorithm to disclose the labels of the users' training data from their shared gradients, and conducts an empirical analysis on two datasets to demonstrate the validity of the algorithm.

Label Leakage and Protection in Two-party Split Learning

This work formulates a realistic threat model and proposes a privacy loss metric to quantify label leakage in split learning, and shows that there exist two simple yet effective methods within the threat model that allow one party to accurately recover private ground-truth labels owned by the other party.
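The entry does not spell out the two methods; one scoring rule consistent with this threat model is a norm-based attack, sketched below under our own simplifying assumptions (a linear label-party head over the cut-layer activations and a heavily imbalanced binary task). The label party sends back the gradient (p - y) * w for each example's cut-layer activation, so its norm |p - y| * ||w|| is much larger for the rare positives than for the negatives.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, pos_rate = 5000, 16, 0.05

y = (rng.random(n) < pos_rate).astype(float)      # private labels (imbalanced)
a = rng.normal(size=(n, dim))                     # cut-layer activations
w = rng.normal(size=dim) * 0.05                   # label party's head weights
b = np.log(pos_rate / (1 - pos_rate))             # weak but calibrated bias

p = 1.0 / (1.0 + np.exp(-(a @ w + b)))            # predicted probabilities
grad_norms = np.abs(p - y) * np.linalg.norm(w)    # norm of the returned gradient

# AUC of ranking examples by gradient norm: probability that a random positive
# outranks a random negative. Values near 1.0 mean the labels leak.
pos, neg = grad_norms[y == 1], grad_norms[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(f"label-leak AUC from gradient norms: {auc:.3f}")
```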

GAN Enhanced Membership Inference: A Passive Local Attack in Federated Learning

This paper points out a membership inference attack method that can cause a serious privacy leakage in federated learning and substantiates that this GAN-enhanced membership inference attack method has a 98% attack accuracy.