Dropout is NOT All You Need to Prevent Gradient Leakage

@article{Scheliga2022DropoutIN,
  title={Dropout is NOT All You Need to Prevent Gradient Leakage},
  author={Daniel Scheliga and Patrick M{\"a}der and Marco Seeland},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.06163}
}
Gradient inversion attacks on federated learning systems reconstruct client training data from exchanged gradient information. To defend against such attacks, a variety of defense mechanisms have been proposed. However, they usually lead to an unacceptable trade-off between privacy and model utility. Recent observations suggest that dropout could mitigate gradient leakage and improve model utility if added to neural networks. Unfortunately, this phenomenon has not been systematically…
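
To make the attack class concrete, the following is a minimal DLG-style gradient inversion sketch in PyTorch: an attacker who observes the gradients a client shares optimizes dummy data until its gradients match the observed ones. The toy model, the known-label simplification, and all names (model, true_x, dummy_x) are illustrative assumptions, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy stand-in for a client model
criterion = nn.CrossEntropyLoss()

# The "client" computes gradients on its private example and shares them.
true_x = torch.rand(1, 1, 28, 28)
true_y = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(true_x), true_y), model.parameters())

# The "attacker" optimizes dummy data so its gradients match the shared ones
# (the label is assumed known here, a common simplification in follow-up attacks).
dummy_x = torch.rand_like(true_x, requires_grad=True)
opt = torch.optim.Adam([dummy_x], lr=0.1)

for step in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        criterion(model(dummy_x), true_y), model.parameters(), create_graph=True
    )
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    opt.step()

# After optimization, dummy_x approximates the private input true_x.
```

How well such a reconstruction works in practice depends on batch size, architecture, and the gradient-matching objective, which is exactly the space the attacks and defenses listed below explore.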

References


An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.

A Framework for Evaluating Gradient Leakage Attacks in Federated Learning

This paper provides formal and experimental analysis showing how adversaries can reconstruct private local training data simply by analyzing the parameter updates shared during local training. It also measures, evaluates, and analyzes the effectiveness of client privacy leakage attacks under different gradient compression ratios when communication-efficient FL protocols are used.

Inverting Gradients - How easy is it to break privacy in federated learning?

It is shown that it is actually possible to faithfully reconstruct images at high resolution from knowledge of their parameter gradients, and that such a break of privacy is possible even for trained deep networks.
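
The distinctive ingredient of this attack is its gradient-matching objective: rather than a squared-error match, it maximizes the cosine similarity between the dummy gradients and the observed ones. A hedged sketch of such a loss (the function name and epsilon constant are assumptions; the paper's additional image priors such as total variation are omitted):

```python
import torch

def cosine_gradient_loss(dummy_grads, true_grads):
    """1 - cosine similarity between two sets of per-parameter gradients."""
    dot = sum((dg * tg).sum() for dg, tg in zip(dummy_grads, true_grads))
    dummy_norm = sum(dg.pow(2).sum() for dg in dummy_grads).sqrt()
    true_norm = sum(tg.pow(2).sum() for tg in true_grads).sqrt()
    # Minimizing this aligns the direction of the dummy gradients with the observed ones.
    return 1.0 - dot / (dummy_norm * true_norm + 1e-12)
```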

The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

A new dataset of human perceptual similarity judgments is introduced, and it is found that deep features outperform all previous metrics by large margins on this dataset, suggesting that perceptual similarity is an emergent property shared across deep visual representations.

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
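
For orientation, the "adaptive estimates of lower-order moments" are exponential moving averages of the gradient and of its square, with bias correction for their zero initialization. A minimal single-parameter sketch (scalar version, illustrative names only):

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter (illustrative only)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction (t is the step count, from 1)
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v
```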

APRIL: Finding the Achilles' Heel on Privacy for Vision Transformers (2021)

A Survey on Gradient Inversion: Attacks, Defenses and Future Directions

A comprehensive survey on gradient inversion (GradInv) is presented, aiming to summarize the cutting-edge research and broaden the horizons for different domains, and a taxonomy of GradInv attacks is proposed that characterizes existing attacks into two paradigms: iteration-based and recursion-based attacks.

Recovering Private Text in Federated Learning of Language Models

This paper presents FILM, a novel attack method for federated learning of language models (LMs), shows the feasibility of recovering text from large batches of up to 128 sentences, and evaluates three defense methods: gradient pruning, DPSGD, and a simple proposed approach that freezes word embeddings.
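
Of the defenses evaluated there, gradient pruning is the simplest to illustrate: small-magnitude gradient entries are zeroed before the update is shared. The sketch below is an assumption-laden illustration, not the paper's implementation; the function name, the prune_ratio parameter, and the per-tensor thresholding are all my own choices.

```python
import torch

def prune_gradients(grads, prune_ratio=0.9):
    """Zero out the smallest-magnitude entries of each gradient tensor before sharing."""
    pruned = []
    for g in grads:
        k = int(g.numel() * prune_ratio)  # number of entries to drop in this tensor
        if k == 0:
            pruned.append(g.clone())
            continue
        threshold = g.abs().flatten().kthvalue(k).values
        pruned.append(torch.where(g.abs() <= threshold, torch.zeros_like(g), g))
    return pruned
```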

Catastrophic Data Leakage in Vertical Federated Learning

Dropout against Deep Leakage from Gradients

This paper proposes adding a dropout (Srivastava et al. [2014]) layer before feeding the data to the classifier, which is very effective in preventing leakage of raw data, as the reconstructed training data cannot converge to a small RMSE even after 5,800 epochs with the dropout rate set to 0.5.
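
A hedged sketch of this defense, assuming a small PyTorch image classifier: only the dropout layer placed in front of the classifier and the rate of 0.5 come from the reference, while the architecture and class name are illustrative.

```python
import torch.nn as nn

class DropoutDefendedNet(nn.Module):  # illustrative name, not from the paper
    def __init__(self, p=0.5, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.dropout = nn.Dropout(p)            # random mask changes every forward pass
        self.classifier = nn.Linear(16 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.dropout(self.features(x)))
```

Because the dropout mask is resampled on every forward pass, the shared gradients no longer correspond to a single deterministic computation, which is what makes naive inversion harder; the main paper above argues that this alone does not reliably prevent gradient leakage.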