A Survey on Gradient Inversion: Attacks, Defenses and Future Directions

@inproceedings{Zhang2022ASO,
  title={A Survey on Gradient Inversion: Attacks, Defenses and Future Directions},
  author={Rui Zhang and Song Guo and Junxiao Wang and Xin Xie and Dacheng Tao},
  booktitle={IJCAI},
  year={2022}
}
Recent studies have shown that training samples can be recovered from shared gradients; such attacks are known as Gradient Inversion (GradInv) attacks. However, there remains a lack of extensive surveys covering recent advances and offering a thorough analysis of this issue. In this paper, we present a comprehensive survey on GradInv, aiming to summarize the cutting-edge research and broaden the horizons for different domains. First, we propose a taxonomy of GradInv attacks by characterizing existing attacks into…


References

Showing 1-10 of 59 references.
Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
Evaluates existing gradient inversion attacks and the defenses against them, and finds that state-of-the-art attacks can currently be defended against with only minor loss of data utility.
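
Two of the simple gradient-level defenses typically considered in this setting are magnitude pruning and additive noise. The sketch below illustrates both in a hedged form; the function names, keep ratio, and noise scale are illustrative assumptions, not the paper's evaluated configurations.

# Hedged sketch of two simple gradient-level defenses: magnitude pruning and
# additive Gaussian noise. Names and default values are illustrative.
import torch

def prune_gradient(grad: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    """Keep only the largest-magnitude entries; zero out the rest."""
    k = max(1, int(grad.numel() * keep_ratio))
    threshold = grad.abs().flatten().topk(k).values.min()
    return torch.where(grad.abs() >= threshold, grad, torch.zeros_like(grad))

def perturb_gradient(grad: torch.Tensor, sigma: float = 1e-3) -> torch.Tensor:
    """Add Gaussian noise before the gradient is shared."""
    return grad + sigma * torch.randn_like(grad)

g = torch.randn(256, 128)
g_defended = perturb_gradient(prune_gradient(g, keep_ratio=0.05))
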
Deep Leakage from Gradients
Shows that private training data can be recovered from publicly shared gradients, names this leakage Deep Leakage from Gradients, and empirically validates its effectiveness on both computer vision and natural language processing tasks.
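
The core of DLG is to optimize a dummy input and label so that the gradient they produce matches the shared one. Below is a minimal sketch of that idea, assuming a single example, full knowledge of the model, and an L2 gradient-matching loss; the toy model, the Adam optimizer, and the hyperparameters are illustrative assumptions (the paper itself uses L-BFGS).

# Minimal sketch of a DLG-style gradient-matching attack on a toy model.
# The attacker observes `true_grads` and the model, never the private example.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(32 * 32 * 3, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# "Victim" gradient, computed on a private example the attacker never sees.
x_true = torch.rand(1, 3, 32, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(F.cross_entropy(model(x_true), y_true),
                                 model.parameters())

# Attacker: fit a dummy input and soft label to reproduce that gradient.
x_dummy = torch.rand(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    dummy_loss = torch.sum(-F.softmax(y_dummy, dim=-1)
                           * F.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()   # gradients flow to x_dummy and y_dummy
    opt.step()
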
R-GAP: Recursive Gradient Attack on Privacy
Provides a closed-form recursive procedure to recover data from gradients in deep neural networks and proposes a rank-analysis method that can be used to estimate a network architecture's risk of a gradient attack.
See through Gradients: Image Batch Recovery via GradInversion
Shows that gradients encode a surprisingly large amount of information, such that all the individual images in a batch can be recovered with high fidelity via GradInversion, even for complex datasets, deep networks, and large batch sizes.
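
Beyond matching gradients, batch-recovery attacks of this kind add image-fidelity priors to the objective. The snippet below sketches one such prior, total variation over the dummy batch; the way it is weighted and combined with the gradient-matching loss is an illustrative assumption, and it omits the paper's batch-normalization and group-consistency terms.

# Hedged sketch of a total-variation image prior, a fidelity term that
# batch-recovery attacks add on top of the gradient-matching loss.
import torch

def total_variation(x: torch.Tensor) -> torch.Tensor:
    """x: dummy image batch of shape (N, C, H, W)."""
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw

# Combined objective (illustrative): grad_matching_loss + alpha_tv * total_variation(x_dummy)
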
APRIL: Finding the Achilles' Heel on Privacy for Vision Transformers
Analyzes the gradient leakage risk of self-attention mechanisms in both theoretical and practical settings, and proposes APRIL (Attention PRIvacy Leakage), which poses a strong threat to self-attention-based models such as ViT.
Revealing and Protecting Labels in Distributed Training
Proposes a method to discover the set of labels of the training samples from only the gradient of the last layer and the ID-to-label mapping, and demonstrates its effectiveness for model training in two domains: image classification and automatic speech recognition.
iDLG: Improved Deep Leakage from Gradients
Finds that sharing gradients leaks the ground-truth labels and proposes a simple but reliable approach, named Improved DLG (iDLG), to extract accurate data from gradients; the approach is valid for any differentiable model trained with a cross-entropy loss over one-hot labels.
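
The observation behind the label extraction is that, for a single example and a cross-entropy loss over one-hot labels, each row of the last layer's weight gradient is the penultimate feature scaled by (p_i - y_i), so only the ground-truth row carries a negative coefficient. A minimal sketch of that extraction rule follows; the toy dimensions are illustrative.

# Hedged sketch of iDLG-style label extraction from the last-layer weight
# gradient (assumes batch size 1 and cross-entropy over one-hot labels).
import torch
import torch.nn.functional as F

last_layer = torch.nn.Linear(128, 10, bias=False)
features = torch.randn(1, 128)           # stand-in for the penultimate activation
y_true = torch.tensor([7])

loss = F.cross_entropy(last_layer(features), y_true)
grad_W = torch.autograd.grad(loss, last_layer.weight)[0]   # shape (10, 128)

# Row i of grad_W equals (p_i - y_i) * h, so only the true class's row has a
# negative coefficient and hence a negative inner product with all other rows.
gram = grad_W @ grad_W.t()
recovered = torch.argmax((gram < 0).sum(dim=1)).item()
assert recovered == y_true.item()
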
Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
Develops an inference-time principle named mixup inference (MI), inspired by simple geometric intuition, which further improves the adversarial robustness of models trained with mixup and its variants.
PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization
Proposes a new low-rank gradient compressor based on power iteration that compresses gradients rapidly, aggregates the compressed gradients efficiently using all-reduce, and achieves test performance on par with SGD.
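
The compressor approximates each (reshaped) gradient matrix by a low-rank product obtained from a single power-iteration step, so only two thin factors need to be all-reduced. The sketch below shows that factorization in isolation; it omits PowerSGD's error feedback and warm-started factors, and the dimensions and rank are illustrative.

# Hedged sketch of rank-r gradient compression via one power-iteration step.
import torch

def compress_decompress(grad: torch.Tensor, rank: int = 4) -> torch.Tensor:
    m, n = grad.shape
    q = torch.randn(n, rank)          # random test matrix (warm-started in practice)
    p = grad @ q                      # (m, rank); would be all-reduced across workers
    p, _ = torch.linalg.qr(p)         # orthonormalize the column-space estimate
    q = grad.t() @ p                  # (n, rank); would be all-reduced across workers
    return p @ q.t()                  # decompressed low-rank approximation

g = torch.randn(256, 128)
g_hat = compress_decompress(g, rank=4)
print(torch.linalg.matrix_rank(g_hat))   # at most 4
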
SAPAG: A Self-Adaptive Privacy Attack From Gradients
Proposes a more general privacy attack from gradients, SAPAG, which uses a Gaussian-kernel-based measure of the gradient difference as its distance function, and demonstrates that SAPAG can reconstruct training data on DNNs with different weight initializations and at any training phase.
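
One plausible form of a Gaussian-kernel distance over the gradient difference is sketched below purely for illustration; the exact kernel and its self-adaptive bandwidth are described in the paper and are not reproduced here.

# Hedged sketch of a Gaussian-kernel distance between dummy and observed
# gradients, of the general kind SAPAG builds its objective on.  The functional
# form and the fixed bandwidth `sigma` are illustrative assumptions.
import torch

def gaussian_kernel_distance(dummy_grads, true_grads, sigma: float = 1.0):
    sq = sum(((a - b) ** 2).sum() for a, b in zip(dummy_grads, true_grads))
    return 1.0 - torch.exp(-sq / (2.0 * sigma ** 2))
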