PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage
@article{Scheliga2021PRECODEA, title={PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage}, author={Daniel Scheliga and Patrick M{\"a}der and Marco Seeland}, journal={2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)}, year={2021}, pages={3605-3614} }
Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients. Although training data entirely resides with the clients, recent work shows that training data can be reconstructed from such exchanged gradient information. To enhance privacy, gradient perturbation techniques have been proposed. However, they come at the cost of reduced model performance, increased convergence time, or increased data demand. In this paper, we…
11 Citations
Combining Variational Modeling with Partial Gradient Perturbation to Prevent Deep Gradient Leakage
- Computer Science, ArXiv
- 2022
It is shown that variational modeling induces stochasticity on PRECODE's and its subsequent layers' gradients that prevents gradient attacks from converging, and that the approach requires less gradient perturbation to effectively preserve privacy without harming model performance.
Dropout is NOT All You Need to Prevent Gradient Leakage
- Computer Science, ArXiv
- 2022
It is argued that privacy-inducing changes to model architectures alone cannot be assumed to reliably protect from gradient leakage and therefore should be combined with complementary defense mechanisms.
Recover User’s Private Training Image Data by Gradient in Federated Learning
- Computer Science, Sensors
- 2022
This study proposes a privacy attack system, the Single-Sample Reconstruction Attack System (SSRAS), which can carry out image reconstruction regardless of whether the label can be determined, and introduces a Rank Analysis Index (RA-I) to measure whether the user's raw image data can be reconstructed.
Directional Privacy for Deep Learning
- Computer Science, ArXiv
- 2022
Directional privacy is applied, via a mechanism based on the von Mises-Fisher (VMF) distribution, to perturb gradients in terms of angular distance so that gradient direction is broadly preserved.
Gradient Obfuscation Gives a False Sense of Security in Federated Learning
- Computer Science, ArXiv
- 2022
It is shown that commonly adopted gradient postprocessing procedures, such as gradient quantization, gradient sparsification, and gradient perturbation, may give a false sense of security in federated learning and argued that privacy enhancement should not be treated as a byproduct of gradient compression.
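The post-processing operations discussed here are standard techniques rather than anything specific to this paper. Below is a minimal sketch of gradient quantization, top-k sparsification, and noise perturbation; the function names, parameter values, and tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import torch

def quantize(grad: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Uniformly quantize a gradient tensor to 2**num_bits levels (illustrative)."""
    lo, hi = grad.min(), grad.max()
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else torch.tensor(1.0)
    return torch.round((grad - lo) / scale) * scale + lo

def sparsify_topk(grad: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    """Keep only the largest-magnitude entries, zero out the rest."""
    flat = grad.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(grad)

def perturb(grad: torch.Tensor, noise_std: float = 1e-3) -> torch.Tensor:
    """Add Gaussian noise to the gradient."""
    return grad + noise_std * torch.randn_like(grad)

# Example: apply all three post-processing steps to a synthetic gradient.
g = torch.randn(256, 128)
g_shared = perturb(sparsify_topk(quantize(g)))
```

The paper's point is that such post-processing, applied primarily for communication efficiency, should not by itself be relied upon as a privacy mechanism.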
Defense against Privacy Leakage in Federated Learning
- Computer Science, ArXiv
- 2022
This paper presents a straightforward yet effective defense strategy that obfuscates the gradients of sensitive data with those of concealing data via a gradient projection technique, which offers the highest level of protection while preserving FL performance.
Bayesian Framework for Gradient Leakage
- Computer Science, ICLR
- 2022
This work proposes a theoretical framework that enables, for the first time, analysis of the Bayes optimal adversary phrased as an optimization problem and demonstrates that existing leakage attacks can be seen as approximations of this optimal adversary with different assumptions on the probability distributions of the input data and gradients.
LAMP: Extracting Text from Gradients with Language Model Priors
- Computer Science, NeurIPS
- 2022
This work proposes LAMP, a novel attack tailored to textual data, that successfully reconstructs original text from gradients, and is the first to recover inputs from batch sizes larger than 1 for textual models.
Gradient Leakage Defense with Key-Lock Module for Federated Learning
- Computer Science
- 2023
The theoretical underpinnings of why gradients can leak private information are discussed, and a theoretical proof of the method's effectiveness is provided, demonstrating the robustness of the proposed approach in both maintaining model performance and defending against gradient leakage attacks.
Data Leakage in Federated Averaging
- Computer Science, Trans. Mach. Learn. Res.
- 2022
A new optimization-based attack on FedAvg is proposed: using automatic differentiation, it simulates the client's local update, including the unobserved intermediate parameters, and optimizes the recovered labels and inputs so that the simulated update matches the one received from the client.
36 References
SAPAG: A Self-Adaptive Privacy Attack From Gradients
- Computer Science, ArXiv
- 2020
This paper proposes a more general privacy attack from gradients, SAPAG, which uses a Gaussian kernel of the gradient difference as a distance measure, and demonstrates that SAPAG can reconstruct the training data on DNNs with different weight initializations and on DNNs in any training phase.
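A minimal sketch of a Gaussian-kernel-based gradient distance of the kind described, assuming the attacker compares the gradients of a dummy input against the observed gradients; the bandwidth sigma and the list-of-tensors layout are illustrative assumptions.

```python
import torch

def gaussian_kernel_distance(g_dummy, g_true, sigma: float = 1.0):
    """Distance between two lists of gradient tensors based on a Gaussian (RBF)
    kernel of the gradient difference; smaller values mean more similar gradients."""
    sq = sum(((gd - gt) ** 2).sum() for gd, gt in zip(g_dummy, g_true))
    return 1.0 - torch.exp(-sq / (2 * sigma ** 2))
```

Such a kernelized distance can replace the plain L2 term in a gradient-matching reconstruction objective.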
R-GAP: Recursive Gradient Attack on Privacy
- Computer Science, ICLR
- 2021
This research provides a closed-form recursive procedure to recover data from gradients in deep neural networks and proposes a rank analysis method, which can be used to estimate a network architecture's risk of a gradient attack.
Theory-Oriented Deep Leakage from Gradients via Linear Equation Solver
- Computer Science, ArXiv
- 2020
This paper proposes a novel data reconstruction attack on fully-connected neural networks, extends the attack to commercial convolutional neural network architectures, and proves that the existence of exclusively activated neurons is critical to the separability of the activation patterns of different samples.
Inverting Gradients - How easy is it to break privacy in federated learning?
- Computer Science, NeurIPS
- 2020
It is shown that it is actually possible to faithfully reconstruct images at high resolution from knowledge of their parameter gradients, and it is demonstrated that such a break of privacy is possible even for trained deep networks.
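The reconstruction objective in this line of work combines a cosine-similarity gradient-matching term with an image prior. A hedged sketch, with an illustrative total-variation prior and assumed tensor shapes, could look like this:

```python
import torch

def cosine_matching_loss(g_dummy, g_true):
    """1 - cosine similarity between the concatenated dummy and observed gradients."""
    dot, n_d, n_t = 0.0, 0.0, 0.0
    for gd, gt in zip(g_dummy, g_true):
        dot += (gd * gt).sum()
        n_d += (gd ** 2).sum()
        n_t += (gt ** 2).sum()
    return 1.0 - dot / (n_d.sqrt() * n_t.sqrt() + 1e-12)

def total_variation(x):
    """Simple TV prior on an image batch (N, C, H, W) that favors natural images."""
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw
```

The attacker minimizes `cosine_matching_loss(...) + weight * total_variation(x_dummy)` over the dummy image, where the weight is a tuning parameter.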
iDLG: Improved Deep Leakage from Gradients
- Computer Science, ArXiv
- 2020
This paper finds that sharing gradients definitely leaks the ground-truth labels and proposes a simple but reliable approach to extract accurate data from the gradients, which is valid for any differentiable model trained with cross-entropy loss over one-hot labels and is named Improved DLG (iDLG).
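For a single sample trained with cross-entropy over one-hot labels, the gradient of the final linear layer's bias equals the softmax probabilities minus the one-hot label, so only the true class entry is negative. A minimal sketch of this label-recovery rule follows; the tiny model architecture and sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
# Tiny victim model with a final linear layer (architecture is illustrative).
model = torch.nn.Sequential(torch.nn.Linear(32, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 10))
x = torch.randn(1, 32)
y = torch.tensor([3])

loss = F.cross_entropy(model(x), y)
grads = torch.autograd.grad(loss, model.parameters())

# Gradient of the last layer's bias is (softmax_probs - one_hot_label):
# the only negative entry marks the ground-truth class.
grad_last_bias = grads[-1]
recovered_label = grad_last_bias.argmin().item()
print(recovered_label)  # 3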
Deep Leakage from Gradients
- Computer Science, NeurIPS
- 2019
This work shows that it is possible to obtain the private training data from publicly shared gradients, names this leakage Deep Leakage from Gradients, and empirically validates its effectiveness on both computer vision and natural language processing tasks.
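A minimal sketch of the gradient-matching idea: the attacker optimizes dummy inputs and labels until the gradients they induce match the shared ones. The tiny linear model, optimizer settings, and iteration count below are assumptions for illustration only, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
# Victim model and the gradients it shares (setup is illustrative).
model = torch.nn.Sequential(torch.nn.Linear(32, 10))
x_true = torch.randn(1, 32)
y_true = torch.tensor([7])
true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters())

# Attacker optimizes dummy data and a dummy label distribution so that the
# resulting gradients match the shared ones (L2 gradient-matching loss).
x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy], lr=0.1)

def closure():
    opt.zero_grad()
    loss = torch.sum(-F.softmax(y_dummy, dim=-1)
                     * F.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    match = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    return match

for _ in range(50):
    opt.step(closure)
print(((x_dummy - x_true) ** 2).mean().item())  # reconstruction error shrinks
```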
Communication-Efficient Learning of Deep Networks from Decentralized Data
- Computer Science, AISTATS
- 2017
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
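A minimal sketch of one iterative model-averaging round (FedAvg), assuming clients return full floating-point state dicts and are weighted by their local sample counts; the function name and signature are illustrative.

```python
import copy
import torch

def federated_averaging(global_model, client_states, client_sizes):
    """One FedAvg round: weighted average of client model parameters,
    with weights proportional to each client's number of local samples.
    Assumes all parameters and buffers are floating-point tensors."""
    total = float(sum(client_sizes))
    avg_state = copy.deepcopy(client_states[0])
    for key in avg_state:
        avg_state[key] = sum(
            (n / total) * state[key]
            for state, n in zip(client_states, client_sizes))
    global_model.load_state_dict(avg_state)
    return global_model
```

The gradient-leakage attacks cited on this page target exactly the per-client updates exchanged in such rounds.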
Privacy-Preserving Deep Learning via Additively Homomorphic Encryption
- Computer Science, Mathematics, IEEE Transactions on Information Forensics and Security
- 2018
This work revisits the previous work by Shokri and Shmatikov (ACM CCS 2015) and builds an enhanced system with the following properties: no information is leaked to the server, and accuracy is kept intact compared with an ordinary deep learning system trained over the combined dataset.
A Framework for Evaluating Gradient Leakage Attacks in Federated Learning
- Computer Science, ArXiv
- 2020
This paper provides formal and experimental analysis to show how adversaries can reconstruct the private local training data by simply analyzing the shared parameter update from local training, and measures, evaluates, and analyzes the effectiveness of client privacy leakage attacks under different gradient compression ratios when using communication-efficient FL protocols.
Learning Differentially Private Recurrent Language Models
- Computer Science, ICLR
- 2018
This work builds on recent advances in the training of deep networks on user-partitioned data and in privacy accounting for stochastic gradient descent, and adds user-level privacy protection to the federated averaging algorithm, which makes "large step" updates from user-level data.
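A hedged sketch of the user-level protection described: clip each client's model delta to a fixed L2 norm, average the clipped deltas, and add Gaussian noise before applying the "large step" update. Equal client weighting and the simple noise calibration shown here are simplifying assumptions; the actual algorithm also tracks the privacy budget with a moments accountant.

```python
import torch

def dp_federated_average(global_state, client_states, clip_norm=1.0, noise_mult=1.0):
    """User-level DP-FedAvg sketch: clip per-client deltas, average, add noise."""
    keys = list(global_state.keys())
    deltas = []
    for state in client_states:
        delta = {k: state[k] - global_state[k] for k in keys}
        norm = torch.sqrt(sum((delta[k] ** 2).sum() for k in keys))
        scale = min(1.0, clip_norm / (norm.item() + 1e-12))  # L2 clipping
        deltas.append({k: delta[k] * scale for k in keys})
    n = len(client_states)
    sigma = noise_mult * clip_norm / n  # noise scaled to the clipping norm
    new_state = {}
    for k in keys:
        avg = sum(d[k] for d in deltas) / n
        new_state[k] = global_state[k] + avg + sigma * torch.randn_like(avg)
    return new_state
```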