PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage

Daniel Scheliga, Patrick Mäder, and Marco Seeland. 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).

Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients. Although training data entirely resides with the clients, recent work shows that training data can be reconstructed from such exchanged gradient information. To enhance privacy, gradient perturbation techniques have been proposed. However, they come at the cost of reduced model performance, increased convergence time, or increased data demand. In this paper, we…

Combining Variational Modeling with Partial Gradient Perturbation to Prevent Deep Gradient Leakage

It is shown that variational modeling induces stochasticity on PRECODE’s and its subsequent layers’ gradients that prevents gradient attacks from convergence, and that the approach requires less gradient perturbation to effectively preserve privacy without harming model performance.

Dropout is NOT All You Need to Prevent Gradient Leakage

It is argued that privacy inducing changes to model architectures alone cannot be assumed to reliably protect from gradient leakage and therefore should be combined with complementary defense mechanisms.

Recover User’s Private Training Image Data by Gradient in Federated Learning

This study proposes a privacy attack system, the Single-Sample Reconstruction Attack System (SSRAS), which can carry out image reconstruction regardless of whether the label can be determined, and introduces the Rank Analysis Index (RA-I) to measure whether the user's raw image data can be reconstructed.

Directional Privacy for Deep Learning

Directional privacy is applied, via a mechanism based on the von Mises-Fisher (VMF) distribution, to perturb gradients in terms of angular distance so that gradient direction is broadly preserved.

Gradient Obfuscation Gives a False Sense of Security in Federated Learning

It is shown that commonly adopted gradient postprocessing procedures, such as gradient quantization, gradient sparsification, and gradient perturbation, may give a false sense of security in federated learning and argued that privacy enhancement should not be treated as a byproduct of gradient compression.
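As an illustration of the kind of postprocessing the paper warns about, here is a minimal top-k gradient sparsification sketch (a common compression step, not the paper's own code): the gradient is heavily compressed, yet the surviving entries are exactly the largest-magnitude ones an attacker would most want.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient; zero the rest.
    A common compression step that should not be mistaken for a privacy
    mechanism."""
    flat = grad.ravel().copy()
    idx = np.argsort(np.abs(flat))[:-k]  # indices of all but the top-k entries
    flat[idx] = 0.0
    return flat.reshape(grad.shape)

g = np.array([[0.1, -2.0, 0.3], [1.5, -0.2, 0.05]])
sparse_g = topk_sparsify(g, k=2)
# Only the two largest-magnitude entries (-2.0 and 1.5) survive.
```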

Defense against Privacy Leakage in Federated Learning

This paper presents a straightforward yet effective defense strategy based on obfuscating the gradients of sensitive data with those of concealing data via a gradient projection technique, which offers a high level of protection while preserving FL performance.
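One plausible reading of "gradient projection" is removing from the sensitive gradient its component along the concealing gradient's direction; the paper's exact construction may differ, so the sketch below is only a hypothetical illustration of that idea.

```python
import numpy as np

def project_out(g_sensitive, g_conceal):
    """Remove the component of the sensitive gradient that lies along the
    concealing gradient's direction (an assumed reading of 'gradient
    projection', not the paper's verified construction)."""
    u = g_conceal / np.linalg.norm(g_conceal)
    return g_sensitive - np.dot(g_sensitive, u) * u

g_s = np.array([3.0, 1.0])  # gradient of the sensitive data
g_c = np.array([1.0, 0.0])  # gradient of the concealing data
g_shared = project_out(g_s, g_c)
# The shared gradient is orthogonal to the concealing direction.
```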

Bayesian Framework for Gradient Leakage

This work proposes a theoretical framework that enables, for the first time, analysis of the Bayes optimal adversary phrased as an optimization problem and demonstrates that existing leakage attacks can be seen as approximations of this optimal adversary with different assumptions on the probability distributions of the input data and gradients.

LAMP: Extracting Text from Gradients with Language Model Priors

This work proposes LAMP, a novel attack tailored to textual data, that successfully reconstructs original text from gradients, and is the first to recover inputs from batch sizes larger than 1 for textual models.

Gradient Leakage Defense with Key-Lock Module for Federated Learning

The theoretical underpinnings of why gradients can leak private information are discussed and theoretical proof of the method's effectiveness is provided, demonstrating the robustness of the proposed approach in both maintaining model performance and defending against gradient leakage attacks.

Data Leakage in Federated Averaging

A new optimization-based attack is proposed that successfully attacks FedAvg: using automatic differentiation, it simulates the client's update, generating the unobserved parameters, and optimizes the recovered labels and inputs so that the simulated update matches the received client update.

SAPAG: A Self-Adaptive Privacy Attack From Gradients

This paper proposes a more general privacy attack from gradients, SAPAG, which uses a Gaussian kernel of the gradient difference as a distance measure, and demonstrates that SAPAG can reconstruct the training data on DNNs with different weight initializations and on DNNs in any training phase.

R-GAP: Recursive Gradient Attack on Privacy

This research provides a closed-form recursive procedure to recover data from gradients in deep neural networks and proposes a rank analysis method, which can be used to estimate a network architecture's risk of a gradient attack.
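The closed-form flavor of such attacks can be seen in a toy single-layer case: for a linear layer y = W @ x and a single sample, the weight gradient is the rank-1 outer product dL/dW = delta xᵀ, so the input x can be read off any nonzero row up to an unknown scale. This is only an illustrative special case, not the paper's full recursive procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)       # private input to the layer
delta = rng.normal(size=3)   # upstream error signal dL/dy
grad_W = np.outer(delta, x)  # the weight gradient an attacker observes

# The observed gradient is rank 1, so each row is a scalar multiple of x.
assert np.linalg.matrix_rank(grad_W) == 1

# Recover x up to scale from the first row; the true x[0] is used here only
# to fix the unknown scale so the recovery can be checked.
x_hat = grad_W[0] / grad_W[0, 0] * x[0]
```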

Theory-Oriented Deep Leakage from Gradients via Linear Equation Solver

This paper proposes a novel data reconstruction attack on fully-connected neural networks, extends the attack to more commercial convolutional neural network architectures, and proves that the existence of exclusively activated neurons is critical to the separability of the activation patterns of different samples.

Inverting Gradients - How easy is it to break privacy in federated learning?

It is shown that it is actually possible to faithfully reconstruct images at high resolution from knowledge of their parameter gradients, and it is demonstrated that such a break of privacy is possible even for trained deep networks.

iDLG: Improved Deep Leakage from Gradients

This paper finds that sharing gradients definitely leaks the ground-truth labels and proposes a simple but reliable approach to extract accurate data from the gradients, which is valid for any differentiable model trained with cross-entropy loss over one-hot labels and is named Improved DLG (iDLG).
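The label-leakage observation can be demonstrated directly: for cross-entropy over softmax logits with a one-hot label, the gradient with respect to the logits (and hence the last layer's bias) is softmax(z) - y, whose only negative entry sits at the ground-truth class. A minimal numpy sketch of this key fact behind iDLG's label recovery:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

z = np.array([1.2, -0.3, 0.8, 2.1])  # logits of the last layer
true_label = 2
y = np.eye(4)[true_label]            # one-hot ground-truth label

# Gradient of cross-entropy w.r.t. the logits, visible in shared gradients
# as the last layer's bias gradient: softmax(z) - y.
bias_grad = softmax(z) - y

# Softmax outputs lie in (0, 1), so only the true class's entry is negative.
recovered = int(np.argmin(bias_grad))
```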

Deep Leakage from Gradients

This work shows that it is possible to obtain the private training data from the publicly shared gradients, and names this leakage as Deep Leakage from Gradient and empirically validate the effectiveness on both computer vision and natural language processing tasks.
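The core idea can be sketched in a few lines: optimize a dummy input so that the gradient it produces matches the gradient the client shared. The toy below uses a scalar-output linear model with squared loss and a known label, so only the projection w @ x is identifiable; real attacks exploit many gradient entries to pin down the full input. This is an illustration of the gradient-matching principle, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=3)           # model weights (known to the attacker)
x_true = rng.normal(size=3)      # private client input
y = 1.0                          # label (assumed known here)
g_shared = (w @ x_true - y) * w  # weight gradient the client shares

def client_grad(x):
    """Gradient of 0.5 * (w @ x - y)^2 with respect to w."""
    return (w @ x - y) * w

x_dummy = rng.normal(size=3)
lr = 0.1 / np.dot(w, w) ** 2     # step size chosen for stable convergence
for _ in range(200):
    diff = client_grad(x_dummy) - g_shared
    # gradient of ||client_grad(x) - g_shared||^2 w.r.t. x is prop. to w (w . diff)
    x_dummy -= lr * w * (w @ diff)
# After optimization, the dummy input reproduces the shared gradient, i.e.
# its projection w @ x_dummy matches the private w @ x_true.
```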

Communication-Efficient Learning of Deep Networks from Decentralized Data

This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
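The shape of iterative model averaging can be sketched in a toy setting: each client takes a few local gradient steps on its own data, and the server averages the resulting models each round. The mean-estimation objective below is a stand-in chosen for brevity, not the paper's models or datasets.

```python
import numpy as np

def local_update(theta, data, lr=0.1, steps=5):
    """A few local gradient steps on 0.5 * (theta - data.mean())^2."""
    for _ in range(steps):
        theta = theta - lr * (theta - data.mean())
    return theta

client_data = [np.array([1.0, 2.0]), np.array([3.0, 5.0])]  # stays on-device

theta = 0.0                       # global model
for rnd in range(20):             # communication rounds
    local_models = [local_update(theta, d) for d in client_data]
    theta = float(np.mean(local_models))  # server-side model average
# theta converges to the average of the client means, here (1.5 + 4.0) / 2.
```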

Privacy-Preserving Deep Learning via Additively Homomorphic Encryption

This work revisits the previous work by Shokri and Shmatikov (ACM CCS 2015) and builds an enhanced system with the following properties: no information is leaked to the server, and accuracy is kept intact compared with that of an ordinary deep learning system trained over the combined dataset.

A Framework for Evaluating Gradient Leakage Attacks in Federated Learning

This paper provides formal and experimental analysis to show how adversaries can reconstruct private local training data by simply analyzing the shared parameter update from local training, and it measures, evaluates, and analyzes the effectiveness of client privacy leakage attacks under different gradient compression ratios when using communication-efficient FL protocols.

Learning Differentially Private Recurrent Language Models

This work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent, and adds user-level privacy protection to the federated averaging algorithm, which makes "large step" updates from user-level data.