GRNN: Generative Regression Neural Network—A Data Leakage Attack for Federated Learning

@article{Ren2022GRNNGR,
  title={GRNN: Generative Regression Neural Network—A Data Leakage Attack for Federated Learning},
  author={Hanchi Ren and Jingjing Deng and Xianghua Xie},
  journal={ACM Transactions on Intelligent Systems and Technology (TIST)},
  year={2022},
  volume={13},
  pages={1--24}
}
Data privacy has become an increasingly important issue in Machine Learning (ML), where many approaches have been developed to tackle this challenge, e.g., cryptography (Homomorphic Encryption (HE), Differential Privacy (DP)) and collaborative training (Secure Multi-Party Computation (MPC), Distributed Learning, and Federated Learning (FL)). These techniques have a particular focus on data encryption or secure local computation. They transfer the intermediate information to the third party to… 

Differential Privacy for Deep and Federated Learning: A Survey

This survey examines the gap between the theory and practical application of DP in terms of accuracy and robustness, and illustrates the probability distributions that satisfy the DP mechanism, together with their properties and use cases.
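As a concrete instance of one such distribution, the classic Laplace mechanism adds noise calibrated to a query's sensitivity. A minimal sketch (the counting query and all parameters are invented for illustration):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace noise of scale sensitivity/epsilon (epsilon-DP)."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Counting query: adding or removing one record changes the count by at most 1,
# so the sensitivity is 1. Smaller epsilon means more noise and stronger privacy.
rng = np.random.default_rng(42)
noisy_count = laplace_mechanism(true_value=100, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Averaged over many releases the noise cancels, but any single release hides an individual's contribution.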

FedBoosting: Federated Learning with Gradient Protected Boosting for Text Recognition

This paper proposes a novel boosting algorithm for FL to address both the generalization and gradient leakage issues, as well as achieve faster convergence in gradient-based optimization and demonstrates the proposed Federated Boosting (FedBoosting) method achieves noticeable improvements in both prediction accuracy and run-time efficiency in a visual text recognition task on public benchmark.

Efficient Split Learning with Non-iid Data

  • Yuanqin Cai, Tongquan Wei
  • Computer Science
    2022 23rd IEEE International Conference on Mobile Data Management (MDM)
  • 2022
An efficient parallel split learning algorithm with a distillation loss function instead of parameter synchronization reduces the training time without losing the accuracy and an incentive mechanism based on Stackelberg Game is designed to adapt to the training environment with non-iid mobile data.

ReuseKNN: Neighborhood Reuse for Differentially-Private KNN-Based Recommendations

ReuseKNN is introduced, a novel differentially-private KNN-based recommender system that can substantially reduce the number of users that need to be protected with DP, while outperforming related approaches in terms of accuracy.

A New Dimensionality Reduction Method Based on Hensel's Compression for Privacy Protection in Federated Learning

A two layers privacy protection approach to overcome the limitations of the existing DP-based approaches and overcomes the problem of privacy leakage due to composition by applying DP only once before the training; clients train their local model on the privacy-preserving dataset generated by the second layer.

Training Mixed-Domain Translation Models via Federated Learning

This work demonstrates that with slight modifications in the training process, neural machine translation (NMT) engines can be easily adapted when an FL-based aggregation is applied to fuse different domains, and proposes a novel technique to dynamically control the communication bandwidth by selecting impactful parameters during FL updates.

PerFED-GAN: Personalized Federated Learning via Generative Adversarial Networks

A federated learning method based on co-training and generative adversarial networks (GANs) is proposed that allows each client to design its own model and participate in federated learning training independently, without sharing any model architecture or parameter information with other clients or a central server.

Threats, attacks and defenses to federated learning: issues, taxonomy and perspectives

This work surveys the threats, attacks and defenses to FL throughout the whole process of FL in three phases, including Data and Behavior Auditing Phase, Training Phase and Predicting Phase, and highlights that establishing a trusted FL requires adequate measures to mitigate security and privacy threats at each phase.

References

Showing 1–10 of 76 references

The mnist database of handwritten digits

The MNIST database of handwritten digits contains 60,000 training and 10,000 test examples of size-normalized, centered 28×28 grayscale images, and is a standard benchmark for image classification.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.

Image Quality Metrics: PSNR vs. SSIM

  • A. HoréD. Ziou
  • Computer Science
    2010 20th International Conference on Pattern Recognition
  • 2010
A simple mathematical relationship is derived between the peak-signal-to-noise ratio and the structural similarity index measure which works for various kinds of image degradations such as Gaussian blur, additive Gaussian white noise, jpeg and jpeg2000 compression.
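The PSNR side of that comparison fits in a few lines; below is a minimal NumPy sketch (the image sizes and noise level are invented for illustration):

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a distorted image."""
    ref = np.asarray(ref, dtype=np.float64)
    img = np.asarray(img, dtype=np.float64)
    mse = np.mean((ref - img) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: an 8-bit image against a copy with Gaussian noise added.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0.0, 5.0, size=ref.shape), 0.0, 255.0)
```

PSNR depends only on the mean squared error, which is why a closed-form relationship to SSIM can be derived for fixed degradation types, as the paper above shows.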

Labeled Faces in the Wild: A Database forStudying Face Recognition in Unconstrained Environments

The database contains labeled face photographs spanning the range of conditions typically encountered in everyday life, and exhibits “natural” variability in factors such as pose, lighting, race, accessories, occlusions, and background.

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

Inverting Gradients - How easy is it to break privacy in federated learning?

It is shown that it is actually possible to faithfully reconstruct images at high resolution from knowledge of their parameter gradients, and that such a breach of privacy is possible even for trained deep networks.

iDLG: Improved Deep Leakage from Gradients

This paper finds that sharing gradients definitely leaks the ground-truth labels and proposes a simple but reliable approach to extract accurate data from the gradients, which is valid for any differentiable model trained with cross-entropy loss over one-hot labels and is named Improved DLG (iDLG).
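The label-leakage observation reduces to a sign check on the last layer's gradient. A minimal NumPy sketch of the idea, with all weights and dimensions invented for illustration:

```python
import numpy as np

# Toy last linear layer of a classifier.
rng = np.random.default_rng(0)
num_classes, dim = 10, 64
W = rng.normal(size=(num_classes, dim))
x = rng.uniform(0.0, 1.0, size=dim)  # non-negative input features ("pixels")
y = 3                                # ground-truth label, hidden from the attacker

# Cross-entropy gradient w.r.t. the layer weights: dL/dW_i = (p_i - 1[i == y]) * x.
logits = W @ x
p = np.exp(logits - logits.max())
p /= p.sum()
dlogits = p.copy()
dlogits[y] -= 1.0
dW = np.outer(dlogits, x)            # this is what a client would share in FL

# iDLG observation: p_y - 1 < 0 while p_i > 0 for i != y, so with non-negative
# inputs the gradient row of the true class is the only one with a negative sum.
inferred = int(np.argmin(dW.sum(axis=1)))
```

The check is exact for any differentiable model whose final layer is linear with cross-entropy over one-hot labels, which is the setting the paper describes.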

Deep Leakage from Gradients

This work shows that it is possible to obtain the private training data from the publicly shared gradients, and names this leakage as Deep Leakage from Gradient and empirically validate the effectiveness on both computer vision and natural language processing tasks.
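The gradient-matching idea can be reproduced on a toy linear regression: the attacker knows the shared weights and the leaked gradient, and optimises a dummy input/target pair until its gradient matches. A minimal NumPy sketch (model, dimensions, and learning rate are invented for illustration; in this toy model the input is only recoverable up to a scale ambiguity):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
w = rng.normal(size=dim)            # shared model weights (known to the attacker)
x_true = rng.normal(size=dim)       # private training input
t_true = 2.0                        # private regression target
g = (w @ x_true - t_true) * x_true  # leaked gradient of 0.5*(w @ x - t)**2 w.r.t. w

# Attack: optimise a dummy pair (x_d, t_d) so its gradient matches the leaked one.
x_d = rng.normal(size=dim)
t_d = 0.0
lr = 0.002
for _ in range(100_000):
    r = w @ x_d - t_d
    diff = r * x_d - g              # gradient mismatch
    # Analytic derivatives of ||r*x_d - g||^2 w.r.t. x_d and t_d.
    grad_x = 2.0 * (diff @ x_d) * w + 2.0 * r * diff
    grad_t = -2.0 * (diff @ x_d)
    x_d -= lr * grad_x
    t_d -= lr * grad_t

match = float(np.sum(((w @ x_d - t_d) * x_d - g) ** 2))
```

The paper's attack applies the same matching objective to deep networks, optimising dummy images and labels with a second-order optimiser instead of plain gradient descent.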

A training algorithm for optimal margin classifiers

A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions.

Towards Personalized Federated Learning

This survey explores the domain of personalized FL (PFL) to address the fundamental challenges of FL on heterogeneous data, a universal characteristic inherent in all real-world datasets.