A Quantitative Metric for Privacy Leakage in Federated Learning

@article{Liu2021AQM,
  title={A Quantitative Metric for Privacy Leakage in Federated Learning},
  author={Y. Liu and Xinghua Zhu and Jianzong Wang and Jing Xiao},
  journal={ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2021},
  pages={3065-3069}
}
In the federated learning system, parameter gradients are shared among participants and the central modulator, while the original data never leave their protected source domain. However, the gradient itself might carry enough information for precise inference of the original data. When clients report their parameter gradients to the central server, their datasets are exposed to inference attacks from adversaries. In this paper, we propose a quantitative metric based on mutual information for clients…
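The abstract describes a mutual-information view of gradient leakage but does not give the exact estimator, so the following is only a minimal illustrative sketch: it estimates I(X; G) between a scalar summary of a client's data and a scalar summary of the reported gradient with a histogram plug-in estimator. All function and variable names are hypothetical, not the paper's formulation.

```python
# Illustrative sketch only: estimate I(X; G) from paired samples of a data
# summary X and a gradient summary G with a histogram (plug-in) estimator.
import numpy as np

def histogram_mutual_information(x, g, bins=32):
    """Plug-in estimate of I(X; G) in nats from paired scalar samples."""
    joint, _, _ = np.histogram2d(x, g, bins=bins)
    p_xy = joint / joint.sum()                 # empirical joint distribution
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal of X
    p_g = p_xy.sum(axis=0, keepdims=True)      # marginal of G
    nonzero = p_xy > 0
    return float(np.sum(p_xy[nonzero] * np.log(p_xy[nonzero] / (p_x @ p_g)[nonzero])))

# Toy usage: gradients that track the data leak more information.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)                        # per-round data summary
g_leaky = 2.0 * x + 0.1 * rng.normal(size=5000)  # gradient summary depends on data
g_private = rng.normal(size=5000)                # gradient summary independent of data
print(histogram_mutual_information(x, g_leaky))    # large value -> high leakage
print(histogram_mutual_information(x, g_private))  # near zero -> little leakage
```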
1 Citation

Federated Learning with Dynamic Transformer for Text to Speech
TLDR
The federated dynamic transformer is proposed, which achieves faster and more stable convergence in the training phase, significantly reduces communication time, and approaches the quality of a centrally trained Transformer-TTS as the number of clients increases.

References

SHOWING 1-10 OF 24 REFERENCES
FedSmart: An Auto Updating Federated Learning Optimization Mechanism
TLDR
A performance-based parameter return method for optimization is introduced: it optimizes a different model for each client by sharing global gradients, holds out part of each client's data as a local validation set, and uses the accuracy the model achieves in round t to determine the aggregation weights for the next round.
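As a rough illustration of the accuracy-based weighting described in this summary, the hypothetical sketch below aggregates client updates in proportion to their round-t validation accuracy; the function and variable names are assumptions, not FedSmart's actual interface.

```python
# Hypothetical sketch: weight each client's update by its validation accuracy.
import numpy as np

def weighted_aggregate(client_updates, val_accuracies):
    """Average per-client parameter updates, weighted by validation accuracy."""
    acc = np.asarray(val_accuracies, dtype=float)
    weights = acc / acc.sum()                      # normalize accuracies into weights
    stacked = np.stack(client_updates)             # shape: (num_clients, num_params)
    return np.tensordot(weights, stacked, axes=1)  # weighted average over clients

# Toy usage with three clients and a two-parameter model.
updates = [np.array([0.1, -0.2]), np.array([0.3, 0.0]), np.array([-0.1, 0.4])]
accuracies = [0.92, 0.75, 0.60]
print(weighted_aggregate(updates, accuracies))
```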
Empirical Studies of Institutional Federated Learning For Natural Language Processing
TLDR
This paper demonstrates federated training of a popular NLP model, TextCNN, with applications in sentence intent classification; unlike previous client-level privacy protection schemes, the proposed differentially private federated learning procedure is defined at the sample level of each dataset.
ABY3: A Mixed Protocol Framework for Machine Learning
TLDR
A general framework for privacy-preserving machine learning is designed, implemented, and used to obtain new solutions for training linear regression, logistic regression, and neural network models, with variants of each building block that are secure against malicious adversaries who deviate arbitrarily.
Federated Learning: Strategies for Improving Communication Efficiency
TLDR
Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
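To make the "sketched update" idea concrete, here is a minimal, assumption-laden sketch that subsamples an update with a random mask and quantizes the surviving coordinates to a single shared magnitude; the paper's full scheme (random rotations, structured low-rank updates) is not reproduced.

```python
# Minimal sketch of a sketched update: random subsampling plus 1-bit quantization.
import numpy as np

def sketch_update(update, keep_fraction=0.1, rng=None):
    """Compress a dense update into a sparse, sign-plus-scale version."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(update.shape) < keep_fraction       # random subsampling mask
    compressed = np.zeros_like(update)
    if mask.any():
        scale = np.abs(update[mask]).mean()               # one shared magnitude
        compressed[mask] = np.sign(update[mask]) * scale  # keep only the sign per coord
    return compressed

rng = np.random.default_rng(1)
full_update = rng.normal(size=10_000)
compressed = sketch_update(full_update, keep_fraction=0.05, rng=rng)
print(np.count_nonzero(compressed), "of", full_update.size, "coordinates kept")
```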
Deep Leakage from Gradients
TLDR
This work shows that it is possible to obtain the private training data from the publicly shared gradients, names this leakage Deep Leakage from Gradients, and empirically validates its effectiveness on both computer vision and natural language processing tasks.
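The attack summarized here reconstructs private data by matching gradients. The sketch below is an illustrative gradient-matching loop in PyTorch on a toy linear model; it follows the general recipe (optimize dummy inputs and soft labels so their gradients match the shared ones) but is not the authors' exact setup, and all dimensions are made up.

```python
# Illustrative gradient-matching ("deep leakage") sketch on a toy linear model.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 4)                      # tiny shared model

# Victim computes a gradient on its private sample and shares it.
x_private = torch.randn(1, 16)
y_private = torch.tensor([2])
loss = F.cross_entropy(model(x_private), y_private)
shared_grads = torch.autograd.grad(loss, model.parameters())

# Attacker optimizes dummy data and a soft dummy label to reproduce those gradients.
x_dummy = torch.randn(1, 16, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)     # soft label logits
optimizer = torch.optim.LBFGS([x_dummy, y_dummy], lr=0.5)

def closure():
    optimizer.zero_grad()
    dummy_loss = torch.sum(-F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - sg) ** 2).sum() for dg, sg in zip(dummy_grads, shared_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    optimizer.step(closure)

print("reconstruction error:", float(((x_dummy - x_private) ** 2).mean()))
```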
Network Coding for Federated Learning Systems
TLDR
Optimizing the network structure of federated learning systems can reduce communication complexity by considering the correlation of the transmission channels.
Federated Optimization: Distributed Machine Learning for On-Device Intelligence
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number…
Calibrating Noise to Sensitivity in Private Data Analysis
TLDR
The study is extended to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f, i.e., the maximum amount by which a single argument to f can change its output.
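This calibration idea is commonly instantiated as the Laplace mechanism: add noise with scale Δf/ε, where Δf is the sensitivity of f. The snippet below is a minimal sketch under that assumption; the query and parameter values are illustrative only.

```python
# Minimal Laplace-mechanism sketch: noise scale calibrated to sensitivity / epsilon.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity / epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query changes by at most 1 when one record changes.
rng = np.random.default_rng(42)
exact_count = 128
print(laplace_mechanism(exact_count, sensitivity=1.0, epsilon=0.5, rng=rng))
```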
Federated Learning of Unsegmented Chinese Text Recognition Model
TLDR
This paper applies federated learning with a deep convolutional network to perform variable-length text string recognition on a large corpus, and shows that federated text recognition models can achieve similar or even higher accuracy than models trained in a conventional centralized deep learning framework.
Online Adaptative Curriculum Learning for GANs
TLDR
Experimental results show that the proposed framework for training the generator against an ensemble of discriminator networks improves sample quality and diversity over existing baselines by effectively learning a curriculum, and supports the claim that weaker discriminators have higher entropy, improving mode coverage.