Corpus ID: 221507774

ESMFL: Efficient and Secure Models for Federated Learning

@article{Lin2020ESMFLEA,
  title={ESMFL: Efficient and Secure Models for Federated Learning},
  author={Sheng Lin and Chenghong Wang and Hongjia Li and Jieren Deng and Yanzhi Wang and Caiwen Ding},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.01867}
}
Deep Neural Networks are widely applied to various domains, and the successful deployment of these applications depends on the availability of big data. However, the massive data collection required for deep neural networks raises potential privacy issues and also consumes large amounts of communication bandwidth. To address this problem, we propose a privacy-preserving method for the federated learning distributed system, operated on Intel Software Guard Extensions, a set of…
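The snippet is cut off before the method's details, so the following is only a rough sketch of the overall shape such a design implies: clients sparsify their updates, and aggregation happens inside a trusted component standing in for the SGX enclave. The MockEnclave class, the sparsification rate, and the sealing placeholder are illustrative assumptions, not the paper's protocol or a real SGX integration.

```python
# Heavily hedged sketch: sparse client updates aggregated by a mock "enclave".
import numpy as np

rng = np.random.default_rng(7)

class MockEnclave:
    """Stand-in for the trusted SGX enclave that sees plaintext updates."""
    def aggregate(self, sealed_updates):
        updates = [self._unseal(u) for u in sealed_updates]
        return np.mean(updates, axis=0)          # plaintext work stays "inside"

    def _unseal(self, sealed):
        return sealed                            # placeholder for real decryption

def client_update(w_global, keep_frac=0.05):
    update = rng.normal(scale=0.01, size=w_global.size)   # stand-in for local training
    k = max(1, int(keep_frac * update.size))
    idx = np.argpartition(np.abs(update), -k)[-k:]
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]                    # sparsify to save bandwidth
    return sparse                                # would be sealed for the enclave

w_global = np.zeros(10_000)
enclave = MockEnclave()
for rnd in range(3):
    sealed = [client_update(w_global) for _ in range(4)]
    w_global += enclave.aggregate(sealed)
```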

Citations

A Secure and Efficient Federated Learning Framework for NLP
TLDR
SEFL is proposed, a secure and efficient federated learning framework for NLP that eliminates the need for trusted entities, achieves similar or even better model accuracy compared with existing FL designs, and is resilient to client dropouts.
Confidential Machine Learning Computation in Untrusted Environments: A Systems Security Perspective
TLDR
A systematic and comprehensive survey is conducted, classifying attack vectors and mitigations in confidential ML computation in untrusted environments, analyzing the complex security requirements in multi-party scenarios, and summarizing engineering challenges in confidential ML implementation.
TAG: Gradient Attack on Transformer-based Language Models
TLDR
This paper formulates the gradient attack problem on Transformer-based language models and proposes a gradient attack algorithm, TAG, to recover the local training data, showing that compared with DLG (Zhu et al., 2019), TAG works well on more weight distributions in recovering private training data and is stronger than previous approaches on larger models, smaller dictionary sizes, and smaller input lengths.

References

SHOWING 1-10 OF 32 REFERENCES
Model Pruning Enables Efficient Federated Learning on Edge Devices
TLDR
PruneFL is proposed, a novel FL approach with adaptive and distributed parameter pruning that adapts the model size during FL to reduce both communication and computation overhead and minimize the overall training time, while maintaining accuracy similar to the original model.
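A minimal NumPy sketch of the general idea: prune the model between FL stages so that clients only transmit the surviving coordinates. The keep-fraction schedule and the plain magnitude criterion below are illustrative assumptions, not PruneFL's adaptive, training-time-minimizing rule.

```python
# Illustrative sketch: magnitude-based pruning to shrink FL communication.
import numpy as np

rng = np.random.default_rng(6)
d = 50_000
w_global = rng.normal(size=d)

def derive_mask(w, keep_frac):
    k = max(1, int(keep_frac * w.size))
    idx = np.argpartition(np.abs(w), -k)[-k:]     # largest-magnitude weights
    mask = np.zeros(w.size, dtype=bool)
    mask[idx] = True
    return mask

keep_schedule = [0.5, 0.3, 0.2, 0.1]              # shrink the model over time
for stage, keep_frac in enumerate(keep_schedule):
    mask = derive_mask(w_global, keep_frac)
    for rnd in range(5):                          # FL rounds within this stage
        client_updates = [rng.normal(scale=0.01, size=mask.sum())
                          for _ in range(8)]      # clients send masked coords only
        w_global[mask] += np.mean(client_updates, axis=0)
    print(f"stage {stage}: transmitting {mask.sum()} of {d} parameters")
```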
Communication-Efficient Learning of Deep Networks from Decentralized Data
TLDR
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
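A minimal NumPy sketch of federated averaging on a toy linear-regression task: each client runs local SGD on its own shard, and the server averages the resulting models weighted by shard size. The model, data, and hyperparameters are illustrative.

```python
# Toy FederatedAveraging: local SGD per client, size-weighted model averaging.
import numpy as np

rng = np.random.default_rng(0)
d = 10                                    # model dimension
w_true = rng.normal(size=d)

def make_client(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(n) for n in (50, 200, 80)]   # unevenly sized shards

def local_sgd(w, X, y, epochs=5, lr=0.01, batch=16):
    w = w.copy()
    for _ in range(epochs):
        idx = rng.permutation(len(y))
        for start in range(0, len(y), batch):
            b = idx[start:start + batch]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

w_global = np.zeros(d)
for rnd in range(20):                     # communication rounds
    local_models = [local_sgd(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # FedAvg: weighted average of local models by number of samples
    w_global = np.average(local_models, axis=0, weights=sizes)

print("distance to w_true:", np.linalg.norm(w_global - w_true))
```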
Federated Learning: Strategies for Improving Communication Efficiency
TLDR
Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g., either low-rank or a random mask; and sketched updates, where a full model update is learned and then compressed using a combination of quantization, random rotations, and subsampling.
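A hedged sketch of the "sketched update" idea: the client subsamples a random set of coordinates from its full update and uniformly quantizes the surviving values before upload. Random rotations are omitted, and the keep fraction and bit width are illustrative choices rather than the paper's settings.

```python
# Illustrative sketched update: random subsampling plus uniform quantization.
import numpy as np

rng = np.random.default_rng(1)

def sketch_update(update, keep_frac=0.1, bits=4):
    d = update.size
    idx = rng.choice(d, size=max(1, int(keep_frac * d)), replace=False)
    vals = update[idx]
    lo, hi = vals.min(), vals.max()
    levels = 2 ** bits - 1
    q = np.round((vals - lo) / (hi - lo + 1e-12) * levels).astype(np.uint8)
    return idx, q, lo, hi                  # what actually gets transmitted

def unsketch(idx, q, lo, hi, d, keep_frac=0.1, bits=4):
    levels = 2 ** bits - 1
    vals = q.astype(float) / levels * (hi - lo) + lo
    out = np.zeros(d)
    out[idx] = vals / keep_frac            # rescale to correct for subsampling
    return out

update = rng.normal(size=10_000)
idx, q, lo, hi = sketch_update(update)
approx = unsketch(idx, q, lo, hi, update.size)
print("compression ratio ~", update.nbytes / (idx.nbytes + q.nbytes + 16))
```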
Federated Optimization: Distributed Machine Learning for On-Device Intelligence
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes.
Differentially Private Federated Learning: A Client Level Perspective
TLDR
The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance, and empirical studies suggest that given a sufficiently large number of participating clients, this procedure can maintain client-level differential privacy at only a minor cost in model performance.
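A minimal sketch of client-level differentially private aggregation: clip each client update to a norm bound, average, and add Gaussian noise calibrated to that bound. The clip norm and noise multiplier below are illustrative and not tied to a specific (epsilon, delta) budget.

```python
# Illustrative client-level DP aggregation: clip, average, add Gaussian noise.
import numpy as np

rng = np.random.default_rng(2)

def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.1):
    clipped = []
    for u in updates:
        scale = min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
        clipped.append(u * scale)                 # bound each client's influence
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(updates)
    return avg + rng.normal(scale=sigma, size=avg.shape)

updates = [rng.normal(size=100) for _ in range(50)]   # 50 participating clients
noisy_global_update = dp_aggregate(updates)
```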
cpSGD: Communication-efficient and differentially-private distributed SGD
TLDR
This work extends and improves the previous analysis of the Binomial mechanism, showing that it achieves nearly the same utility as the Gaussian mechanism while requiring fewer representation bits, which can be of independent interest.
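A hedged sketch of the underlying mechanism: quantize gradients to an integer grid and add centered Binomial noise, so the transmitted message stays low-precision. The grid step and Binomial parameters are illustrative, not the calibrated choices analyzed in the paper.

```python
# Illustrative Binomial mechanism: integer quantization plus discrete noise.
import numpy as np

rng = np.random.default_rng(3)

def binomial_mechanism(grad, step=0.01, N=64, p=0.5):
    q = np.round(grad / step).astype(np.int64)              # quantize to grid
    noise = rng.binomial(N, p, size=q.shape) - int(N * p)   # centered Binomial
    return q + noise, step                                   # integers on the wire

def dequantize(q_noisy, step):
    return q_noisy * step

grad = rng.normal(scale=0.05, size=1000)
q_noisy, step = binomial_mechanism(grad)
approx = dequantize(q_noisy, step)
print("mean abs error:", np.mean(np.abs(approx - grad)))
```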
Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
TLDR
This paper finds that 99.9% of the gradient exchange in distributed SGD is redundant and proposes Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth, enabling large-scale distributed training on inexpensive commodity 1 Gbps Ethernet and facilitating distributed training on mobile devices.
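A minimal sketch of the sparsification core of DGC: accumulate the gradient locally, transmit only the top-k entries by magnitude, and carry the remainder forward as a residual. Momentum correction, warm-up, and gradient clipping from the paper are omitted.

```python
# Illustrative top-k gradient sparsification with local error accumulation.
import numpy as np

rng = np.random.default_rng(4)

class TopKCompressor:
    def __init__(self, dim, keep_frac=0.001):
        self.residual = np.zeros(dim)
        self.k = max(1, int(keep_frac * dim))

    def compress(self, grad):
        acc = self.residual + grad
        idx = np.argpartition(np.abs(acc), -self.k)[-self.k:]  # top-k indices
        values = acc[idx]
        self.residual = acc.copy()
        self.residual[idx] = 0.0          # keep what was not transmitted
        return idx, values                # sparse message to the server

comp = TopKCompressor(dim=100_000)
for step in range(10):
    grad = rng.normal(size=100_000)       # stand-in for a real gradient
    idx, values = comp.compress(grad)
print("sent", len(values), "of", 100_000, "entries per step")
```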
Dynamic Network Surgery for Efficient DNNs
TLDR
A novel network compression method called dynamic network surgery is proposed, which can remarkably reduce network complexity through on-the-fly connection pruning and is shown to outperform the recent pruning method by considerable margins.
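A hedged sketch of the prune-and-splice loop: weights keep receiving dense updates, a mask removes entries whose magnitude falls below a lower threshold, and previously pruned entries are spliced back in if they grow past an upper threshold. The thresholds and the random stand-in gradient are placeholders.

```python
# Illustrative dynamic network surgery: dense updates with prune/splice mask.
import numpy as np

rng = np.random.default_rng(5)
W = rng.normal(size=(64, 64))
mask = np.ones_like(W)

def surgery_step(W, mask, grad, lr=0.01, t_low=0.05, t_high=0.10):
    W = W - lr * grad                  # dense update, even for masked weights
    mask = mask.copy()
    mask[np.abs(W) < t_low] = 0.0      # prune small weights
    mask[np.abs(W) > t_high] = 1.0     # splice: revive weights that grew back
    return W, mask

for _ in range(100):
    grad = rng.normal(scale=0.05, size=W.shape)   # stand-in for a real gradient
    W, mask = surgery_step(W, mask, grad)

effective_W = W * mask                  # what the forward pass would use
print("sparsity:", 1.0 - mask.mean())
```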
Deep Leakage from Gradients
TLDR
This work shows that it is possible to obtain the private training data from the publicly shared gradients, names this leakage Deep Leakage from Gradients, and empirically validates its effectiveness on both computer vision and natural language processing tasks.
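A minimal PyTorch sketch of gradient matching in the spirit of this attack: given the gradient a client would share for one example, optimize dummy inputs and soft labels so their gradient matches. The tiny linear model and random victim data are stand-ins for illustration only.

```python
# Illustrative gradient-matching attack on a tiny linear classifier.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(20, 5)
criterion = torch.nn.CrossEntropyLoss()

# Victim's private example and the gradient it would share
x_real = torch.randn(1, 20)
y_real = torch.tensor([3])
loss = criterion(model(x_real), y_real)
real_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# Attacker: dummy data and label logits optimized to reproduce those gradients
x_dummy = torch.randn(1, 20, requires_grad=True)
y_dummy = torch.randn(1, 5, requires_grad=True)      # soft label logits
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    dummy_loss = torch.sum(-torch.softmax(y_dummy, dim=-1)
                           * torch.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    diff = sum(((dg - rg) ** 2).sum() for dg, rg in zip(dummy_grads, real_grads))
    diff.backward()
    return diff

for _ in range(20):
    optimizer.step(closure)

print("reconstruction error:", torch.norm(x_dummy.detach() - x_real).item())
```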
Federated Optimization: Distributed Optimization Beyond the Datacenter
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large number of nodes.
...