PPFL: privacy-preserving federated learning with trusted execution environments

@article{Mo2021PPFLPF,
  title={PPFL: privacy-preserving federated learning with trusted execution environments},
  author={Fan Mo and Hamed Haddadi and Kleomenis Katevas and Eduard Marin and Diego Perino and Nicolas Kourtellis},
  journal={Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services},
  year={2021}
}
We propose and implement a Privacy-preserving Federated Learning (PPFL) framework for mobile systems to limit privacy leakages in federated learning. Leveraging the widespread presence of Trusted Execution Environments (TEEs) in high-end and mobile devices, we utilize TEEs on clients for local training, and on servers for secure aggregation, so that model/gradient updates are hidden from adversaries. Challenged by the limited memory size of current TEEs, we leverage greedy layer-wise training…

Citations

Separation of Powers in Federated Learning
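The abstract's core design can be illustrated concretely: because a TEE's secure memory is small, only one layer (plus a small auxiliary classifier head) is trained at a time, with all earlier layers frozen. Below is a minimal PyTorch sketch of such a greedy layer-wise loop; the model architecture, auxiliary head, toy data, and schedule are illustrative assumptions, not the PPFL implementation.

```python
# Hedged sketch of greedy layer-wise training under a TEE memory budget:
# only the layer currently being trained, plus a small auxiliary head, needs
# to fit in secure memory; earlier layers are frozen feature extractors.
import torch
import torch.nn as nn
import torch.nn.functional as F

layers = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU()),
])

def make_head(k, sample):
    # Auxiliary classifier attached to layer k's output (shape found by a dry run).
    with torch.no_grad():
        h = sample
        for layer in layers[:k + 1]:
            h = layer(h)
    return nn.Linear(h.flatten(1).shape[1], 10)

def train_layer(k, loader, epochs=1, lr=0.01):
    """Train layer k (and its auxiliary head) with all earlier layers frozen."""
    head = make_head(k, loader[0][0])
    opt = torch.optim.SGD(list(layers[k].parameters()) + list(head.parameters()), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():      # frozen prefix: could run outside the TEE
                for layer in layers[:k]:
                    x = layer(x)
            h = layers[k](x)           # the one trainable layer: inside the TEE
            loss = F.cross_entropy(head(h.flatten(1)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Toy data standing in for a client's local dataset; in PPFL each layer's
# training would itself be carried out over federated learning rounds.
data = [(torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))) for _ in range(5)]
for k in range(len(layers)):           # greedy schedule: one layer at a time
    train_layer(k, data)
```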
TLDR
Truda, a new cross-silo FL system, is introduced; it employs a trustworthy and decentralized aggregation architecture to break down information concentration around a single aggregator and can fundamentally mitigate training-data reconstruction attacks.
Quantifying Information Leakage from Gradients
TLDR
This work first uses an adaptation of the empirical V-information to present an information-theoretic justification for the attack success rates in a layer-wise manner, and proposes more general and efficient metrics, using sensitivity and subspace distance to quantify the gradient changes w.r.t. original and latent information, respectively.
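One way to picture the "sensitivity" idea mentioned above (quantifying gradient changes with respect to the original information) is to measure, per layer, how much the shared gradient moves when the input is slightly perturbed. The sketch below is only an approximation of that intuition; the paper's actual metrics (empirical V-information, sensitivity, subspace distance) are defined differently.

```python
# Illustrative approximation (not the paper's exact metric): measure, per
# parameter tensor, how much the gradient changes when the input is perturbed.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

def gradients(x, y):
    loss = F.cross_entropy(model(x), y)
    return torch.autograd.grad(loss, list(model.parameters()))

def per_layer_sensitivity(x, y, eps=1e-2):
    g = gradients(x, y)
    g_pert = gradients(x + eps * torch.randn_like(x), y)
    # Relative gradient change per parameter tensor: larger values suggest the
    # corresponding layer's gradient is more responsive to the private input.
    return [((a - b).norm() / (a.norm() + 1e-12)).item() for a, b in zip(g, g_pert)]

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
print(per_layer_sensitivity(x, y))
```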
The Rise and Fall of Fake News sites: A Traffic Analysis
TLDR
This paper builds a content-agnostic ML classifier for the automatic detection of fake news websites that are not yet included in manually curated blacklists (F1 score up to 0.942 and AUC of ROC up to 0.976).
Privacy-Preserving Machine Learning: Methods, Challenges and Directions
TLDR
A PGU model is proposed to guide evaluation for various PPML solutions through elaborately decomposing their privacy-preserving functionalities and is designed as the triad of Phase, Guarantee, and technical Utility.
DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware
  • Hanieh Hashemi, Yongqin Wang, Murali Annavaram
  • 2021
Privacy and security-related concerns are growing as machine learning reaches diverse application domains. The data holders want to train or infer with private data while exploiting accelerators…
Inter-operability and Orchestration in Heterogeneous Cloud/Edge Resources: The ACCORDION Vision
TLDR
ACCORDION is introduced, a novel framework for the management of the cloud-edge continuum targeting the support of NextGen applications with strong QoE requirements, and the main pillars that support it are discussed.
Quantifying and Localizing Private Information Leakage from Neural Network Gradients
TLDR
This paper presents an adaptation of the V-information, which generalizes the empirical attack success rate and allows quantifying the amount of information that can leak from any chosen family of attack models, and proposes attack-independent measures, that only require the shared gradients, for quantifying both original and latent information leakages.

References

Showing 1–10 of 81 references
Enabling Execution Assurance of Federated Learning at Untrusted Participants
TLDR
This paper proposes TrustFL, a practical scheme that leverages Trusted Execution Environments (TEEs) to build assurance of participants' training executions with high confidence, and prototypes TrustFL using GPU and SGX, evaluating its performance.
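The entry above describes building assurance that clients really performed training. A common pattern for this kind of assurance, shown generically below, is commit-then-spot-check: the untrusted trainer logs a hash chain of per-iteration states and a verifier (e.g., code running inside a TEE) re-executes a random sample of iterations. This is a hedged illustration of the general pattern, not TrustFL's actual protocol; all names and the toy training step are made up.

```python
# Generic commit-then-spot-check sketch (an assumption, not TrustFL's protocol).
import hashlib
import random

def digest(*parts):
    return hashlib.sha256(b"|".join(str(p).encode() for p in parts)).hexdigest()

def train_step(state, batch):
    # Stand-in for one SGD iteration; deterministic given (state, batch).
    return state + sum(batch) * 0.01

def run_training(batches, state=0.0):
    log, chain = [], "genesis"
    for i, batch in enumerate(batches):
        new_state = train_step(state, batch)
        chain = digest(chain, i, state, batch, new_state)  # commitment to this step
        log.append((i, state, batch, new_state, chain))
        state = new_state
    return state, log

def spot_check(log, samples=3):
    # Verifier re-executes a random subset of logged iterations.
    for i, state, batch, claimed, _ in random.sample(log, min(samples, len(log))):
        if abs(train_step(state, batch) - claimed) > 1e-9:
            return False
    return True

final_state, log = run_training([[1, 2], [3, 4], [5, 6], [7, 8]])
print(final_state, spot_check(log))
```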
Efficient and Private Federated Learning using TEE
Extended abstract. Federated Learning has received great attention since it enables edge devices to collaboratively train shared or personal models while keeping the raw training data local…
Differentially Private Federated Learning: A Client Level Perspective
TLDR
The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance, and empirical studies suggest that given a sufficiently large number of participating clients, this procedure can maintain client-level differential privacy at only a minor cost in model performance.
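Client-level differential privacy of the kind summarized above is typically obtained by clipping each client's update to a fixed norm and adding Gaussian noise to the aggregate. The NumPy sketch below shows that pattern; the clipping bound and noise multiplier are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of client-level DP aggregation: clip each client's update to
# a fixed L2 norm, average, then add Gaussian noise calibrated to that bound.
import numpy as np

def clip_update(update, clip_norm=1.0):
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

updates = [np.random.randn(100) for _ in range(50)]   # one update per client
noisy_global_update = dp_aggregate(updates)
```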
Practical Secure Aggregation for Privacy-Preserving Machine Learning
TLDR
This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
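The core trick behind such secure aggregation can be illustrated with pairwise random masks that cancel in the server's sum; the toy sketch below shows only this cancellation and omits the key agreement, double masking, and dropout recovery of the real protocol.

```python
# Toy illustration of mask-based secure aggregation: each ordered pair of
# clients shares a random mask that one adds and the other subtracts, so the
# masks cancel in the server's sum while individual updates stay hidden.
import numpy as np

def masked_updates(updates, seed=0):
    rng = np.random.default_rng(seed)
    n, dim = len(updates), updates[0].shape[0]
    masked = [u.copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=dim)   # in practice derived from a shared key
            masked[i] += mask             # client i adds the pairwise mask
            masked[j] -= mask             # client j subtracts the same mask
    return masked

updates = [np.random.default_rng(k).normal(size=5) for k in range(4)]
server_sum = np.sum(masked_updates(updates), axis=0)
assert np.allclose(server_sum, np.sum(updates, axis=0))   # masks cancel out
```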
DarkneTZ: towards model privacy at the edge using trusted execution environments
We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs).
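The partitioning idea behind DarkneTZ (keep the most privacy-sensitive final layers inside the TEE and run the rest outside) can be sketched in a few lines; the model and split point below are illustrative assumptions, not the framework's implementation.

```python
# Illustrative partitioning of a DNN between the normal world and a TEE:
# early layers run outside, the final (most sensitive) layers would execute
# inside the trusted environment. The split point here is an assumption.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Flatten(),
    nn.Linear(16 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 10),
)

split = 3                       # layers [0, split) outside, [split, end) in the TEE
outside_tee = model[:split]     # runs in the rich OS / on the accelerator
inside_tee = model[split:]      # would be loaded and executed inside the TEE

def forward(x):
    h = outside_tee(x)          # untrusted part of the forward pass
    return inside_tee(h)        # trusted part of the forward pass (inside the TEE)

logits = forward(torch.randn(2, 3, 32, 32))
```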
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
TLDR
This work performs the first systematic study on local model poisoning attacks to federated learning, assuming an attacker has compromised some client devices, and the attacker manipulates the local model parameters on the compromised client devices during the learning process such that the global model has a large testing error rate.
How To Backdoor Federated Learning
TLDR
This work designs and evaluates a new model-poisoning methodology based on model replacement and demonstrates that any participant in federated learning can introduce hidden backdoor functionality into the joint global model, e.g., to ensure that an image classifier assigns an attacker-chosen label to images with certain features.
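The model-replacement trick summarized above exploits the averaging step of federated learning: by scaling its submission, a single attacker can make the post-aggregation global model land approximately on its backdoored weights. The toy NumPy illustration below assumes plain averaging of client models and near-converged benign clients; all constants are illustrative.

```python
# Toy illustration of model replacement under plain model averaging:
# a single attacker scales its backdoored model so that the average of all
# submissions lands (approximately) on the backdoored weights.
import numpy as np

rng = np.random.default_rng(0)
n = 10                                    # clients selected in this round
global_model = rng.normal(size=20)        # current global weights (toy vector)
backdoored = global_model + rng.normal(scale=0.5, size=20)   # attacker's target

# Benign clients: small honest updates around the current global model.
benign = [global_model + rng.normal(scale=0.01, size=20) for _ in range(n - 1)]

# Attacker's submission: scale the deviation so averaging undoes the 1/n factor.
malicious = n * (backdoored - global_model) + global_model

new_global = np.mean(benign + [malicious], axis=0)
print(np.linalg.norm(new_global - backdoored))   # small: global ≈ backdoored model
```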
Layer-wise Characterization of Latent Information Leakage in Federated Learning
TLDR
Two new metrics are proposed that can localize the private information in each layer of a DNN and quantify the leakage depending on how sensitive the gradients are with respect to the latent information; LatenTZ, a federated learning framework that lets the most sensitive layers run in the clients' Trusted Execution Environments (TEEs), is also designed.
Can You Really Backdoor Federated Learning?
TLDR
This paper conducts a comprehensive study of backdoor attacks and defenses for the EMNIST dataset, a real-life, user-partitioned, and non-iid dataset, and shows that norm clipping and "weak" differential privacy mitigate the attacks without hurting the overall performance.
FLaaS: Federated Learning as a Service
TLDR
Federated Learning as a Service (FLaaS) is presented, a system enabling different scenarios of 3rd-party application collaborative model building and addressing the consequent challenges of permission and privacy management, usability, and hierarchical model training; FLaaS can be deployed in different operational environments.