DeTrust-FL: Privacy-Preserving Federated Learning in Decentralized Trust Setting

@article{Xu2022DeTrustFLPF,
  title={DeTrust-FL: Privacy-Preserving Federated Learning in Decentralized Trust Setting},
  author={Runhua Xu and Nathalie Baracaldo and Yi Zhou and Ali Anwar and Swanand Kadhe and Heiko Ludwig},
  journal={2022 IEEE 15th International Conference on Cloud Computing (CLOUD)},
  year={2022},
  pages={417-426}
}
Federated learning has emerged as a privacy-preserving machine learning approach in which multiple parties can train a single model without sharing their raw training data. It typically relies on multi-party computation techniques to provide strong privacy guarantees, ensuring that an untrusted or curious aggregator cannot obtain isolated replies from the parties involved in training, thereby preventing potential inference attacks. Until recently, it was…
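
To make the setting concrete, here is a minimal federated-averaging sketch in which a `secure_sum` placeholder stands in for the multi-party computation step; the three-party toy data, the least-squares objective, and all names are illustrative assumptions, not the paper's actual protocol.

```python
import numpy as np

def local_update(w, data, lr=0.1):
    # one local least-squares gradient step; stands in for real training
    X, y = data
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def secure_sum(vectors):
    # placeholder for the MPC/secure-aggregation step: the aggregator is
    # assumed to learn only this sum, never any party's individual vector
    return np.sum(vectors, axis=0)

rng = np.random.default_rng(0)
parties = [(rng.normal(size=(20, 2)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(2)
for _ in range(10):                          # federated rounds
    updates = [local_update(w, d) for d in parties]
    w = secure_sum(updates) / len(parties)   # FedAvg over protected inputs
```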

References

Showing 1–10 of 28 references.

A Hybrid Approach to Privacy-Preserving Federated Learning

This paper presents an alternative approach that utilizes both differential privacy and SMC to balance their trade-offs, enabling the noise each party injects to shrink as the number of parties increases, without sacrificing privacy, while maintaining a pre-defined rate of trust.
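
The arithmetic behind that claim is simple: if a secure sum hides individual replies and at least t parties honestly add noise, each needs to add only a 1/sqrt(t) fraction of the standard deviation the DP analysis requires on the aggregate. A minimal sketch with illustrative numbers (the paper's actual calibration and trust-rate definition differ in detail):

```python
import numpy as np

def per_party_noise_std(target_std, t_honest):
    # With a secure sum hiding individual replies, each of t honest parties
    # can add N(0, target_std**2 / t) noise: variances add, so the aggregate
    # still carries the full N(0, target_std**2) noise that DP requires.
    return target_std / np.sqrt(t_honest)

target = 4.0  # noise std required on the aggregate by the DP analysis
for t in (5, 50, 500):
    s = per_party_noise_std(target, t)
    print(f"t={t:3d}  per-party std={s:.3f}  aggregate std={np.sqrt(t)*s:.1f}")
```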

FedV: Privacy-Preserving Federated Learning over Vertically Partitioned Data

FedV is proposed, a framework for secure gradient computation in vertical settings for several widely used ML models, such as linear models, logistic regression, and support vector machines; it removes the need for peer-to-peer communication among parties by using functional encryption schemes and works for larger and changing sets of parties.
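
A toy sketch of the inner-product functional-encryption idea behind this design: the aggregator recovers the full inner product of weights with a vertically split sample without any peer-to-peer exchange. The closure-based "ciphertext" below is an insecure stand-in, and the feature split and all values are hypothetical.

```python
import numpy as np

class MockInnerProductFE:
    # Toy stand-in for an inner-product functional-encryption scheme: a
    # real scheme encrypts x so that a key issued for y reveals only <x, y>;
    # here the "ciphertext" is just a closure hiding x (NOT secure).
    def encrypt(self, x):
        x = np.asarray(x, dtype=float)
        return lambda y: float(x @ np.asarray(y, dtype=float))

fe = MockInnerProductFE()
# hypothetical vertical split of one sample's features across two parties
ct_a = fe.encrypt([0.5, 1.0])      # party A holds features 0-1
ct_b = fe.encrypt([2.0, -0.5])     # party B holds features 2-3
w_a, w_b = [0.1, 0.2], [0.3, 0.4]  # aggregator's matching weight slices
wx = ct_a(w_a) + ct_b(w_b)         # full <w, x> with no peer-to-peer exchange
```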

Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix

The attack enables the attribution of learned properties to individual users, violating anonymity, and shows that a determined central server may undermine the secure aggregation protocol to break individual users’ data privacy in federated learning.
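
The linear-algebra core of the attack is easy to reproduce: if the server observes aggregate updates across rounds and knows the participation matrix, least squares disaggregates them. A simplified sketch assuming fixed per-user updates; the paper additionally reconstructs the participation matrix itself and handles noisy, varying updates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, dim, rounds = 8, 5, 40
U = rng.normal(size=(n_users, dim))       # each user's (assumed fixed) update
P = rng.integers(0, 2, size=(rounds, n_users)).astype(float)  # participation
A = P @ U    # all the server should see: per-round secure-aggregation sums

# with enough rounds, P has full column rank and least squares inverts it
U_hat = np.linalg.lstsq(P, A, rcond=None)[0]
print(np.allclose(U_hat, U))              # True: individual updates recovered
```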

HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning

Evaluation against existing crypto-based SMC solutions shows that HybridAlpha can reduce the training time and data volume exchanged using a federated learning process to train a CNN on the MNIST data set while providing the same model performance and privacy guarantees as the existing solutions.

FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning

This paper proposes FastSecAgg, a secure aggregation protocol that is efficient in computation and communication, robust to client dropouts, and secure against adaptive adversaries that can corrupt clients dynamically during protocol execution.
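
FastSecAgg builds on secret sharing; the sketch below shows the plain Shamir-based version of the idea, where pointwise-added shares reconstruct only the sum and the threshold tolerates dropouts. The paper's actual scheme uses a faster FFT-based multi-secret variant; values here are illustrative.

```python
import random

P = 2**31 - 1  # prime field modulus

def share(secret, n, t):
    # Shamir t-out-of-n sharing: secret is the constant term of a random
    # degree-(t-1) polynomial, shares are evaluations at x = 1..n
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over any t shares
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# each of 5 clients shares its (integer-encoded) update; threshold t=3
secrets = [7, 11, 13, 17, 19]
all_shares = [share(s, n=5, t=3) for s in secrets]
# holders add shares pointwise -> shares of the SUM, not of any single input
sum_shares = [(x, sum(sh[i][1] for sh in all_shares) % P)
              for i, (x, _) in enumerate(all_shares[0])]
# any 3 holders suffice, so up to 2 dropouts are tolerated
assert reconstruct(sum_shares[:3]) == sum(secrets) % P
```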

Secure and Efficient Federated Transfer Learning

This work aims to enhance the efficiency and security of existing models for practical collaborative training under a data federation by incorporating secret sharing (SS); it improves upon the previous solution and extends security to malicious players who can arbitrarily deviate from the protocol in the FTL model.

Boosting Privately: Privacy-Preserving Federated Extreme Boosting for Mobile Crowdsensing

A secret-sharing-based federated extreme gradient boosting framework (FedXGB) achieves privacy-preserving model training for mobile crowdsensing; it is secure in the honest-but-curious model and attains accuracy and convergence rate comparable to the original model at low runtime.
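
Why secret sharing suffices for boosting: split selection in XGBoost needs only summed gradient and Hessian statistics, so securely aggregated sums are enough. A minimal sketch of the standard XGBoost gain formula over such aggregates; the function name and numbers are illustrative, not FedXGB's API.

```python
def split_gain(GL, HL, GR, HR, lam=1.0, gamma=0.0):
    # XGBoost split gain from summed gradients (G) and Hessians (H) on each
    # side of a candidate split. In a FedXGB-style protocol these four sums
    # would arrive via secret-sharing-based secure aggregation, so no
    # party's individual statistics are ever exposed.
    def score(G, H):
        return G * G / (H + lam)
    return 0.5 * (score(GL, HL) + score(GR, HR)
                  - score(GL + GR, HL + HR)) - gamma

# hypothetical statistics aggregated across three parties
print(split_gain(GL=-4.2, HL=3.0, GR=5.1, HR=2.5))
```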

Practical Secure Aggregation for Privacy-Preserving Machine Learning

This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
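
The cancellation trick at the heart of this protocol fits in a few lines: each pair of users agrees on a mask that one adds and the other subtracts, so individual masked vectors look random while the sum is exact. This toy omits the protocol's key agreement, secret-shared seeds, and dropout recovery.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 4, 3
updates = [rng.normal(size=d) for _ in range(n)]

# each ordered pair (u < v) agrees on a mask (in practice via key agreement
# seeding a PRG); u adds it, v subtracts it, so masks cancel in the sum
pair_masks = {(u, v): rng.normal(size=d)
              for u in range(n) for v in range(u + 1, n)}

def masked(u):
    y = updates[u].copy()
    for (a, b), m in pair_masks.items():
        if a == u: y += m
        if b == u: y -= m
    return y   # looks random to the server on its own

agg = sum(masked(u) for u in range(n))
assert np.allclose(agg, sum(updates))   # masks cancel; only the sum remains
```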

Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks

It is shown that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset.
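
For intuition, a much simpler black-box relative of the attacks analyzed here is the loss-threshold membership test: training-set members tend to have lower loss. The sketch below uses made-up losses and is not the paper's white-box method.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    # predict "member" when the model's loss on a sample is below a
    # threshold; a crude black-box proxy for membership inference
    return losses < threshold

# hypothetical per-sample losses
train_losses = np.array([0.05, 0.10, 0.08, 0.20])   # seen during training
test_losses  = np.array([0.90, 0.40, 1.30, 0.70])   # held out
preds = loss_threshold_mia(np.concatenate([train_losses, test_losses]), 0.3)
truth = np.array([1]*4 + [0]*4, dtype=bool)
print("attack accuracy:", (preds == truth).mean())
```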

Revisiting Secure Computation Using Functional Encryption: Opportunities and Research Directions

Runhua Xu and J. Joshi. In 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA), 2020.
This paper revisits the secure computation problem using emerging and promising functional encryption techniques, presents a comprehensive study, elaborates on the unique characteristics and challenges of functional-encryption-based secure computation approaches, and outlines several research directions.
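
The workflow these approaches share can be pinned down as a small interface: Setup, Encrypt(x), KeyGen(f), and Decrypt(key_f, ct) = f(x), where decryption reveals only f(x). A toy, intentionally insecure skeleton with illustrative names:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FunctionKey:
    f: Callable   # a real scheme issues a key bound to f, not f itself

class ToyFE:
    # Skeleton of the functional-encryption workflow: NOT secure, it only
    # fixes the interface that FE-based secure computation builds on.
    def encrypt(self, x):
        return {"ct": x}                 # stand-in ciphertext
    def keygen(self, f):
        return FunctionKey(f)
    def decrypt(self, key, ct):
        return key.f(ct["ct"])           # holder learns f(x) and nothing else

fe = ToyFE()
ct = fe.encrypt([1.0, 2.0, 3.0])
key_sum = fe.keygen(sum)                 # authority authorizes f = sum
print(fe.decrypt(key_sum, ct))           # 6.0, ideally without revealing x
```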