• Corpus ID: 219530984

Secure Byzantine-Robust Machine Learning

@article{He2020SecureBM,
  title={Secure Byzantine-Robust Machine Learning},
  author={Lie He and Sai Praneeth Karimireddy and Martin Jaggi},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.04747}
}
Increasingly, machine learning systems are being deployed to edge servers and devices (e.g. mobile phones) and trained in a collaborative manner. Such distributed/federated/decentralized training raises a number of concerns about the robustness, privacy, and security of the procedure. While extensive work has been done on robustness, privacy, or security individually, their combination has rarely been studied. In this paper, we propose a secure two-server protocol that offers both…
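The two-server idea the abstract alludes to can be illustrated with additive secret sharing: each client splits its update into two random shares, one per non-colluding server, so neither server alone learns the update, yet the servers' partial sums combine to the true aggregate. This is a minimal sketch with a hypothetical modulus and integer-encoded updates, not the paper's actual protocol:

```python
import random

MOD = 2**31 - 1  # hypothetical modulus for share arithmetic (illustration only)

def share(update):
    """Split an integer-encoded update into two additive shares mod MOD."""
    r = random.randrange(MOD)
    return r, (update - r) % MOD

def aggregate(client_updates):
    # Each server receives only one share per client.
    pairs = [share(u) for u in client_updates]
    sum0 = sum(s0 for s0, _ in pairs) % MOD  # server 0's partial sum
    sum1 = sum(s1 for _, s1 in pairs) % MOD  # server 1's partial sum
    # Combining the two partial sums recovers the true aggregate.
    return (sum0 + sum1) % MOD

print(aggregate([3, 5, 7]))  # 15
```

Each individual share is uniformly random, so a single server sees only noise; only the combined sums reveal the total.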


FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning
TLDR
This paper proposes a secure aggregation protocol, FastSecAgg, that is efficient in terms of computation and communication, and robust to client dropouts, and guarantees security against adaptive adversaries, which can perform client corruptions dynamically during the execution of the protocol.
Byzantine-Resilient Secure Federated Learning
TLDR
This paper presents the first single-server Byzantine-resilient secure aggregation framework (BREA) for secure federated learning, based on an integrated stochastic quantization, verifiable outlier detection, and secure model aggregation approach to guarantee Byzantine-resilience, privacy, and convergence simultaneously.
Robust Aggregation for Adaptive Privacy Preserving Federated Learning in Healthcare
TLDR
The results show that privacy preserving methods can be successfully applied alongside Byzantine-robust aggregation techniques in FL and show that such methods can detect and discard faulty or malicious local clients during training.
Privacy-Preserving and Personalized Federated Machine Learning for Medical Data (semester project, Machine Learning and Optimization Laboratory)
TLDR
In this report, two novel approaches to personalized cross-silo federated learning are introduced and discussed from a theoretical perspective: the adapted Ndoye factor, and the Weight Erosion aggregation scheme.
Weight Erosion: An Update Aggregation Scheme for Personalized Collaborative Machine Learning
TLDR
It is demonstrated that the novel Weight Erosion scheme can outperform two baseline FL aggregation schemes on a classification task, and is more resistant to over-fitting and non-IID data sets.
Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees
TLDR
It is remarked that the Byzantine-robust federated learning protocols with bucketing can be naturally combined with privacy-guaranteeing procedures to introduce security against a semi-honest server.
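The bucketing idea mentioned above can be conveyed with a toy sketch: shuffle the client updates, average them within small buckets, and feed the bucket means to a robust aggregator, so a single Byzantine update is diluted inside its bucket. Scalar updates and a plain median stand in for the cited protocol's details:

```python
import random
import statistics

def bucketing_aggregate(updates, bucket_size=2, seed=0):
    # Shuffle so Byzantine updates cannot target a specific bucket.
    rng = random.Random(seed)
    shuffled = updates[:]
    rng.shuffle(shuffled)
    # Average within buckets, then robustly aggregate the bucket means.
    buckets = [shuffled[i:i + bucket_size]
               for i in range(0, len(shuffled), bucket_size)]
    bucket_means = [sum(b) / len(b) for b in buckets]
    return statistics.median(bucket_means)

# One extreme Byzantine value has limited influence on the result.
print(bucketing_aggregate([1.0, 1.2, 0.9, 1.1, 1000.0, 1.0]))
```

With three buckets, at most one bucket mean is poisoned, and the median of the bucket means ignores it.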
Privacy-Preserving Aggregation in Federated Learning: A Survey
TLDR
This survey aims to fill the gap between the large number of studies on PPFL, where PPAgg is adopted to provide a privacy guarantee, and the lack of a comprehensive survey of the PPAgg protocols applied in FL systems.
LightSecAgg: a Lightweight and Versatile Design for Secure Aggregation in Federated Learning
TLDR
It is shown that LightSecAgg achieves the same privacy and dropout-resiliency guarantees as the state-of-the-art protocols while significantly reducing the overhead for resiliency against dropped users and can be applied to secure aggregation in the asynchronous FL setting.
Private Retrieval, Computing, and Learning: Recent Progress and Future Challenges
TLDR
The article motivates each privacy setting, describes the problem formulation, summarizes breakthrough results in the history of each problem, and gives recent results and discusses some of the major ideas that emerged in each field.
From Distributed Machine Learning to Federated Learning: A Survey
TLDR
This paper proposes a functional architecture of federated learning systems and a taxonomy of related techniques and presents four widely used federated systems based on the functional architecture.

References

SHOWING 1-10 OF 45 REFERENCES
Practical Secure Aggregation for Privacy-Preserving Machine Learning
TLDR
This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
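The "secure manner" in the summary rests on pairwise masking: each pair of clients derives a shared mask (via key agreement in the real protocol); one adds it, the other subtracts it, so all masks cancel in the server's sum. A toy sketch with scalar updates and a hypothetical seeded mask in place of key agreement:

```python
import itertools
import random

def masked_updates(updates, seed=0):
    """Return per-client masked updates whose pairwise masks cancel in the sum."""
    rng = random.Random(seed)  # stand-in for key-agreement-derived randomness
    masked = list(updates)
    for i, j in itertools.combinations(range(len(updates)), 2):
        m = rng.randrange(1000)
        masked[i] += m  # client i adds the shared mask
        masked[j] -= m  # client j subtracts it
    return masked

updates = [3, 5, 7]
masked = masked_updates(updates)
assert sum(masked) == sum(updates)  # masks cancel; server learns only the sum
```

The real protocol additionally handles client dropouts by secret-sharing the mask seeds, which this sketch omits.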
Secure Computation for Machine Learning With SPDZ
TLDR
This project investigates the efficiency of the SPDZ framework, which implements an MPC protocol with malicious security, in the context of popular machine learning algorithms, choosing applications such as linear regression and logistic regression that had previously been implemented and evaluated using semi-honest MPC techniques.
SecureML: A System for Scalable Privacy-Preserving Machine Learning
TLDR
This paper presents new and efficient protocols for privacy preserving machine learning for linear regression, logistic regression and neural network training using the stochastic gradient descent method, and implements the first privacy preserving system for training neural networks.
Robust Aggregation for Federated Learning
TLDR
The experiments show that RFA is competitive with the classical aggregation when the level of corruption is low, while demonstrating greater robustness under high corruption, and establishes the convergence of the robust federated learning algorithm for the stochastic learning of additive models with least squares.
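RFA's robust aggregate is built on a (smoothed) geometric median; the plain Weiszfeld iteration below conveys the idea, though it omits RFA's smoothing and client weighting. A minimal sketch over list-of-lists points:

```python
def geometric_median(points, iters=100, eps=1e-8):
    """Approximate the geometric median via Weiszfeld's fixed-point iteration."""
    dim = len(points[0])
    # Initialize at the coordinate-wise mean.
    z = [sum(p[k] for p in points) / len(points) for k in range(dim)]
    for _ in range(iters):
        # Weight each point by the inverse of its distance to the estimate.
        weights = []
        for p in points:
            d = sum((p[k] - z[k]) ** 2 for k in range(dim)) ** 0.5
            weights.append(1.0 / max(d, eps))
        total = sum(weights)
        z = [sum(w * p[k] for w, p in zip(weights, points)) / total
             for k in range(dim)]
    return z

# An outlier barely moves the geometric median, unlike the mean.
pts = [[0.0], [1.0], [2.0], [3.0], [100.0]]
print(geometric_median(pts))  # converges near [2.0]
```

Unlike the mean (21.2 here), the geometric median stays near the honest cluster, which is what gives median-style aggregation its robustness under corruption.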
Robust Federated Learning in a Heterogeneous Environment
TLDR
A general statistical model is proposed which takes both the cluster structure of the users and the Byzantine machines into account and proves statistical guarantees for an outlier-robust clustering algorithm, which can be considered as the Lloyd algorithm with robust estimation.
Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging
TLDR
This paper introduces Adaptive Federated Averaging, a novel algorithm for robust federated learning that is designed to detect failures, attacks, and bad updates provided by participants in a collaborative model, and proposes a Hidden Markov Model to model and learn the quality of model updates provided by each participant during training.
DeepSecure: Scalable Provably-Secure Deep Learning
TLDR
The DeepSecure framework is the first to empower accurate and scalable DL analysis of data generated by distributed clients without sacrificing security for efficiency, and it introduces a set of novel low-overhead pre-processing techniques which further reduce the overall GC runtime in the context of DL.
Advances and Open Problems in Federated Learning
TLDR
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
NIKE-based Fast Privacy-preserving High-dimensional Data Aggregation for Mobile Devices
TLDR
This paper presents a non-interactive pairwise key generation scheme for mobile users, where the non-interactivity among users is achieved by outsourcing the keying-material generation task to two non-colluding cryptographic secret providers, and designs an efficient aggregate-sum scheme that has low communication and computation overheads and the failure-robust property.
Partially Encrypted Machine Learning using Functional Encryption
TLDR
A practical framework for partially encrypted and privacy-preserving predictions, combining adversarial training and functional encryption, is proposed, along with a training method that prevents selected sensitive features from leaking by adversarially optimizing the network against an adversary trying to identify those features.