Corpus ID: 219530984

Secure Byzantine-Robust Machine Learning

@article{He2020SecureBM,
  title={Secure Byzantine-Robust Machine Learning},
  author={Lie He and Sai Praneeth Karimireddy and Martin Jaggi},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.04747}
}
Increasingly, machine learning systems are being deployed to edge servers and devices (e.g., mobile phones) and trained in a collaborative manner. Such distributed/federated/decentralized training raises a number of concerns about the robustness, privacy, and security of the procedure. While extensive work has been done on tackling robustness, privacy, and security individually, their combination has rarely been studied. In this paper, we propose a secure two-server protocol that offers both…
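As a rough illustration of the two-server idea, the sketch below shows plain additive secret sharing of client updates between two non-colluding servers, so neither server sees any individual update and only the sum is reconstructed. The function names and the field modulus are illustrative, and the paper's actual protocol layers Byzantine-robust aggregation on top of this primitive.

```python
# Minimal sketch of two-server aggregation via additive secret sharing.
# Illustrative only; not the paper's full Byzantine-robust protocol.
import numpy as np

MODULUS = 2**31 - 1  # illustrative prime modulus for the finite field

def share(update, rng):
    """Split an integer-encoded update into two additive shares."""
    s1 = rng.integers(0, MODULUS, size=update.shape, dtype=np.int64)
    s2 = (update - s1) % MODULUS
    return s1, s2  # send s1 to server 1 and s2 to server 2

def aggregate(shares):
    """Each server sums the shares it received, independently."""
    return sum(shares) % MODULUS

def reconstruct(agg1, agg2):
    """Combining the two aggregated shares reveals only the sum."""
    return (agg1 + agg2) % MODULUS

rng = np.random.default_rng(0)
updates = [rng.integers(0, 100, size=4, dtype=np.int64) for _ in range(3)]
shares = [share(u, rng) for u in updates]
agg1 = aggregate([s1 for s1, _ in shares])
agg2 = aggregate([s2 for _, s2 in shares])
assert np.array_equal(reconstruct(agg1, agg2), sum(updates) % MODULUS)
```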


FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning
TLDR
This paper proposes FastSecAgg, a secure aggregation protocol that is efficient in computation and communication, robust to client dropouts, and secure against adaptive adversaries that can corrupt clients dynamically during the execution of the protocol.
Robust Aggregation for Adaptive Privacy Preserving Federated Learning in Healthcare
TLDR
The results show that privacy-preserving methods can be successfully applied alongside Byzantine-robust aggregation techniques in FL, and that such methods can detect and discard faulty or malicious local clients during training.
Privacy-preserving and Personalized Federated Machine Learning for Medical Data (semester project, Machine Learning and Optimization Laboratory)
TLDR
In this report, two novel approaches to personalized cross-silo federated learning are introduced and discussed from a theoretical perspective: the adapted Ndoye factor, and the Weight Erosion aggregation scheme.
Weight Erosion: An Update Aggregation Scheme for Personalized Collaborative Machine Learning
TLDR
It is demonstrated that the novel Weight Erosion scheme can outperform two baseline FL aggregation schemes on a classification task and is more resistant to over-fitting and to non-IID data sets.
From Distributed Machine Learning to Federated Learning: A Survey
TLDR
This paper proposes a functional architecture of federated learning systems and a taxonomy of related techniques, and presents four widely used federated systems based on this architecture.
Privacy-Preserving Aggregation in Federated Learning: A Survey
TLDR
This survey aims to fill the gap between the large number of studies on privacy-preserving federated learning (PPFL), in which privacy-preserving aggregation (PPAgg) is adopted to provide a privacy guarantee, and the lack of a comprehensive survey on the PPAgg protocols applied in FL systems.
Private Retrieval, Computing, and Learning: Recent Progress and Future Challenges
TLDR
The article motivates each privacy setting, describes the problem formulation, summarizes breakthrough results in the history of each problem, presents recent results, and discusses some of the major ideas that emerged in each field.
A Field Guide to Federated Optimization
TLDR
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms through concrete examples and practical implementation, with a focus on conducting effective simulations to infer real-world performance.
A Survey on Fault-tolerance in Distributed Optimization and Machine Learning
Shuo Liu, ArXiv, 2021
TLDR
This survey investigates the current state of fault-tolerance research in distributed optimization and aims to provide an overview of the existing studies on both fault-tolerant distributed optimization theory and applicable algorithms.
Breaking the centralized barrier for cross-device federated learning
TLDR
This work proposes a general algorithmic framework, MIME, which mitigates client drift and adapts arbitrary centralized optimization algorithms, such as momentum and Adam, to the cross-device federated learning setting, and shows that MIME is provably faster than any centralized method.
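As a loose illustration of the idea summarized above, the sketch below keeps the optimizer statistics (here, momentum) at the server and applies them unchanged during local client steps, so that local updates mimic centralized ones. Every name and the exact update rule are simplifying assumptions, not MIME's precise algorithm.

```python
# Hedged sketch: server-held momentum applied (frozen) in local steps.
import numpy as np

def client_steps(w, m, grads, lr=0.1, beta=0.9):
    """Local steps that use the server's momentum m without updating it."""
    for g in grads:
        w = w - lr * ((1 - beta) * g + beta * m)
    return w

def server_round(w, m, clients, grad_fn, beta=0.9, local_steps=3):
    """Refresh momentum with a global gradient estimate, then let each
    client run local steps with the frozen statistics."""
    global_grad = np.mean([grad_fn(c, w) for c in clients], axis=0)
    m = beta * m + (1 - beta) * global_grad
    new_ws = [client_steps(w, m, [grad_fn(c, w)] * local_steps)
              for c in clients]
    return np.mean(new_ws, axis=0), m

# Toy quadratic objectives: client c pulls w toward its own target.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
grad_fn = lambda c, w: w - targets[c]
w, m = np.zeros(2), np.zeros(2)
for _ in range(20):
    w, m = server_round(w, m, [0, 1], grad_fn)
print(w)  # moves toward the average of the client targets, [0.5, 0.5]
```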

References

Showing 1-10 of 45 references
Practical Secure Aggregation for Privacy-Preserving Machine Learning
TLDR
This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
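The core trick behind such protocols is pairwise masking: every pair of users derives a common mask that one adds and the other subtracts, so all masks cancel in the server's sum while each individual upload looks random. A minimal sketch, assuming a toy seed-derivation function in place of real key agreement and omitting the protocol's dropout recovery:

```python
# Sketch of pairwise-masking secure aggregation (illustrative only).
import numpy as np

# Pairwise seed both parties can derive; a stand-in for a real shared key.
pair_seed = lambda i, j: hash((min(i, j), max(i, j))) & 0xFFFFFFFF

def masked_update(i, update, users, dim):
    """User i adds one mask per peer; masks cancel pairwise in the sum."""
    masked = update.astype(np.int64)
    for j in users:
        if j == i:
            continue
        rng = np.random.default_rng(pair_seed(i, j))
        mask = rng.integers(0, 1000, size=dim, dtype=np.int64)
        masked += mask if i < j else -mask  # opposite signs cancel
    return masked

users, dim = [0, 1, 2], 4
updates = {i: np.arange(dim) + i for i in users}
server_sum = sum(masked_update(i, updates[i], users, dim) for i in users)
assert np.array_equal(server_sum, sum(updates.values()))  # masks cancelled
```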
Secure Computation for Machine Learning With SPDZ
TLDR
This project investigates the efficiency of the SPDZ framework, which provides an implementation of an MPC protocol with malicious security, in the context of popular machine learning algorithms, choosing applications such as linear regression and logistic regression that had previously been implemented and evaluated using semi-honest MPC techniques.
SecureML: A System for Scalable Privacy-Preserving Machine Learning
TLDR
This paper presents new and efficient protocols for privacy-preserving machine learning for linear regression, logistic regression, and neural network training using the stochastic gradient descent method, and implements the first privacy-preserving system for training neural networks.
Robust Aggregation for Federated Learning
TLDR
The experiments show that RFA is competitive with classical aggregation when the level of corruption is low while demonstrating greater robustness under high corruption, and the paper establishes the convergence of the robust federated learning algorithm for the stochastic learning of additive models with least squares.
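RFA's aggregation rule is built around an approximate geometric median of the client updates, which down-weights outliers automatically. A minimal sketch of the smoothed Weiszfeld iteration used for this kind of aggregation (parameter choices are illustrative):

```python
# Approximate geometric median via the smoothed Weiszfeld algorithm.
import numpy as np

def geometric_median(points, weights=None, iters=10, eps=1e-6):
    """Iteratively reweighted average converging to the geometric median."""
    pts = np.asarray(points, dtype=float)
    w = np.ones(len(pts)) if weights is None else np.asarray(weights, float)
    median = np.average(pts, axis=0, weights=w)  # start from the mean
    for _ in range(iters):
        # eps smooths the weights and avoids division by zero.
        dists = np.maximum(np.linalg.norm(pts - median, axis=1), eps)
        alpha = w / dists
        median = (alpha[:, None] * pts).sum(axis=0) / alpha.sum()
    return median

# One honest cluster plus a gross outlier: the median stays near the cluster.
updates = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [100.0, -100.0]])
print(geometric_median(updates))  # close to [1, 1], unlike the mean
```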
Robust Federated Learning in a Heterogeneous Environment
TLDR
A general statistical model is proposed which takes both the cluster structure of the users and the Byzantine machines into account, and statistical guarantees are proved for an outlier-robust clustering algorithm that can be viewed as the Lloyd algorithm with robust estimation.
Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging
TLDR
This paper introduces Adaptive Federated Averaging, a novel algorithm for robust federated learning designed to detect failures, attacks, and bad updates provided by participants in a collaborative model, and proposes a hidden Markov model to model and learn the quality of model updates provided by each participant during training.
DeepSecure: Scalable Provably-Secure Deep Learning
TLDR
The DeepSecure framework is the first to enable accurate and scalable DL analysis of data generated by distributed clients without sacrificing security for efficiency, and introduces a set of novel low-overhead pre-processing techniques that further reduce the overall garbled-circuit (GC) runtime in the context of DL.
Advances and Open Problems in Federated Learning
TLDR
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
NIKE-based Fast Privacy-preserving High-dimensional Data Aggregation for Mobile Devices
TLDR
This paper presents a non-interactive pairwise key generation scheme for mobile users, where the non-interactivity among users is achieved by outsourcing the keying-material generation task to two non-colluding cryptographic secret providers, and designs an efficient aggregate-sum scheme that has low communication and computation overheads and is robust to failures.
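A toy sketch of the non-interactive pairwise key agreement (Diffie-Hellman style) that such schemes build on: once each user's public value is published, any pair can derive a common key without further interaction. The parameters below are illustrative and far too small to be secure:

```python
# Toy non-interactive pairwise key agreement (NOT cryptographically secure).
P, G = 2**61 - 1, 5  # illustrative prime modulus and generator

def keypair(secret):
    """Return (private, public) where public = G^secret mod P."""
    return secret, pow(G, secret, P)

a_priv, a_pub = keypair(123456789)
b_priv, b_pub = keypair(987654321)

# Each party derives the same pairwise key from the other's public value,
# with no interaction beyond publishing the public keys once.
assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)
```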
Partially Encrypted Machine Learning using Functional Encryption
TLDR
A practical framework for partially encrypted and privacy-preserving predictions, combining adversarial training and functional encryption, is proposed, along with a training method that prevents selected sensitive features from leaking by adversarially optimizing the network against an adversary trying to identify these features.