ScionFL: Secure Quantized Aggregation for Federated Learning

Yaniv Ben-Itzhak, Helen Möllering, Benny Pinkas, T. Schneider, Ajith Suresh, Oleksandr Tkachenko, Shay Vargaftik, Christian Weinert, Hossein Yalame, Avishay Yanai
Privacy concerns in federated learning (FL) are commonly addressed with secure aggregation schemes that prevent a central party from observing plaintext client updates. However, most such schemes neglect orthogonal FL research that aims at reducing communication between clients and the aggregator and that is instrumental in facilitating cross-device FL with thousands and even millions of (mobile) participants. In particular, quantization techniques can typically reduce client-server communication…
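As a rough illustration of the kind of quantization the abstract alludes to (this is a generic unbiased 1-bit stochastic quantizer, not ScionFL's actual scheme), each client can compress a float vector to one bit per coordinate while staying unbiased in expectation:

```python
import numpy as np

def stochastic_binary_quantize(v, rng):
    """Quantize a float vector to 1 bit per coordinate.

    Each coordinate rounds to vmin or vmax with a probability chosen
    so that the quantized value equals the original in expectation.
    """
    vmin, vmax = float(v.min()), float(v.max())
    if vmax == vmin:
        return np.zeros(v.shape, dtype=np.uint8), vmin, vmax
    p = (v - vmin) / (vmax - vmin)  # probability of rounding up to vmax
    bits = (rng.random(v.shape) < p).astype(np.uint8)
    return bits, vmin, vmax

def dequantize(bits, vmin, vmax):
    return np.where(bits == 1, vmax, vmin)

rng = np.random.default_rng(0)
v = rng.normal(size=10_000).astype(np.float32)
bits, vmin, vmax = stochastic_binary_quantize(v, rng)
est = dequantize(bits, vmin, vmax)
```

Sending `bits` plus the two scalars `vmin`/`vmax` costs roughly 1 bit per parameter instead of 32, which is the communication saving that secure aggregation schemes typically fail to exploit.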



The Fundamental Price of Secure Aggregation in Differentially Private Federated Learning

This work theoretically and empirically specifies the fundamental price of using SecAgg and shows how its communication cost can be substantially reduced.

SAFELearn: Secure Aggregation for private FEderated Learning

This work presents SAFELearn, a generic design for efficient private FL systems that uses secure aggregation to protect against inference attacks requiring analysis of individual clients' model updates, and implements and benchmarks an instantiation of the generic design with secure two-party computation.

Efficient Sparse Secure Aggregation for Federated Learning

This article adapts compression-based federated techniques to additive secret sharing, leading to an efficient secure aggregation protocol with an adaptable security level, proving its privacy against malicious adversaries and its correctness in the semi-honest setting.
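The additive secret sharing underlying this line of work can be sketched in a few lines (a minimal toy, assuming a public modulus and two non-colluding servers; the actual protocols add compression, dropout handling, and malicious-security checks):

```python
import numpy as np

M = 2**31  # public modulus; all arithmetic is in Z_M

def share(x, n_shares, rng):
    """Split an integer vector x into n additive shares that sum to x mod M."""
    shares = [rng.integers(0, M, size=x.shape, dtype=np.int64)
              for _ in range(n_shares - 1)]
    shares.append((x - sum(shares)) % M)  # last share makes the sum work out
    return shares

rng = np.random.default_rng(42)
# Three clients, each holding a (quantized, integer-encoded) model update.
updates = [rng.integers(0, 100, size=5, dtype=np.int64) for _ in range(3)]

# Each client splits its update across the servers; each server sums only
# the shares it receives, so no single server sees any individual update.
n_servers = 2
per_server = [[] for _ in range(n_servers)]
for u in updates:
    for bucket, sh in zip(per_server, share(u, n_servers, rng)):
        bucket.append(sh)
server_sums = [sum(bucket) % M for bucket in per_server]

# Recombining the per-server sums yields exactly the aggregate update.
aggregate = sum(server_sums) % M
```

Each share in isolation is uniformly random, so privacy holds as long as the servers do not collude; correctness follows because addition commutes with the mod-M share reconstruction.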

Scotch: An Efficient Secure Computation Framework for Secure Aggregation

SCOTCH is a decentralized m-party secure-computation framework for federated aggregation that deploys MPC primitives such as secret sharing, providing strict privacy guarantees against curious aggregators or colluding data owners with minimal communication overhead compared to other existing state-of-the-art privacy-preserving federated learning frameworks.

FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning

This paper proposes a secure aggregation protocol, FastSecAgg, that is efficient in terms of computation and communication, and robust to client dropouts, and guarantees security against adaptive adversaries, which can perform client corruptions dynamically during the execution of the protocol.

LightSecAgg: Rethinking Secure Aggregation in Federated Learning

It is shown that LightSecAgg achieves the same privacy and dropout-resiliency guarantees as the state-of-the-art protocols while significantly reducing the overhead for resiliency against dropped users and can be applied to secure aggregation in the asynchronous FL setting.

Secure Single-Server Aggregation with (Poly)Logarithmic Overhead

The first constructions for secure aggregation that achieve polylogarithmic communication and computation per client are presented and an application of secure aggregation to the task of secure shuffling is shown which enables the first cryptographically secure instantiation of the shuffle model of differential privacy.

Secure Aggregation for Federated Learning in Flower

Salvia is an implementation of secure aggregation for Python users in the Flower FL framework, based on the SecAgg(+) protocols for a semi-honest threat model; it is robust against client dropouts and exposes a flexible, easy-to-use API that is compatible with various machine learning frameworks.

Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning

This article proposes the first secure aggregation framework, named Turbo-Aggregate, which employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy.

POSEIDON: Privacy-Preserving Federated Neural Network Learning

A novel system, POSEIDON, is proposed, the first of its kind in the regime of privacy-preserving neural network training, employing multiparty lattice-based cryptography and preserving the confidentiality of the training data, the model, and the evaluation data, under a passive-adversary model and collusions between up to N-1 parties.