Differential Privacy Meets Federated Learning under Communication Constraints

@article{Mohammadi2021DifferentialPM,
  title={Differential Privacy Meets Federated Learning under Communication Constraints},
  author={Nima Mohammadi and Jianan Bai and Qiang Fan and Yifei Song and Yang Cindy Yi and Lingjia Liu},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.12240}
}
The performance of federated learning systems is bottlenecked by communication costs and training variance. The communication overhead problem is usually addressed by three communication-reduction techniques, namely, model compression, partial device participation, and periodic aggregation, at the cost of increased training variance. Different from traditional distributed learning systems, federated learning suffers from data heterogeneity (since the devices sample their data from possibly… 
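As a rough, hedged illustration of how these three communication-reduction techniques compose in a single training round (the function names, parameters, and NumPy-only setup below are illustrative assumptions, not the paper's implementation):

```python
# Minimal sketch of one federated round combining partial device participation,
# periodic aggregation (several local steps per communication), and model
# compression via unbiased stochastic quantization of the uploaded update.
# All names and parameters here are illustrative, not taken from the paper.
import numpy as np

def stochastic_quantize(v, levels=16):
    """Unbiased stochastic quantization of a vector onto `levels` discrete levels."""
    vmin, vmax = v.min(), v.max()
    if vmax == vmin:
        return v.copy()
    scaled = (v - vmin) / (vmax - vmin) * (levels - 1)
    low = np.floor(scaled)
    q = low + (np.random.rand(*v.shape) < (scaled - low))  # round up with prob. = fractional part
    return vmin + q / (levels - 1) * (vmax - vmin)

def fl_round(global_w, client_grad_fns, num_sampled=10, local_steps=5, lr=0.1):
    """One round: sample clients, run several local SGD steps, quantize updates, average."""
    num_sampled = min(num_sampled, len(client_grad_fns))
    sampled = np.random.choice(len(client_grad_fns), num_sampled, replace=False)  # partial participation
    updates = []
    for k in sampled:
        w = global_w.copy()
        for _ in range(local_steps):                  # periodic aggregation: communicate every `local_steps`
            w -= lr * client_grad_fns[k](w)           # client_grad_fns[k] returns a stochastic local gradient
        updates.append(stochastic_quantize(w - global_w))  # compress the update before upload
    return global_w + np.mean(updates, axis=0)
```

Each of these knobs (quantization levels, number of sampled devices, local steps) trades communication for extra variance in the aggregated update, which is the tension the abstract refers to.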

Citations

Federated Dynamic Spectrum Access

TLDR
This article introduces a Federated Learning (FL) based framework for the task of dynamic spectrum access (DSA), where FL is a distributed machine learning framework that can preserve the privacy of network terminals under heterogeneous data distributions.

PASS: Parameters Audit-based Secure and Fair Federated Learning Scheme against Free Rider

TLDR
PASS has the following key features: it works well even when adversaries account for more than 50% of the clients, is effective in countering anonymous FR attacks and SFR attacks, and prevents privacy leakage without accuracy loss.

Policy-based Fully Spiking Reservoir Computing for Multi-Agent Distributed Dynamic Spectrum Access

TLDR
A homeostatic learning rule is employed to adaptively tune small-world reservoir connections in spiking neural networks, maintaining near-chaotic behavior during operation, within a reinforcement learning setup designed for the dynamic spectrum sharing scenario.

References

Showing 1-10 of 33 references

D2P-Fed: Differentially Private Federated Learning With Efficient Communication

TLDR
The results show that D2P-Fed outperforms the state of the art by 4.7% to 13.0% in terms of model accuracy while saving one third of the communication cost.

LDP-FL: Practical Private Aggregation in Federated Learning with Local Differential Privacy

TLDR
A novel design of a local differential privacy mechanism for federated learning that makes the local weight updates differentially private by adapting to the varying value ranges at different layers of a deep neural network, which introduces a smaller variance in the estimated model weights, especially for deeper models.
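A minimal sketch of the range-adaptive idea described above, assuming per-layer clipping plus Laplace noise scaled to each layer's own value range; this illustrates the general principle, not the paper's exact randomization mechanism:

```python
# Illustrative only: perturb each layer's update with noise calibrated to that
# layer's own range, so layers with small weight ranges receive proportionally
# smaller noise. The Laplace mechanism here stands in for the paper's scheme.
import numpy as np

def perturb_update_per_layer(layer_updates, epsilon):
    """Clip each layer to its own range and add Laplace noise calibrated to that range."""
    noisy = []
    for u in layer_updates:                       # layer_updates: list of np.ndarray, one per layer
        r = np.max(np.abs(u)) + 1e-12             # per-layer range, used as the sensitivity proxy
        clipped = np.clip(u, -r, r)
        noise = np.random.laplace(scale=2.0 * r / epsilon, size=u.shape)
        noisy.append(clipped + noise)
    return noisy
```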

FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization

TLDR
FedPAQ is presented, a communication-efficient Federated Learning method with Periodic Averaging and Quantization that achieves near-optimal theoretical guarantees for strongly convex and non-convex loss functions and empirically demonstrates the communication-computation tradeoff provided by the method.

cpSGD: Communication-efficient and differentially-private distributed SGD

TLDR
This work extends and improves the previous analysis of the Binomial mechanism, showing that it achieves nearly the same utility as the Gaussian mechanism while requiring fewer representation bits, which can be of independent interest.
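A hedged sketch of the Binomial-mechanism idea: after quantizing a gradient to integers, each client adds zero-mean Binomial noise, which approximates Gaussian noise while keeping the message in a small discrete alphabet; the parameters below are illustrative, not the calibrated values from the paper:

```python
# Illustrative Binomial noise addition in the integer (quantized) domain.
import numpy as np

def binomial_mechanism(int_grad, m=64, p=0.5):
    """Add (shifted) Binomial(m, p) noise to an integer-quantized gradient."""
    noise = np.random.binomial(m, p, size=int_grad.shape) - int(m * p)   # zero-mean integer noise
    return int_grad + noise                                              # stays integer-valued, cheap to encode
```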

Federated Optimization in Heterogeneous Networks

TLDR
This work introduces a framework, FedProx, to tackle heterogeneity in federated networks, providing convergence guarantees when learning over data from non-identical distributions (statistical heterogeneity) while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.
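For concreteness, the FedProx local subproblem adds a proximal term to each device's loss, penalizing deviation from the current global model (standard formulation; symbols as commonly used):

```latex
% Local subproblem solved (inexactly) by device k at round t; \mu >= 0 controls the proximal strength.
\min_{w}\; h_k(w;\, w^{t}) \;=\; F_k(w) \;+\; \frac{\mu}{2}\,\bigl\lVert w - w^{t} \bigr\rVert^{2}
```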

SCAFFOLD: Stochastic Controlled Averaging for On-Device Federated Learning

TLDR
A new Stochastic Controlled Averaging algorithm (SCAFFOLD) uses control variates to reduce the drift between different clients; the algorithm is proved to require significantly fewer rounds of communication and to enjoy favorable convergence guarantees.
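A hedged sketch of the client-side correction in SCAFFOLD: each local step subtracts the client's control variate and adds the server's, which counteracts client drift (variable names below are mine; the control-variate update shown is one common variant):

```python
# Illustrative drift-corrected local update in the spirit of SCAFFOLD.
import numpy as np

def scaffold_local_steps(x_global, grad_fn, c_server, c_client, lr=0.1, num_steps=5):
    """Run corrected local SGD; return the new local model and an updated client control variate."""
    y = x_global.copy()
    for _ in range(num_steps):
        y -= lr * (grad_fn(y) - c_client + c_server)      # correction term fights client drift
    # Update the client control variate from the realized local progress (one common option).
    c_client_new = c_client - c_server + (x_global - y) / (num_steps * lr)
    return y, c_client_new
```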

Federated Learning: Strategies for Improving Communication Efficiency

TLDR
Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
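A minimal sketch of a "sketched update" in the sense described above, assuming random subsampling followed by stochastic quantization (a random-rotation step, e.g. a randomized Hadamard transform, could be applied before quantizing; it is omitted here for brevity):

```python
# Illustrative compression of a full model update: keep a random fraction of
# coordinates (rescaled for unbiasedness), then stochastically quantize the rest.
import numpy as np

def sketch_update(update, keep_fraction=0.1, levels=4):
    """Randomly subsample coordinates, then stochastically quantize the kept values."""
    mask = np.random.rand(*update.shape) < keep_fraction
    kept = np.where(mask, update, 0.0) / keep_fraction          # unbiased sparsification
    vmax = np.max(np.abs(kept)) + 1e-12
    scaled = (kept / vmax + 1.0) / 2.0 * (levels - 1)            # map to [0, levels-1]
    low = np.floor(scaled)
    q = low + (np.random.rand(*kept.shape) < (scaled - low))     # stochastic rounding
    return (q / (levels - 1) * 2.0 - 1.0) * vmax                 # sparse, low-precision update
```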

On the Convergence of Federated Optimization in Heterogeneous Networks

TLDR
This work proposes FedProx, which is similar in spirit to FedAvg but more amenable to theoretical analysis, and characterizes the convergence of FedProx under a novel device similarity assumption.

How To Backdoor Federated Learning

TLDR
This work designs and evaluates a new model-poisoning methodology based on model replacement and demonstrates that any participant in federated learning can introduce hidden backdoor functionality into the joint global model, e.g., to ensure that an image classifier assigns an attacker-chosen label to images with certain features.
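The core of the model-replacement argument can be stated as a single scaling equation (notation below is illustrative): if the server averages n participants' models with global learning rate eta, an attacker holding a backdoored model X can scale its submission so that the aggregate lands near X.

```latex
% Sketch of the model-replacement scaling: G^t is the current global model,
% X the attacker's backdoored model, n the number of averaged participants,
% and \eta the server learning rate.
\tilde{L}^{t+1}_{m} \;=\; \frac{n}{\eta}\,\bigl(X - G^{t}\bigr) \;+\; G^{t}
```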

Secure and Utility-Aware Data Collection with Condensed Local Differential Privacy

TLDR
This paper addresses the small user population problem by introducing the concept of Condensed Local Differential Privacy (CLDP) as a specialization of LDP, and develops a suite of CLDP protocols that offer desirable statistical utility while preserving privacy.