Private and Communication-Efficient Edge Learning: A Sparse Differential Gaussian-Masking Distributed SGD Approach

@article{Zhang2020PrivateAC,
  title={Private and communication-efficient edge learning: a sparse differential Gaussian-masking distributed SGD approach},
  author={Xin Zhang and Minghong Fang and Jia Liu and Zhengyuan Zhu},
  journal={Proceedings of the Twenty-First International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing},
  year={2020}
}
  • Xin Zhang, Minghong Fang, Jia Liu, Zhengyuan Zhu
  • Published 12 January 2020
  • Computer Science
  • Proceedings of the Twenty-First International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing
With the rise of machine learning (ML) and the proliferation of smart mobile devices, recent years have witnessed a surge of interest in performing ML in wireless edge networks. In this paper, we consider the problem of jointly improving data privacy and communication efficiency of distributed edge learning, both of which are critical performance metrics in wireless edge network computing. Toward this end, we propose a new distributed stochastic gradient method with sparse differential Gaussian… 
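
This page does not reproduce the algorithm, but the title and abstract point to two ingredients: each worker sparsifies its local stochastic gradient (or the difference from a previously sent value) and masks the surviving coordinates with Gaussian noise before transmitting. The snippet below is only a minimal single-worker sketch of that sparsify-then-mask structure; the top-k sparsifier, the noise scale sigma, and all function names are illustrative assumptions, not the paper's exact method or its privacy calibration.

```python
# Hypothetical sketch of one worker's step in a sparse, Gaussian-masked
# distributed SGD scheme (top-k sparsifier and noise scale are assumptions,
# not the exact algorithm of Zhang et al.).
import numpy as np

def top_k_sparsify(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    if k <= 0:
        return out
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def masked_sparse_gradient(grad, k, sigma, rng):
    """Sparsify a local stochastic gradient and add Gaussian masking noise
    to the retained coordinates before sending it to neighbors/the server."""
    sparse = top_k_sparsify(grad, k)
    noise = rng.normal(0.0, sigma, size=grad.shape)
    mask = (sparse != 0).astype(grad.dtype)   # noise only on transmitted coordinates
    return sparse + mask * noise

# Toy usage: one masked gradient step on a quadratic loss.
rng = np.random.default_rng(0)
w = np.zeros(10)
grad = 2.0 * (w - np.arange(10.0))            # gradient of ||w - target||^2
msg = masked_sparse_gradient(grad, k=3, sigma=0.1, rng=rng)
w -= 0.05 * msg                               # receiver applies the sparse, noisy update
```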

Citations of this paper

Differentially Private and Communication Efficient Collaborative Learning

This paper proposes two differentially private (DP) and communication-efficient algorithms, called Q-DPSGD-1 and Q-DPSGD-2, tracks the privacy loss of both approaches under Rényi DP, and provides a convergence analysis for both convex and non-convex loss functions.

SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression

A framework called SoteriaFL is proposed, which accommodates a general family of local gradient estimators, including popular stochastic variance-reduced gradient methods and the state-of-the-art shifted compression scheme, and is shown to achieve better communication complexity than private federated learning algorithms without communication compression, while sacrificing neither privacy nor utility.

CFedAvg: Achieving Efficient Communication and Fast Convergence in Non-IID Federated Learning

  • Haibo Yang, Jia Liu, E. Bentley
  • Computer Science
    2021 19th International Symposium on Modeling and Optimization in Mobile, Ad hoc, and Wireless Networks (WiOpt)
  • 2021
A communication-efficient algorithmic framework called CFedAvg is proposed for FL with non-i.i.d. datasets; it works with general (biased or unbiased) SNR-constrained compressors and extends to heterogeneous local steps, allowing different workers to perform different numbers of local steps to better adapt to their own circumstances.

Sparsified Privacy-Masking for Communication-Efficient and Privacy-Preserving Federated Learning

An explicit end-to-end privacy guarantee of CPFed is provided using zero-concentrated differential privacy, and its theoretical convergence rates for both convex and non-convex models are given.

Federated Learning with Sparsified Model Perturbation: Improving Accuracy under Client-Level Differential Privacy

A novel differentially private FL scheme named Fed-SMP is developed that provides a client-level DP guarantee while maintaining high model accuracy, and is demonstrated to improve model accuracy under the same DP guarantee while simultaneously saving communication cost.

Influence Function based Data Poisoning Attacks to Top-N Recommender Systems

This work shows that an attacker can launch a data poisoning attack against a recommender system to make it produce recommendations the attacker desires by injecting fake users with carefully crafted user-item interaction data, and develops several techniques to approximately solve the resulting optimization problem.

On the Interaction Between Differential Privacy and Gradient Compression in Deep Learning

A detailed empirical study of how the Gaussian mechanism for differential privacy and gradient compression jointly impact test accuracy in deep learning finds that when aggressive sparsification or rank reduction is applied to the gradients, test accuracy is less affected by the Gaussian noise added for differential privacy.

References

SHOWING 1-10 OF 31 REFERENCES

cpSGD: Communication-efficient and differentially-private distributed SGD

This work extends and improves previous analyses of the Binomial mechanism, showing that it achieves nearly the same utility as the Gaussian mechanism while requiring fewer representation bits, which can be of independent interest.
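
As a rough illustration of the general idea (not cpSGD's exact construction or noise calibration), one can quantize each gradient coordinate to an integer grid and add centered Binomial noise, so the perturbed values remain integers that can be encoded with few bits. The quantizer and the parameters m and p below are assumptions made only for the sketch.

```python
# Rough sketch of adding centered Binomial noise to quantized gradient
# coordinates; the quantizer, m, and p are illustrative assumptions, not the
# calibrated parameters from the cpSGD paper.
import numpy as np

def quantize(v, levels, v_max):
    """Uniformly quantize v in [-v_max, v_max] to integer levels 0..levels-1."""
    clipped = np.clip(v, -v_max, v_max)
    step = 2 * v_max / (levels - 1)
    return np.round((clipped + v_max) / step).astype(np.int64), step, -v_max

def binomial_mechanism(int_vals, m=64, p=0.5, rng=None):
    """Add centered Binomial(m, p) noise so the output stays integer-valued."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.binomial(m, p, size=int_vals.shape) - int(m * p)
    return int_vals + noise

# Toy usage: quantize, perturb, and reconstruct a gradient vector.
rng = np.random.default_rng(1)
grad = rng.normal(size=8)
q, step, offset = quantize(grad, levels=16, v_max=1.0)
noisy_q = binomial_mechanism(q, m=64, p=0.5, rng=rng)
dequantized = noisy_q * step + offset   # server-side reconstruction
```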

LEASGD: an Efficient and Privacy-Preserving Decentralized Algorithm for Distributed Learning

A new learning algorithm, LEASGD (Leader-Follower Elastic Averaging Stochastic Gradient Descent), is proposed, driven by a novel Leader-Follower topology and a differential privacy model to achieve differential privacy with a good convergence rate and low communication cost.

Communication-Efficient Network-Distributed Optimization with Differential-Coded Compressors

A new differential-coded compressed DGD (DC-DGD) algorithm is proposed that retains the same low-complexity structure as the original DGD thanks to a self-noise-reduction effect, together with a hybrid compression scheme that offers a systematic mechanism to minimize the communication cost.
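
For intuition, the core of differential coding in this family of methods is to compress and transmit only the change in a node's variable since the last transmission, while sender and receiver keep synchronized reference copies. The scaled-sign compressor and the update rule below are placeholder assumptions, not the exact DC-DGD scheme or its hybrid compression.

```python
# Generic differential-coding sketch: compress only the change since the last
# transmission and keep sender/receiver reference copies in sync. The
# scaled-sign compressor is a placeholder, not the compressor used in DC-DGD.
import numpy as np

def scaled_sign(v):
    """Crude 1-bit-per-coordinate compressor: sign of v scaled by its mean magnitude."""
    return np.abs(v).mean() * np.sign(v)

dim = 5
ref_sender = np.zeros(dim)      # sender's copy of what the receiver holds
ref_receiver = np.zeros(dim)    # receiver's running estimate of the sender's variable

x = np.array([0.9, -0.2, 0.4, 0.0, -1.1])    # sender's current local iterate

delta = scaled_sign(x - ref_sender)          # compress the difference, not x itself
ref_sender += delta                          # sender mirrors the receiver's state
ref_receiver += delta                        # receiver applies the compressed update
```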

Robust and Communication-Efficient Collaborative Learning

The key technical contribution of this work is to prove that, with non-vanishing noise from quantization and stochastic gradients, the proposed method converges exactly to the global optimum for convex loss functions and finds a first-order stationary point in non-convex scenarios.

Compressed Distributed Gradient Descent: Communication-Efficient Consensus over Networks

A communication-efficient DGD-type algorithm called amplified-differential compression DGD (ADC-DGD) is developed and rigorously proved to converge under any unbiased compression operator, advancing the state of the art in network consensus optimization theory.

Communication Compression for Decentralized Training

This paper develops a framework for quantized, decentralized training and proposes two strategies, extrapolation compression and difference compression, which significantly outperform the better of the merely decentralized and merely quantized algorithms for networks with high latency and low bandwidth.

Distributed Learning without Distress: Privacy-Preserving Empirical Risk Minimization

This work presents a distributed learning approach that combines differential privacy with secure multi-party computation, explores two popular methods of differential privacy, output perturbation and gradient perturbation, and advances the state of the art for both methods in the distributed learning setting.

Collaborative Deep Learning in Fixed Topology Networks

This paper presents a new consensus-based distributed SGD (CDSGD) (and its momentum variant, CDMSGD) algorithm for collaborative deep learning over fixed topology networks that enables data parallelization as well as decentralized computation.

DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM

A DP Laplacian smoothing SGD (DP-LSSGD) is proposed to train ML models with differential privacy (DP) guarantees; it makes training of both convex and nonconvex ML models more stable and enables the trained models to generalize better.
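
For intuition only: Laplacian smoothing replaces a DP-noised gradient g by A_sigma^{-1} g with A_sigma = I + sigma * L, where L is a discrete one-dimensional Laplacian, which damps the high-frequency part of the injected noise. The periodic Laplacian construction and the sigma value below are assumptions made for the sketch rather than the exact operator and calibration of DP-LSSGD.

```python
# Sketch of Laplacian smoothing applied to a DP-noised gradient. The periodic
# 1-D Laplacian and the sigma value are assumptions for illustration; see the
# DP-LSSGD paper for the exact operator and privacy calibration.
import numpy as np

def laplacian_smooth(g, sigma=1.0):
    """Return A_sigma^{-1} g with A_sigma = I + sigma * L, where L is a
    periodic 1-D discrete Laplacian. Damps high-frequency noise in g."""
    d = g.size
    L = (2 * np.eye(d)
         - np.roll(np.eye(d), 1, axis=1)
         - np.roll(np.eye(d), -1, axis=1))
    A = np.eye(d) + sigma * L
    return np.linalg.solve(A, g)

# Toy usage: smooth a gradient after Gaussian DP noise has been added.
rng = np.random.default_rng(2)
grad = np.ones(16)
noisy_grad = grad + rng.normal(0.0, 0.5, size=16)
smoothed = laplacian_smooth(noisy_grad, sigma=1.0)   # used in place of noisy_grad
```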

Differentially Private Gossip Gradient Descent

A differentially private distributed algorithm, called private gossip gradient descent, is proposed, which enables all N agents to converge to the true model, with a performance comparable to that of conventional centralized algorithms.