Corpus ID: 244729252

Communication-Efficient Federated Learning via Quantized Compressed Sensing

@inproceedings{Oh2021CommunicationEfficientFL,
  title={Communication-Efficient Federated Learning via Quantized Compressed Sensing},
  author={Yong-Nam Oh and Namyoon Lee and Yo-Seb Jeon and H. Vincent Poor},
  year={2021}
}
In this paper, we present a communication-efficient federated learning framework inspired by quantized compressed sensing. The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server (PS). Our strategy for gradient compression is to sequentially perform block sparsification, dimensionality reduction, and quantization. Thanks to gradient sparsification and quantization, our strategy can achieve a higher compression ratio than one…
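To make the compression pipeline described above more concrete, here is a minimal NumPy sketch of the three steps (block sparsification, dimensionality reduction, quantization). The block size, sparsity level, Gaussian sensing matrix, and uniform scalar quantizer used here are illustrative assumptions rather than the paper's exact design; in the paper, the PS reconstructs the sparse gradient from the quantized low-dimensional measurements with a compressed-sensing recovery algorithm.

```python
import numpy as np

def compress_gradient(grad, block_size=1024, sparsity=0.1,
                      compressed_dim=256, num_bits=2, seed=0):
    """Toy QCS-style gradient compression: block sparsification,
    random-projection dimensionality reduction, scalar quantization.
    All parameter values are illustrative, not the paper's design."""
    rng = np.random.default_rng(seed)
    compressed_blocks = []
    for start in range(0, grad.size, block_size):
        block = grad[start:start + block_size].copy()

        # 1) Block sparsification: keep only the largest-magnitude entries.
        k = max(1, int(sparsity * block.size))
        thresh = np.sort(np.abs(block))[-k]
        block[np.abs(block) < thresh] = 0.0

        # 2) Dimensionality reduction via a random Gaussian sensing matrix.
        A = rng.standard_normal((compressed_dim, block.size)) / np.sqrt(compressed_dim)
        measurement = A @ block

        # 3) Uniform scalar quantization of the low-dimensional measurement.
        scale = np.max(np.abs(measurement)) + 1e-12
        levels = 2 ** num_bits - 1
        q = np.round((measurement / scale + 1) / 2 * levels).astype(np.uint8)

        # The PS would reconstruct the block from (q, scale) and the seed of A.
        compressed_blocks.append((q, scale))
    return compressed_blocks
```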

Citations

FedVQCS: Federated Learning via Vector Quantized Compressed Sensing
TLDR
Simulation results on the MNIST and CIFAR-10 datasets demonstrate that the proposed framework provides more than a 2.5% increase in classification accuracy compared to state-of-the-art FL frameworks when the communication overhead of the local model update transmission is less than 0.1 bit per local model entry.

References

Showing 1–10 of 40 references
Quantized Compressed Sensing for Communication-Efficient Federated Learning
TLDR
A communication-efficient FL framework is presented which consists of gradient compression and reconstruction strategies based on quantized compressed sensing (QCS) and an expectation-maximization generalized-approximate-message-passing algorithm.
Communication-efficient Federated Learning Through 1-Bit Compressive Sensing and Analog Aggregation
TLDR
Simulation results show that the proposed 1-bit CS-based FL over the air achieves performance comparable to the ideal case, in which conventional FL without compression or quantization is applied over error-free aggregation, at much reduced communication overhead and transmission latency.
Bayesian Federated Learning over Wireless Networks
TLDR
This paper proposes a Bayesian federated learning algorithm to optimally aggregate heterogeneous quantized gradient information in the sense of minimizing the mean-squared error (MSE), and provides a convergence analysis of the proposed SBFL algorithm for a class of non-convex loss functions.
Communication-Efficient Federated Learning Based on Compressed Sensing
TLDR
Two new FL algorithms based on compressed sensing, referred to as the CS-FL algorithm and the 1-bit CS-FL algorithm, are proposed; both compress the upstream and downstream data exchanged between the clients and the central server.
Federated Learning Over Wireless Fading Channels
TLDR
Results show clear advantages for the proposed analog over-the-air DSGD scheme, which suggests that learning and communication algorithms should be designed jointly to achieve the best end-to-end performance in machine learning applications at the wireless edge.
Message-Passing De-Quantization With Applications to Compressed Sensing
TLDR
This paper develops message-passing de-quantization algorithms for minimum mean-squared error estimation of a random vector from quantized linear measurements, notably allowing the linear expansion to be overcomplete or undercomplete and the scalar quantization to be regular or non-regular.
High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning
TLDR
This work proposes a novel gradient compression scheme and shows that similar learning performance can be achieved with substantially lower communication overhead compared to the one-bit scalar quantization used in the state-of-the-art design, namely signed SGD.
Federated Learning: Strategies for Improving Communication Efficiency
TLDR
Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables (e.g., a low-rank matrix or a random mask); and sketched updates, where the user learns a full model update and then compresses it using a combination of quantization, random rotations, and subsampling.
Quantized Compressive Sampling of Stochastic Gradients for Efficient Communication in Distributed Deep Learning
TLDR
Quantized Compressive Sampling (QCS) of stochastic gradients (SG) is proposed, which addresses the above two issues while achieving an arbitrarily large compression gain; the work also develops and analyzes a method to both control the overall variance of the compressed SG and prevent staleness of the updates.
QSGD: Randomized Quantization for Communication-Optimal Stochastic Gradient Descent
TLDR
Quantized SGD (QSGD) is proposed, a family of compression schemes that allows the compression of gradient updates at each node while guaranteeing convergence under standard assumptions, and lets the user trade off compression and convergence time.
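As a rough illustration of the last reference above, the stochastic quantizer at the heart of QSGD can be sketched as follows. The function and parameter names are illustrative, and the lossless coding that QSGD applies after quantization is omitted; the key property shown is that the quantized gradient is an unbiased estimate of the original.

```python
import numpy as np

def qsgd_quantize(v, num_levels=4, rng=None):
    """Toy QSGD-style stochastic quantizer: each coordinate is randomly
    rounded to one of `num_levels` levels of |v_i| / ||v||_2, so the
    result equals v in expectation. Coding of the output is omitted."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros_like(v)
    scaled = np.abs(v) / norm * num_levels            # in [0, num_levels]
    lower = np.floor(scaled)
    prob_up = scaled - lower                          # round up with this prob.
    levels = lower + (rng.random(v.shape) < prob_up)  # stochastic rounding
    return np.sign(v) * norm * levels / num_levels    # unbiased estimate of v

# Example: the quantized gradient matches the original in expectation.
g = np.random.default_rng(0).standard_normal(8)
print(g)
print(qsgd_quantize(g, num_levels=4))
```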