• Corpus ID: 211572640

# On Biased Compression for Distributed Learning

@article{Beznosikov2020OnBC,
  title={On Biased Compression for Distributed Learning},
  author={Aleksandr Beznosikov and Samuel Horvath and Peter Richt{\'a}rik and M. H. Safaryan},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.12410}
}
• Published 27 February 2020
• Computer Science
• ArXiv
In the last few years, various communication compression techniques have emerged as an indispensable tool helping to alleviate the communication bottleneck in distributed learning. However, despite the fact that *biased* compressors often show superior performance in practice when compared to the much more studied and understood *unbiased* compressors, very little is known about them. In this work we study three classes of biased compression operators, two of which are new, and their…
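The distinction drawn in the abstract can be made concrete with a small sketch: top-k is a typical *biased* (contractive) compressor, while random-k with rescaling is a typical *unbiased* one. The helper names below are illustrative, not from the paper.

```python
import numpy as np

def top_k(x, k):
    """Biased (contractive) compressor: keep the k largest-magnitude entries."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]  # indices of the k largest |x_i|
    out[idx] = x[idx]
    return out

def rand_k(x, k, rng):
    """Unbiased compressor: keep k random entries, rescaled by d/k so E[C(x)] = x."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
k = 10
c = top_k(x, k)
# Contractive property with delta = k/d: ||C(x) - x||^2 <= (1 - k/d) ||x||^2,
# since the dropped d-k entries are the smallest-energy ones.
assert np.sum((c - x) ** 2) <= (1 - k / x.size) * np.sum(x ** 2)
```

Note that top-k keeps the transmitted entries exact but systematically favors large coordinates (hence the bias), while random-k is exact only in expectation.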
## 74 Citations

On Communication Compression for Distributed Optimization on Heterogeneous Data
The results indicate that D-EF-SGD is much less affected than D-QSGD by non-iid data, but both methods can suffer a slowdown if data-skewness is high.
A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning
• Computer Science
ICLR
• 2021
This paper proposes a construction which can transform any contractive compressor into an induced unbiased compressor, and shows that this approach leads to vast improvements over EF, including reduced memory requirements, better communication complexity guarantees and fewer assumptions.
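The transformation this summary describes can be sketched with the standard construction in which the residual of a contractive compressor is compressed once more with an unbiased one, making the combination unbiased overall. Top-k and random-k below are illustrative choices, not necessarily the paper's exact operators.

```python
import numpy as np

def top_k(x, k):
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def rand_k(x, k, rng):
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

def induced(x, k, rng):
    # Induced compressor: compress the residual x - C(x) with an unbiased
    # compressor U, so that E[C(x) + U(x - C(x))] = C(x) + (x - C(x)) = x.
    c = top_k(x, k)
    return c + rand_k(x - c, k, rng)

rng = np.random.default_rng(1)
x = rng.standard_normal(20)
# Empirical check of unbiasedness: average over many independent draws.
est = np.mean([induced(x, 5, rng) for _ in range(20000)], axis=0)
assert np.allclose(est, x, atol=0.1)
```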
Shifted Compression Framework: Generalizations and Improvements
• Computer Science
ArXiv
• 2022
This work develops a unified framework for studying lossy compression methods, which incorporates methods compressing both gradients and models, using unbiased and biased compressors, and sheds light on the construction of the auxiliary vectors.
Distributed Methods with Absolute Compression and Error Compensation
• Computer Science
• 2022
The analysis of EC-SGD with absolute compression is generalized to the arbitrary sampling strategy, with rates that improve upon the previously known ones in this setting, and an analysis of EC-LSVRG with absolute compression for (strongly) convex problems is proposed.
Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression
• Computer Science
ArXiv
• 2022
A convergence lower bound is established for algorithms using either unbiased or contractive compressors, in both unidirectional and bidirectional compression settings, and an algorithm, NEOLITHIC, is proposed that nearly reaches the lower bound (up to logarithmic factors) under mild conditions.
Error Compensated Distributed SGD Can Be Accelerated
• Computer Science
• Annual Workshop on Optimization for Machine Learning
This work proposes and studies the error compensated loopless Katyusha method, establishes an accelerated linear convergence rate under standard assumptions, and shows for the first time that error compensated gradient compression methods can be accelerated.
• Computer Science
AISTATS
• 2022
It is proved that the proposed communication-efficient distributed adaptive gradient method converges to the first-order stationary point with the same iteration complexity as uncompressed vanilla AMSGrad in the stochastic nonconvex optimization setting.
Error Compensated Distributed SGD Can Be Accelerated
• Computer Science
NeurIPS
• 2021
This work proposes and studies the error compensated loopless Katyusha method, and establishes an accelerated linear convergence rate under standard assumptions, and shows for the first time that error compensated gradient compression methods can be accelerated.
Optimal Gradient Compression for Distributed and Federated Learning
• Computer Science
ArXiv
• 2020
This paper investigates the fundamental trade-off between the number of bits needed to encode compressed vectors and the compression error, and introduces an efficient compression operator, Sparse Dithering, which naturally achieves the lower bound.
Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation
• Computer Science
ArXiv
• 2022
This work proves that the recently developed class of three point compressors (3PC) of Richtárik et al. can be generalized to Hessian communication as well, and discovered several new 3PC mechanisms, such as adaptive thresholding and Bernoulli aggregation, which require reduced communication and occasional Hessian computations.

## References

SHOWING 1-10 OF 51 REFERENCES
Natural Compression for Distributed Deep Learning
• Computer Science
ArXiv
• 2019
This work introduces a new, simple, yet theoretically and practically effective compression technique: *natural compression* (NC). NC is applied individually to all entries of the to-be-compressed update vector and works by randomized rounding to the nearest (negative or positive) power of two, which can be computed in a "natural" way by ignoring the mantissa.
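A minimal sketch of the rounding rule described above, assuming the textbook unbiased scheme: each nonzero magnitude is rounded to the power of two just below or just above it, with probabilities chosen so the expectation is exact (function name and test values are illustrative).

```python
import numpy as np

def natural_compression(x, rng):
    """Randomized rounding of each entry to an adjacent power of two,
    with probabilities chosen so the operator is unbiased: E[C(x)] = x."""
    out = np.zeros_like(x)
    for i, v in enumerate(x):
        if v == 0.0:
            continue
        a = abs(v)
        lo = 2.0 ** np.floor(np.log2(a))  # nearest power of two at or below a
        hi = 2.0 * lo                     # nearest power of two above a
        p_up = (a - lo) / (hi - lo)       # unbiasedness: lo*(1-p) + hi*p = a
        out[i] = np.sign(v) * (hi if rng.random() < p_up else lo)
    return out

rng = np.random.default_rng(2)
x = np.array([0.3, -1.7, 5.0])
# Empirical check of unbiasedness: average over many independent draws.
est = np.mean([natural_compression(x, rng) for _ in range(40000)], axis=0)
assert np.allclose(est, x, atol=0.02)
```

Each output entry is a signed power of two, so only the sign and exponent need to be communicated, which is where the bandwidth saving comes from.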
Stochastic Distributed Learning with Gradient Quantization and Variance Reduction
• Computer Science
• 2019
These are the first methods that achieve linear convergence for arbitrary quantized updates in distributed optimization where the objective function is spread among different devices, each sending incremental model updates to a central server.
Global Momentum Compression for Sparse Communication in Distributed SGD
• Computer Science
ArXiv
• 2019
This is the first work that proves the convergence of distributed momentum SGD (DMSGD) with sparse communication and memory gradient, and it theoretically proves the convergence rate of GMC for both convex and non-convex problems.
A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning
• Computer Science
ICLR
• 2021
This paper proposes a construction which can transform any contractive compressor into an induced unbiased compressor, and shows that this approach leads to vast improvements over EF, including reduced memory requirements, better communication complexity guarantees and fewer assumptions.
Sparsified SGD with Memory
• Computer Science
NeurIPS
• 2018
This work analyzes Stochastic Gradient Descent with k-sparsification or compression (for instance top-k or random-k) and shows that this scheme converges at the same rate as vanilla SGD when equipped with error compensation.
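The memory mechanism summarized above can be sketched on a toy quadratic, assuming top-k sparsification (an illustrative setup, not the paper's experiments):

```python
import numpy as np

def top_k(g, k):
    """Biased compressor: keep the k largest-magnitude entries."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

# Error-compensated (memory) SGD on the quadratic f(x) = 0.5*||x - b||^2,
# whose gradient is x - b. The memory m stores whatever the compressor
# dropped and re-injects it at the next step, so no signal is ever lost.
d, k, lr = 50, 5, 0.1
rng = np.random.default_rng(0)
b = rng.standard_normal(d)
x = np.zeros(d)
m = np.zeros(d)              # compression-error memory
for _ in range(3000):
    g = x - b                # full local gradient
    p = m + lr * g           # add back the stored error
    delta = top_k(p, k)      # transmit only k coordinates per step
    m = p - delta            # remember what was dropped
    x = x - delta
assert np.linalg.norm(x - b) < 1e-2
```

Dropping the memory (setting `m = 0` every step) would stall small coordinates forever; the accumulator is what makes aggressive sparsification converge.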
3LC: Lightweight and Effective Traffic Compression for Distributed Machine Learning
• Computer Science
MLSys
• 2019
3LC is presented, a lossy compression scheme for state change traffic that strikes balance between multiple goals: traffic reduction, accuracy, computation overhead, and generality.
Stochastic Sign Descent Methods: New Algorithms and Better Theory
• Computer Science
ICML
• 2021
A new sign-based method is proposed, Stochastic Sign Descent with Momentum (SSDM), which converges under standard bounded variance assumption with the optimal asymptotic rate and is validated with numerical experiments.
Error Feedback Fixes SignSGD and other Gradient Compression Schemes
• Computer Science
ICML
• 2019
It is proved that the algorithm EF-SGD with an arbitrary compression operator achieves the same rate of convergence as SGD without any additional assumptions, and thus EF-SGD achieves gradient compression for free.
Sparse Gradient Compression for Distributed SGD
• Computer Science
DASFAA
• 2019
The experiments over sparse high-dimensional models and deep neural networks indicate that SGC can compress 99.99% of gradients in every iteration without performance degradation, saving up to 48× in communication cost.
Distributed Learning with Compressed Gradient Differences
• Computer Science
ArXiv
• 2019
This work proposes a new distributed learning method, DIANA, which resolves these issues via compression of gradient differences, and performs a theoretical analysis in the strongly convex and nonconvex settings, showing that its rates are superior to existing rates.
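The gradient-difference idea can be sketched on a single worker and a toy quadratic, assuming random-k as the unbiased compressor (parameter choices are illustrative, not from the paper):

```python
import numpy as np

def rand_k(v, k, rng):
    """Unbiased random sparsification: keep k of d entries, rescaled by d/k."""
    d = v.size
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)
    return out

# DIANA-style update on a single worker (sketch): instead of compressing the
# gradient g directly, compress the difference g - h against a slowly moving
# reference h. As the iterates converge, g - h -> 0, so the variance injected
# by compression vanishes and linear convergence becomes possible.
d, k, alpha, lr = 50, 25, 0.5, 0.1
rng = np.random.default_rng(0)
b = rng.standard_normal(d)
x, h = np.zeros(d), np.zeros(d)
for _ in range(2000):
    g = x - b                      # gradient of 0.5*||x - b||^2
    delta = rand_k(g - h, k, rng)  # the only quantity transmitted
    g_hat = h + delta              # server-side unbiased gradient estimate
    h = h + alpha * delta          # shift the reference toward g
    x = x - lr * g_hat
assert np.linalg.norm(x - b) < 1e-2
```

Compressing `g` directly would inject noise proportional to `||g||^2` at every step; compressing the shifted quantity `g - h` is what lets the method converge linearly despite lossy communication.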