Correlated quantization for distributed mean estimation and optimization

@article{Suresh2022CorrelatedQF,
  title={Correlated quantization for distributed mean estimation and optimization},
  author={Ananda Theertha Suresh and Ziteng Sun and Jae Hun Ro and Felix X. Yu},
  journal={ArXiv},
  year={2022},
  volume={abs/2203.04925}
}
We study the problem of distributed mean estimation and optimization under communication constraints. We propose a correlated quantization protocol whose error guarantee depends on the deviation of the data points rather than on their absolute range. The design does not require any prior knowledge of the concentration properties of the dataset, which previous works needed in order to obtain such a dependence. We show that applying the proposed protocol as a subroutine in distributed optimization algorithms leads to…
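
The abstract is truncated above. As an illustration of the core idea, below is a minimal sketch of one-bit correlated quantization for scalars in [0, 1]; the function names and the stratified-threshold construction are our own simplification, not necessarily the paper's exact protocol. Clients draw their quantization thresholds from a shared stratified grid instead of independently, so each report stays unbiased while the quantization errors largely cancel in the average.

    import numpy as np

    def independent_one_bit_mean(x, rng):
        # Baseline: client i sends the single bit 1{x_i > U_i} with an
        # independent U_i ~ Unif[0,1]; each bit is unbiased for x_i, but the
        # per-client noise adds up and scales with the full [0,1] range.
        return (x > rng.random(len(x))).mean()

    def correlated_one_bit_mean(x, rng):
        # Correlated variant: thresholds are stratified over [0,1] through a
        # shared random permutation, so each theta_i is still marginally
        # Unif[0,1] (the estimate stays unbiased), but the thresholds cover
        # [0,1] evenly and the quantization errors largely cancel on average.
        n = len(x)
        strata = rng.permutation(n)            # shared randomness across clients
        thetas = (strata + rng.random(n)) / n  # one threshold per stratum [k/n, (k+1)/n)
        return (x > thetas).mean()

    rng = np.random.default_rng(0)
    x = 0.5 + 0.01 * rng.standard_normal(1000)   # tightly concentrated inputs
    print(independent_one_bit_mean(x, rng))      # error driven by the [0,1] range
    print(correlated_one_bit_mean(x, rng))       # error driven by the data's spread

With concentrated data, the stratified thresholds put nearly the same number of thresholds below every data point, which is why the averaged error tracks the deviation of the data rather than its absolute range.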

Citations

QUIC-FL: Quick Unbiased Compression for Federated Learning
TLDR
This paper proposes QUIC-FL, a distributed mean estimation (DME) algorithm that is unbiased, offers fast aggregation, and is competitive in accuracy with the most accurate (but slow-aggregation) DME techniques.

References

Distributed Mean Estimation with Limited Communication
TLDR
This work shows that applying a structured random rotation before quantization, together with a better coding strategy, further reduces the error to O(1/n), and that the latter coding strategy is minimax-optimal up to a constant, i.e., it achieves the best MSE for a given communication cost.
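
As a sketch of the rotate-then-quantize recipe this summary refers to, the following applies a randomized Hadamard rotation followed by one-bit stochastic quantization; the orthonormal FWHT and the min/max quantization grid are illustrative choices, not the paper's exact coding scheme, and d is assumed to be a power of two.

    import numpy as np

    def fwht(v):
        # Orthonormal fast Walsh-Hadamard transform; len(v) must be a power
        # of two. It is self-inverse, so it also serves as the decoder.
        v = v.copy()
        h = 1
        while h < len(v):
            for i in range(0, len(v), 2 * h):
                a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
                v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
            h *= 2
        return v / np.sqrt(len(v))

    def rotate_quantize(x, signs, rng):
        # Structured random rotation R = H diag(signs) / sqrt(d), then 1-bit
        # stochastic quantization of each rotated coordinate to {lo, hi}.
        z = fwht(signs * x)
        lo, hi = z.min(), z.max()            # two floats sent alongside the bits
        p = (z - lo) / (hi - lo)
        bits = rng.random(len(z)) < p        # E[quantized value] = z (unbiased)
        return signs * fwht(lo + bits * (hi - lo))   # apply R^{-1} = diag(signs) H / sqrt(d)

    rng = np.random.default_rng(1)
    d = 256
    signs = rng.choice([-1.0, 1.0], size=d)  # shared between sender and receiver
    x = rng.standard_normal(d)
    print(np.linalg.norm(rotate_quantize(x, signs, rng) - x) / np.linalg.norm(x))

The rotation spreads the vector's energy across coordinates, shrinking the hi - lo range that the one-bit grid must cover and hence the quantization error.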
New Bounds For Distributed Mean Estimation and Variance Reduction
TLDR
This paper provides a quantization method that allows distributed mean estimation with solution quality depending only on the distance between the inputs, not on their norm, and shows an analogous result for distributed variance reduction.
Wyner-Ziv Estimators: Efficient Distributed Mean Estimation with Side Information
TLDR
Without any probabilistic assumptions on the underlying data, this work studies distributed mean estimation where the server has access to side information and proposes Wyner-Ziv estimators, which are communication- and computation-efficient and near-optimal when an upper bound on the distance between the side information and the data is known.
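
One classical way to realize such a side-information scheme is modulo (lattice-style) quantization, sketched below under the assumption that the server's side information y is within a known distance of the data x. This is an illustrative textbook construction, not necessarily the paper's exact Wyner-Ziv estimator.

    import numpy as np

    def wz_encode(x, delta, M):
        # Client: quantize to a grid of spacing delta and send only the grid
        # index modulo M, i.e. log2(M) bits per coordinate regardless of |x|.
        return np.round(x / delta).astype(int) % M

    def wz_decode(q, y, delta, M):
        # Server: among all grid points whose index is congruent to q mod M,
        # return the one closest to the side information y. This recovers the
        # client's grid point whenever |x - y| is (roughly) below M*delta/2.
        base = np.round(y / delta).astype(int)
        offset = (q - base) % M
        offset = np.where(offset > M // 2, offset - M, offset)  # center to (-M/2, M/2]
        return (base + offset) * delta

    rng = np.random.default_rng(2)
    x = rng.standard_normal(8)
    y = x + 0.05 * rng.standard_normal(8)   # server's side information, close to x
    delta, M = 0.01, 64                     # grid spacing; 6 bits per coordinate
    xhat = wz_decode(wz_encode(x, delta, M), y, delta, M)
    print(np.max(np.abs(xhat - x)))         # at most delta / 2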
DRIVE: One-bit Distributed Mean Estimation
We consider the problem where n clients transmit d-dimensional real-valued vectors using d(1 + o(1)) bits each, in a manner that allows the receiver to approximately reconstruct their mean. Such…
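
A d(1 + o(1))-bit budget suggests roughly one bit per coordinate plus lower-order overhead. Below is a sketch of the general rotate-then-send-signs recipe: rotate with a shared random orthogonal matrix, then transmit the d sign bits and a single scale. The least-squares scale s = ||z||_1 / d is our illustrative choice and may differ from DRIVE's exact estimator.

    import numpy as np

    def one_bit_encode(x, R):
        # Rotate, then keep only the signs plus one scale. The scale
        # s = ||z||_1 / d is the least-squares fit of s * sign(z) to z.
        z = R @ x
        return np.sign(z), np.abs(z).mean()   # d sign bits + one float

    def one_bit_decode(signs, scale, R):
        # Undo the orthogonal rotation on the scaled sign vector.
        return R.T @ (scale * signs)

    rng = np.random.default_rng(3)
    d = 128
    R, _ = np.linalg.qr(rng.standard_normal((d, d)))   # shared random rotation
    x = rng.standard_normal(d)
    xhat = one_bit_decode(*one_bit_encode(x, R), R)
    print(np.linalg.norm(xhat - x) / np.linalg.norm(x))  # about sqrt(1 - 2/pi) ~ 0.6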
On Communication Cost of Distributed Statistical Estimation and Dimensionality
TLDR
It is conjectured that the tradeoff between communication and squared loss demonstrated by this protocol is essentially optimal up to a logarithmic factor, and the study of strong lower bounds in the general setting is initiated.
Distributed Estimation with Multiple Samples per User: Sharp Rates and Phase Transition
TLDR
A tight characterization (up to logarithmic factors) of the error rate as a function of the number of samples per user m, the domain size, and the number of users is obtained under most regimes of interest.
Distributed Gaussian Mean Estimation under Communication Constraints: Optimal Rates and Communication-Efficient Algorithms
TLDR
Although optimal estimation of a Gaussian mean is relatively simple in the conventional setting, it is quite involved under communication constraints, both in the design of the optimal procedure and in the lower bound argument.
Communication lower bounds for statistical estimation problems via a distributed data processing inequality
TLDR
A distributed data processing inequality is proved as a generalization of the usual data processing inequalities; it may be of independent interest and useful for other problems.
Information-theoretic lower bounds for distributed statistical estimation with communication constraints
TLDR
Lower bounds on minimax risks for distributed statistical estimation under a communication budget are established for several problems, including various types of location models, as well as for parameter estimation in regression models.
Permutation Compressors for Provably Faster Distributed Nonconvex Optimization
TLDR
The theory of MARINA is extended to support a much wider class of potentially correlated compressors, reaching beyond the classical setting of independent compressors, and a new quantity, coined Hessian variance, allows the original analysis of MARINA to be significantly refined.
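
Below is a sketch of the permutation-compressor (PermK-style) idea from that paper, assuming for simplicity that the dimension d is divisible by the number of clients n: a shared random permutation partitions the coordinates into disjoint blocks, each client sends only its own block scaled by n, and the server's average is an unbiased estimate of the mean whose error depends on how much the clients disagree.

    import numpy as np

    def permk_compress(xs, rng):
        # xs: (n, d) matrix of client vectors, with d divisible by n.
        # A shared random permutation splits the d coordinates into n disjoint
        # blocks; client i transmits only its block, scaled by n so that the
        # server-side average remains unbiased for the true mean.
        n, d = xs.shape
        blocks = rng.permutation(d).reshape(n, d // n)
        out = np.zeros_like(xs)
        for i in range(n):
            out[i, blocks[i]] = n * xs[i, blocks[i]]
        return out

    rng = np.random.default_rng(4)
    n, d = 4, 16
    xs = rng.standard_normal((n, d)) + 5.0        # similar clients, small deviation
    est = permk_compress(xs, rng).mean(axis=0)    # server averages the sparse messages
    print(np.linalg.norm(est - xs.mean(axis=0)))  # small when clients agree closely

Because the blocks are disjoint, the per-client compression errors are correlated by construction rather than independent, which is what the extended MARINA analysis exploits.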
...