Corpus ID: 14999259

Federated Learning: Strategies for Improving Communication Efficiency

@article{Konecn2016FederatedLS,
  title={Federated Learning: Strategies for Improving Communication Efficiency},
  author={Jakub Kone{\v{c}}n{\'y} and H. B. McMahan and Felix X. Yu and Peter Richt{\'a}rik and Ananda Theertha Suresh and Dave Bacon},
  journal={ArXiv},
  year={2016},
  volume={abs/1610.05492}
}
Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients, each with unreliable and relatively slow network connections. In this paper, we propose two ways to reduce the uplink communication costs: structured updates, where we directly learn an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, where we learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling before sending it to the server.
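
The two uplink-compression ideas can be pictured with a short sketch. Below is a minimal NumPy sketch of a random-mask structured update and of a subsampled, 1-bit-quantized sketched update; the dimensions, mask density, seeds, and the specific quantizer are illustrative assumptions, not the paper's exact protocol (which also uses random rotations).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                                   # parameters in one layer's update
full_update = rng.normal(size=d)

# Structured update (random mask): client learns/sends only a fixed sparse pattern.
mask_seed = 42                               # shared seed, so only the values are uploaded
mask = np.random.default_rng(mask_seed).random(d) < 0.1
structured_values = full_update[mask]        # ~10% of the coordinates

# Sketched update: compute the full update, then subsample and 1-bit quantize it.
sub_idx = rng.choice(d, size=d // 10, replace=False)
sub = full_update[sub_idx]
lo_v, hi_v = sub.min(), sub.max()
prob_hi = (sub - lo_v) / (hi_v - lo_v)       # stochastic rounding probabilities
bits = rng.random(sub.size) < prob_hi
dequantized = np.where(bits, hi_v, lo_v)     # unbiased: E[value] = original value
reconstruction = np.zeros(d)
reconstruction[sub_idx] = dequantized * (d / sub_idx.size)   # unbias the subsampling
```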

Citations

FEDZIP: A Compression Framework for Communication-Efficient Federated Learning

This work proposes FedZip, a novel framework that significantly decreases the size of updates when transferring weights of the deep learning model between clients and the server; it outperforms state-of-the-art compression frameworks and reaches compression rates of up to 1085×.
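
As a rough illustration of this style of update compression (FedZip's actual pipeline combines sparsification, clustering-based quantization, and encoding), here is a hedged NumPy sketch using top-k sparsification with a small quantile codebook; the k ratio, codebook size, and helper names are assumptions, not FedZip's implementation.

```python
import numpy as np

def compress_update(update, k_ratio=0.01, n_centers=3):
    """Keep the k largest-magnitude entries and map them to a tiny codebook."""
    flat = update.ravel()
    k = max(1, int(k_ratio * flat.size))
    top_idx = np.argpartition(np.abs(flat), -k)[-k:]
    values = flat[top_idx]
    centers = np.quantile(values, np.linspace(0.0, 1.0, n_centers))  # stand-in for k-means
    codes = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
    return top_idx.astype(np.uint32), codes.astype(np.uint8), centers

def decompress_update(shape, top_idx, codes, centers):
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[top_idx] = centers[codes]
    return flat.reshape(shape)

w_update = np.random.default_rng(1).normal(size=(256, 128)).astype(np.float32)
idx, codes, centers = compress_update(w_update)
restored = decompress_update(w_update.shape, idx, codes, centers)
```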

Adaptive Federated Dropout: Improving Communication Efficiency and Generalization for Federated Learning

This paper proposes and studies Adaptive Federated Dropout (AFD), a novel technique for reducing the communication costs of federated learning that optimizes both server-client communication and computation costs by allowing clients to train locally on a selected subset of the global model.
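
A minimal sketch of the sub-model idea: the server ships only the weight rows/columns for a subset of hidden units and folds the client's updated sub-matrices back in. The layer sizes, the 50% keep rate, and the placeholder "local training" step are assumptions for illustration, not AFD's adaptive selection policy.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(784, 256))                    # input -> hidden
W2 = rng.normal(size=(256, 10))                     # hidden -> output

keep = np.sort(rng.choice(256, size=128, replace=False))   # hidden units kept this round
sub_W1, sub_W2 = W1[:, keep].copy(), W2[keep, :].copy()    # downlink: the sub-model only

# Placeholder for local training of the smaller model on the client.
sub_W1 += 0.01 * rng.normal(size=sub_W1.shape)
sub_W2 += 0.01 * rng.normal(size=sub_W2.shape)

# Server folds the updated sub-model back into the full model.
W1[:, keep] = sub_W1
W2[keep, :] = sub_W2
```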

FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization

FedPAQ, a communication-efficient Federated Learning method with Periodic Averaging and Quantization, is presented; it achieves near-optimal theoretical guarantees for strongly convex and non-convex loss functions, and the communication-computation tradeoff it provides is demonstrated empirically.
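
The periodic-averaging-plus-quantization pattern can be sketched as follows; the toy quadratic objectives, client count, and the simple unbiased stochastic quantizer are assumptions rather than FedPAQ's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(v, levels=4):
    """Unbiased stochastic uniform quantizer over [-s, s]."""
    s = np.max(np.abs(v)) + 1e-12
    scaled = (v / s + 1.0) / 2.0 * (levels - 1)      # map to [0, levels - 1]
    low = np.floor(scaled)
    q = low + (rng.random(v.shape) < (scaled - low)) # randomized rounding up
    return (q / (levels - 1) * 2.0 - 1.0) * s

d, local_steps, lr = 50, 10, 0.1
targets = rng.normal(size=(8, d))                    # client i minimizes ||w - t_i||^2
w = np.zeros(d)

for _ in range(20):                                  # communication rounds
    deltas = []
    for t in targets:                                # each client (in parallel in reality)
        w_local = w.copy()
        for _ in range(local_steps):
            w_local -= lr * 2.0 * (w_local - t)      # local SGD steps
        deltas.append(quantize(w_local - w))         # uplink: quantized model difference
    w += np.mean(deltas, axis=0)                     # server: periodic averaging
```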

Federated Learning with Quantization Constraints

This work identifies the unique characteristics of conveying trained models over rate-constrained channels, characterizes a suitable quantization scheme for such setups, and shows that combining universal vector quantization methods with FL yields a decentralized training system that is both efficient and feasible.
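
As a scalar stand-in for the vector quantization schemes discussed here, the sketch below shows subtractive dithered quantization with a pseudo-random dither shared through a seed, which keeps the reconstruction error bounded and independent of the data; the step size and the shared-seed convention are assumptions.

```python
import numpy as np

def encode(x, step, seed):
    dither = np.random.default_rng(seed).uniform(-step / 2, step / 2, x.shape)
    return np.round((x + dither) / step).astype(np.int64)   # integer codes go uplink

def decode(codes, step, seed):
    dither = np.random.default_rng(seed).uniform(-step / 2, step / 2, codes.shape)
    return codes * step - dither                             # subtract the shared dither

x = np.random.default_rng(3).normal(size=1000)
codes = encode(x, step=0.1, seed=7)
x_hat = decode(codes, step=0.1, seed=7)
assert np.max(np.abs(x_hat - x)) <= 0.05 + 1e-9              # error bounded by step / 2
```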

Sparse Random Networks for Communication-Efficient Federated Learning

This work proposes a radically different approach that does not update the weights at all: it freezes the weights at their initial random values and learns how to sparsify the random network for the best performance.
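
A hedged sketch of the freeze-the-weights, learn-a-mask idea: weights stay at seed-reproducible random values and only a binary mask is uplinked. The score update shown is a placeholder for the paper's actual mask-training procedure, and the sparsity level and layer size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
weight_seed = 123
W = np.random.default_rng(weight_seed).normal(size=(512, 256))  # frozen random weights

scores = rng.normal(size=W.shape)                   # trainable importance scores
for _ in range(100):
    scores += 0.01 * rng.normal(size=W.shape)       # placeholder for real score gradients

k = W.size // 2                                     # keep the top half of the edges
threshold = np.partition(scores.ravel(), -k)[-k]
mask = scores >= threshold                          # this bit-mask is all that is uploaded

effective_W = W * mask                              # the sub-network actually evaluated
bits_per_weight = 1                                 # vs. 32 for sending float updates
```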

Intrinsic Gradient Compression for Federated Learning

This paper uses a correspondence between the notion of intrinsic dimension and gradient compressibility to derive a family of low-bandwidth optimization algorithms, which the authors call intrinsic gradient compression algorithms.
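
One way to picture gradient compression through a shared low-dimensional subspace is the sketch below: the client sends only the coefficients of its gradient under a random projection that both sides can regenerate from a seed. The dimensions and the transpose-based reconstruction are illustrative assumptions, not the paper's specific algorithms.

```python
import numpy as np

d, k, seed = 10_000, 64, 11
A = np.random.default_rng(seed).normal(size=(k, d)) / np.sqrt(k)   # regenerated from seed

grad = np.random.default_rng(5).normal(size=d)      # a client's local gradient
coeffs = A @ grad                                   # k numbers uplinked instead of d
grad_hat = A.T @ coeffs                             # server-side approximate reconstruction
```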

Communication-Efficient Federated Learning with Binary Neural Networks

A novel FL framework for training BNNs is introduced, in which clients upload only the binary parameters to the server, together with a novel parameter-updating scheme based on Maximum Likelihood (ML) estimation that preserves the performance of the BNN even without access to aggregated real-valued auxiliary parameters.
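
A simplified sketch of recovering real values from binary uploads: each client sends ±1 bits and the server forms a maximum-likelihood estimate under a stochastic-binarization model. That model, P(+1) = (1 + w) / 2 for w in [-1, 1], is an assumption chosen so the ML estimate has a closed form; it is not claimed to be the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, d = 20, 1000
w_true = rng.uniform(-1.0, 1.0, size=d)             # shared real-valued weights

# Each client stochastically binarizes and uploads only the resulting ±1 bits.
bits = np.where(rng.random((n_clients, d)) < (1.0 + w_true) / 2.0, 1.0, -1.0)

# Under the Bernoulli model P(+1) = (1 + w) / 2, the ML estimate of w is the
# (clipped) mean of the received bits.
w_hat = np.clip(bits.mean(axis=0), -1.0, 1.0)
print("mean absolute error:", np.mean(np.abs(w_hat - w_true)))
```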

Hierarchical Quantized Federated Learning: Convergence Analysis and System Design

This paper considers a Hierarchical Quantized Federated Learning (HQFL) system with one cloud server, several edge servers and many clients, adopting a communication-efficient training algorithm, Hier-Local-QSGD; it finds that, given a latency budget for the whole training process, there is an optimal parameter choice.
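
At a very high level, client-to-edge-to-cloud aggregation with quantization at both hops might look like the sketch below; the toy objective, single local step per round, 4-level quantizer, and group sizes are assumptions, not the Hier-Local-QSGD schedule analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(v, levels=4):
    s = np.max(np.abs(v)) + 1e-12
    grid = np.linspace(-s, s, levels)
    return grid[np.argmin(np.abs(v[:, None] - grid[None, :]), axis=1)]

d, n_edges, clients_per_edge, lr = 20, 3, 5, 0.1
targets = rng.normal(size=(n_edges, clients_per_edge, d))
w = np.zeros(d)

for _ in range(30):                                  # cloud rounds
    edge_deltas = []
    for e in range(n_edges):
        client_deltas = []
        for t in targets[e]:
            w_local = w - lr * 2.0 * (w - t)                     # one local step
            client_deltas.append(quantize(w_local - w))          # client -> edge hop
        edge_deltas.append(quantize(np.mean(client_deltas, axis=0)))  # edge -> cloud hop
    w += np.mean(edge_deltas, axis=0)                # cloud aggregation
```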

Federated Learning with Heterogeneous Quantization

This paper proposes FEDHQ: Federated Learning with Heterogeneous Quantization, a federated learning system with performance advantages over conventional FedAvg with standard equal weights, along with a heuristic scheme that assigns aggregation weights linearly proportional to the clients' quantization precision.
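
A small sketch of precision-aware aggregation in the spirit of the heuristic above: clients quantize with different bit-widths and the server weights them linearly in their precision. The bit-widths, the uniform quantizer, and the use of bit-width as the precision measure are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_quantize(v, bits):
    s = np.max(np.abs(v)) + 1e-12
    grid = np.linspace(-s, s, 2 ** bits)
    return grid[np.argmin(np.abs(v[:, None] - grid[None, :]), axis=1)]

true_update = rng.normal(size=100)
client_bits = np.array([2, 2, 4, 8])                 # heterogeneous precisions
uploads = [uniform_quantize(true_update + 0.05 * rng.normal(size=100), b)
           for b in client_bits]

weights = client_bits / client_bits.sum()            # linear in precision (heuristic)
fedhq_style = sum(w * u for w, u in zip(weights, uploads))
fedavg_style = np.mean(uploads, axis=0)              # equal-weight baseline
print(np.linalg.norm(fedhq_style - true_update),
      np.linalg.norm(fedavg_style - true_update))
```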

Communication-Efficient Federated Learning via Optimal Client Sampling

This work proposes a novel, simple and efficient way of updating the central model in communication-constrained settings: it determines the optimal client sampling policy by modeling the progression of clients' weights with an Ornstein-Uhlenbeck process.
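
The selective-transmission idea can be sketched as below: only the clients whose local changes are largest in a round upload, and the server rescales the partial sum. The simple norm-based selection and rescaling are illustrative stand-ins for the policy the paper derives from its Ornstein-Uhlenbeck model of weight progression.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, d, budget = 100, 50, 10                   # only 10 uplinks allowed per round
scales = rng.gamma(2.0, 1.0, size=(n_clients, 1))    # heterogeneous client progress
updates = scales * rng.normal(size=(n_clients, d))

norms = np.linalg.norm(updates, axis=1)
chosen = np.argsort(norms)[-budget:]                 # clients with the largest updates send
aggregate = updates[chosen].sum(axis=0) / n_clients  # keep the scale of a full average
```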
...

References

Showing 1-10 of 26 references

Federated Optimization: Distributed Machine Learning for On-Device Intelligence

We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes.

Federated Learning of Deep Networks using Model Averaging

This work presents a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise, and allows high-quality models to be trained in relatively few rounds of communication.

Communication-Efficient Learning of Deep Networks from Decentralized Data

This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
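
For reference, iterative model averaging reduces to a very short loop; the sketch below uses toy quadratic client objectives, a fixed learning rate, and data-size weighting as assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 10, 5
data_sizes = rng.integers(20, 200, size=n_clients)
targets = rng.normal(size=(n_clients, d))            # client i minimizes ||w - t_i||^2

w_global = np.zeros(d)
for _ in range(50):                                  # communication rounds
    local_models = []
    for t in targets:
        w_local = w_global.copy()
        for _ in range(5):                           # E local epochs of (full-batch) SGD
            w_local -= 0.1 * 2.0 * (w_local - t)
        local_models.append(w_local)
    weights = data_sizes / data_sizes.sum()          # weight by local data size
    w_global = np.sum(weights[:, None] * np.array(local_models), axis=0)
```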

Federated Optimization: Distributed Optimization Beyond the Datacenter

We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large number of nodes.

Large Scale Distributed Deep Networks

This paper considers the problem of training a deep network with billions of parameters using tens of thousands of CPU cores and develops two algorithms for large-scale distributed training, Downpour SGD and Sandblaster L-BFGS, which increase the scale and speed of deep network training.
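
A single-process toy simulation of the asynchronous parameter-server pattern behind Downpour SGD is sketched below; one-step staleness, the quadratic objective, and the replica count are assumptions made only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_replicas, lr = 20, 4, 0.05
target = rng.normal(size=d)
server_params = np.zeros(d)
stale_copies = [server_params.copy() for _ in range(n_replicas)]

for step in range(200):
    r = step % n_replicas                            # replicas take turns "arriving"
    grad = 2.0 * (stale_copies[r] - target)          # gradient at a stale parameter copy
    server_params -= lr * grad                       # server applies it without waiting
    stale_copies[r] = server_params.copy()           # the replica refetches parameters
```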

Revisiting Distributed Synchronous SGD

It is demonstrated that a third approach, synchronous optimization with backup workers, can avoid asynchronous noise while mitigating the effect of the worst stragglers; the approach is empirically validated and shown to converge faster and to better test accuracies.
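
The backup-workers idea reduces to launching N + b gradient tasks and averaging only the first N to finish; in the sketch below, the simulated compute times and the noisy-gradient model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, b, d = 8, 2, 100
true_grad = rng.normal(size=d)

finish_times = rng.exponential(1.0, size=N + b)      # heavy tail stands in for stragglers
grads = true_grad + 0.1 * rng.normal(size=(N + b, d))

fastest = np.argsort(finish_times)[:N]               # first N arrivals
step = grads[fastest].mean(axis=0)                   # late backup results are discarded
```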

Quantized incremental algorithms for distributed optimization

  • M. Rabbat and R. Nowak, IEEE Journal on Selected Areas in Communications, 2005
The main conclusion is that as the number of sensors in the network grows, in-network processing will always use less energy than a centralized algorithm, while maintaining a desired level of accuracy.

Adding vs. Averaging in Distributed Primal-Dual Optimization

A novel generalization of the recent communication-efficient primal-dual framework (COCOA) for distributed optimization is presented, which allows for additive combination of local updates to the global parameters at each iteration, whereas previous schemes with convergence guarantees only allow conservative averaging.
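
The distinction the title refers to can be shown in two lines: averaging the local updates (the conservative choice) versus adding them. The toy updates below are random; the point is only the aggregation rule, and the convergence claim for the additive choice rests on the cited framework's analysis, not on this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 4, 30
local_deltas = rng.normal(size=(K, d))               # one update per machine

w = np.zeros(d)
w_conservative = w + np.mean(local_deltas, axis=0)   # averaging (gamma = 1/K)
w_additive = w + np.sum(local_deltas, axis=0)        # adding (gamma = 1)
```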

AIDE: Fast and Communication Efficient Distributed Optimization

An accelerated variant of the DANE algorithm, called AIDE, is proposed that not only matches the communication lower bounds but can also be implemented using a purely first-order oracle.

Distributed optimization with arbitrary local solvers

This work presents a framework for distributed optimization that allows the flexibility of using arbitrary solvers on each (single) machine locally while maintaining competitive performance against other state-of-the-art special-purpose distributed methods.