Corpus ID: 236428457

Decentralized Federated Learning: Balancing Communication and Computing Costs

@article{Liu2021DecentralizedFL,
  title={Decentralized Federated Learning: Balancing Communication and Computing Costs},
  author={Wei Liu and Li Chen and Wenyi Zhang},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.12048}
}
Decentralized federated learning (DFL) is a powerful framework for distributed machine learning, and decentralized stochastic gradient descent (SGD) is a driving engine for DFL. The performance of decentralized SGD is jointly influenced by communication efficiency and convergence rate. In this paper, we propose a general decentralized federated learning framework to strike a balance between communication efficiency and convergence performance. The proposed framework performs both multiple local…
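The abstract is cut off, but it describes a scheme that mixes multiple local SGD steps with decentralized communication over a network topology. Below is a minimal sketch under that reading, with illustrative names (`decentralized_round`, `local_steps`, mixing matrix `W`); it is not the paper's exact algorithm.

```python
import numpy as np

def decentralized_round(params, W, grad_fn, local_steps=5, lr=0.05):
    """One round of a generic decentralized-SGD scheme: several local SGD steps
    per node, followed by one gossip averaging step over the network topology.

    params      : (n_nodes, dim) array, row i holds node i's model
    W           : (n_nodes, n_nodes) mixing matrix for the communication graph
    grad_fn     : grad_fn(i, w) -> stochastic gradient of node i's local loss at w
    local_steps : number of local SGD steps between communications
    """
    for i in range(params.shape[0]):
        for _ in range(local_steps):
            params[i] -= lr * grad_fn(i, params[i])   # local computation
    return W @ params                                  # one communication (gossip) step
```

Increasing `local_steps` reduces how often nodes communicate at the cost of more local computation, which is the communication/computing balance the title refers to.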

References

SHOWING 1-10 OF 41 REFERENCES
Accelerating Federated Learning via Momentum Gradient Descent
TLDR
This article incorporates into FL a momentum term that relates to the last iteration, yielding momentum federated learning (MFL); it establishes global convergence properties of MFL, derives an upper bound on the MFL convergence rate, and provides conditions under which MFL accelerates convergence.
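As a rough illustration of the kind of momentum-based local update this summary refers to, here is a minimal sketch with illustrative names (`local_momentum_step`, `beta`); the actual MFL update rule is specified in the referenced paper.

```python
import numpy as np

def local_momentum_step(w, velocity, grad, lr=0.1, beta=0.9):
    """One momentum gradient-descent step on a client's local model.

    w        : current model parameters (np.ndarray)
    velocity : momentum buffer carried over from the previous iteration
    grad     : gradient of the local loss at w
    lr, beta : learning rate and momentum coefficient (illustrative values)
    """
    velocity = beta * velocity + grad   # accumulate momentum from the last iteration
    w = w - lr * velocity               # parameter update
    return w, velocity
```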
Robust Federated Learning With Noisy Communication
TLDR
This paper proposes a robust design for federated learning that mitigates the effect of noisy communication, and uses a sampling-based successive convex approximation algorithm to develop a feasible training scheme that handles both the unavailability of the noise maxima/minima and the non-convexity of the objective function.
Decentralized Federated Learning: A Segmented Gossip Approach
TLDR
A segmented gossip approach is proposed that makes full use of node-to-node bandwidth while retaining good training convergence; experimental results show that the training time can be greatly reduced compared with centralized federated learning.
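A toy sketch of the segmentation idea described above, assuming models are split into contiguous segments and each segment is averaged with a small random set of peers; the names (`segmented_gossip_exchange`, `peers_per_segment`) are illustrative, and the actual protocol (segment scheduling, pulling, etc.) is detailed in the paper.

```python
import numpy as np

def segmented_gossip_exchange(models, n_segments=4, peers_per_segment=2, seed=0):
    """Toy segmented-gossip aggregation: each node splits its model into segments
    and averages each segment with a few randomly chosen peers, so no single link
    has to carry the full model."""
    rng = np.random.default_rng(seed)
    n_nodes, dim = models.shape
    out = models.copy()
    bounds = np.linspace(0, dim, n_segments + 1, dtype=int)
    for i in range(n_nodes):
        candidates = [j for j in range(n_nodes) if j != i]
        for s in range(n_segments):
            lo, hi = bounds[s], bounds[s + 1]
            peers = rng.choice(candidates, size=peers_per_segment, replace=False)
            group = np.vstack([models[peers, lo:hi], models[i:i + 1, lo:hi]])
            out[i, lo:hi] = group.mean(axis=0)   # average this segment with the chosen peers
    return out
```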
Collaborative Deep Learning in Fixed Topology Networks
TLDR
This paper presents a new consensus-based distributed SGD algorithm (CDSGD), along with its momentum variant (CDMSGD), for collaborative deep learning over fixed-topology networks, enabling both data parallelization and decentralized computation.
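A minimal sketch of a consensus-based distributed SGD step of the kind described above, assuming a doubly stochastic mixing matrix `W` that matches the fixed topology; this is a generic form, not necessarily the exact CDSGD/CDMSGD update.

```python
import numpy as np

def consensus_sgd_step(params, W, grads, lr=0.05):
    """One consensus-based distributed SGD step over a fixed topology.

    params : (n_agents, dim) array, row i holds agent i's parameters
    W      : (n_agents, n_agents) mixing matrix for the network (assumed doubly stochastic)
    grads  : (n_agents, dim) array of local stochastic gradients
    """
    mixed = W @ params            # consensus step: average with neighbors
    return mixed - lr * grads     # local gradient step
```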
Optimizing Federated Learning on Non-IID Data with Reinforcement Learning
TLDR
Proposes Favor, an experience-driven control framework that intelligently chooses which client devices participate in each round of federated learning, counterbalancing the bias introduced by non-IID data and speeding up convergence.
Adaptive Federated Learning in Resource Constrained Edge Computing Systems
TLDR
This paper analyzes the convergence bound of distributed gradient descent from a theoretical point of view and proposes a control algorithm that determines the best tradeoff between local updates and global parameter aggregation so as to minimize the loss function under a given resource budget.
On the Convergence of FedAvg on Non-IID Data
TLDR
This paper analyzes the convergence of Federated Averaging on non-IID data and establishes a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGD iterations.
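Schematically, the reported rate has the following form under the standard assumptions (each local objective strongly convex and smooth); the exact constants and participation conditions are given in the paper and are not reproduced here.

```latex
% Schematic form of the O(1/T) rate for FedAvg on non-IID data:
% F = \sum_k p_k F_k is the global objective, each F_k is \mu-strongly convex and L-smooth,
% and T is the total number of SGD iterations.
\mathbb{E}\big[F(\bar{w}_T)\big] - F^{\ast} \;\le\; \mathcal{O}\!\left(\frac{1}{T}\right)
```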
Federated Optimization in Heterogeneous Networks
TLDR
This work introduces FedProx, a framework to tackle heterogeneity in federated networks, and provides convergence guarantees for this framework when learning over data from non-identical distributions (statistical heterogeneity) while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.
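FedProx is commonly described as adding a proximal term to each client's local objective; a minimal sketch of that term, with an illustrative coefficient `mu` and helper name:

```python
import numpy as np

def fedprox_local_loss(local_loss, w, w_global, mu=0.01):
    """Local objective with a FedProx-style proximal term.

    local_loss : value of the client's own loss F_k(w)
    w, w_global: current local parameters and the latest global model (flat arrays)
    mu         : proximal coefficient (illustrative value)
    """
    prox = 0.5 * mu * np.sum((w - w_global) ** 2)   # penalize drift from the global model
    return local_loss + prox
```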
Federated Learning: Strategies for Improving Communication Efficiency
TLDR
Two ways to reduce the uplink communication costs are proposed: structured updates, where the user learns an update directly from a restricted space parametrized with a smaller number of variables (e.g., low-rank or a random mask); and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
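A toy illustration of a "sketched update" (random subsampling followed by a crude sign quantization); the paper's actual compressors (random rotations, richer quantizers) are more elaborate, and all names here are illustrative.

```python
import numpy as np

def sketched_update(update, keep_ratio=0.1, seed=0):
    """Toy sketched update: random subsampling plus sign quantization of a model update.

    update     : full model update as a flat array
    keep_ratio : fraction of coordinates kept by the random subsampling
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(update.shape) < keep_ratio        # random subsampling
    kept = update[mask]
    scale = np.abs(kept).mean() if kept.size else 0.0   # one shared magnitude
    compressed = np.zeros_like(update)
    compressed[mask] = np.sign(kept) * scale            # quantize kept values to +/- scale
    return compressed
```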
Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data
TLDR
Sparse ternary compression (STC) is proposed, a new compression framework specifically designed to meet the requirements of the federated learning environment; the authors advocate a paradigm shift in federated optimization toward high-frequency, low-bitwidth communication, particularly in bandwidth-constrained learning environments.
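A toy sketch of the sparse-ternary idea, assuming top-k magnitude sparsification with the kept values ternarized to a shared magnitude; error accumulation and the lossless coding used by STC are omitted.

```python
import numpy as np

def sparse_ternary_compress(update, k_ratio=0.01):
    """Toy sparse ternary compression of a model update.

    Keeps the top-k largest-magnitude coordinates and maps them to {-mu, +mu},
    where mu is the mean magnitude of the kept values; everything else becomes 0.
    """
    flat = update.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]        # indices of the top-k magnitudes
    mu = np.abs(flat[idx]).mean()
    out = np.zeros_like(flat)
    out[idx] = np.sign(flat[idx]) * mu                  # ternary values: -mu, 0, +mu
    return out.reshape(update.shape)
```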