Corpus ID: 236429032

Device Scheduling and Update Aggregation Policies for Asynchronous Federated Learning

@article{Hu2021DeviceSA,
  title={Device Scheduling and Update Aggregation Policies for Asynchronous Federated Learning},
  author={Chung-Hsuan Hu and Zheng Chen and Erik G. Larsson},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.11415}
}
Federated Learning (FL) is a recently emerged decentralized machine learning (ML) framework that combines on-device local training with server-based model synchronization to train a centralized ML model over distributed nodes. In this paper, we propose an asynchronous FL framework with periodic aggregation to eliminate the straggler issue in FL systems. For the proposed model, we investigate several device scheduling and update aggregation policies and compare their performances when the devices…
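As a rough illustration of the periodic-aggregation idea in the abstract (a sketch under assumed details, not the authors' implementation): devices push updates whenever they finish local training, and the server aggregates whatever has arrived at fixed intervals, so no round waits on stragglers. The names (`Update`, `periodic_server`) and the inverse-staleness weighting are illustrative assumptions; the paper compares several aggregation policies.

```python
import queue
import numpy as np

class Update:
    def __init__(self, delta, round_trained):
        self.delta = delta                    # local model change from one device
        self.round_trained = round_trained    # global round the device started from

def periodic_server(inbox: "queue.Queue[Update]", model: np.ndarray,
                    num_rounds: int, lr: float = 1.0) -> np.ndarray:
    for t in range(num_rounds):
        # ... wait for the aggregation period to elapse ...
        arrived = []
        while not inbox.empty():              # drain everything received this period
            arrived.append(inbox.get_nowait())
        if not arrived:
            continue                          # nothing arrived; keep the model as-is
        # Down-weight stale updates (one simple assumed policy, not the paper's exact rule)
        weights = np.array([1.0 / (1 + t - u.round_trained) for u in arrived])
        weights /= weights.sum()
        aggregated = sum(w * u.delta for w, u in zip(weights, arrived))
        model = model + lr * aggregated
    return model
```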

Citations

Asynchronous Federated Learning on Heterogeneous Devices: A Survey
  • Chenhao Xu, Youyang Qu, Yong Xiang, Longxiang Gao
  • Computer Science
  • ArXiv
  • 2021
TLDR
This survey comprehensively analyzes and summarizes existing variants of AFL according to a novel classification mechanism, including device heterogeneity, data heterogeneity, privacy and security on heterogeneous devices, and applications on heterogeneous devices.

References

SHOWING 1-10 OF 17 REFERENCES
Age-Based Scheduling Policy for Federated Learning in Mobile Edge Networks
TLDR
This paper proposes a scheduling policy that jointly accounts for the staleness of the received parameters and the instantaneous channel qualities to improve the running efficiency of FL.
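A toy version of an age-based selection rule of the kind summarized above: rank devices by a score that grows with the staleness (age) of their last delivered update and with current channel quality, then schedule the top-k. The linear combination and the `beta` weight are assumptions for illustration, not the cited paper's exact metric.

```python
import numpy as np

def age_based_schedule(age: np.ndarray, channel_gain: np.ndarray,
                       k: int, beta: float = 0.5) -> np.ndarray:
    # Higher score = staler update and/or better channel right now
    score = beta * age + (1 - beta) * channel_gain
    return np.argsort(score)[-k:]   # indices of the k devices to schedule
```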
FedAT: A Communication-Efficient Federated Learning Method with Asynchronous Tiers under Non-IID Data
TLDR
FedAT synergistically combines synchronous intra-tier training and asynchronous cross-tier training through tiering, which minimizes the straggler effect with improved convergence speed and test accuracy, and compresses the uplink and downlink communications using an efficient, polyline-encoding-based compression algorithm.
Scheduling Policies for Federated Learning in Wireless Networks
TLDR
An analytical model is developed to characterize the performance of federated learning in wireless networks; it shows that running FL with proportional fair (PF) scheduling outperforms random scheduling (RS) and round robin (RR) if the network is operating under a high signal-to-interference-plus-noise ratio (SINR) threshold, while RR is preferable when the SINR threshold is low.
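For concreteness, here is an illustrative proportional-fair selection rule of the kind compared above: schedule the devices whose instantaneous rate is largest relative to their long-term average rate, then update the averages. The EWMA window `tc` and top-k selection are assumed parameters, not taken from the cited paper.

```python
import numpy as np

def pf_schedule(inst_rates: np.ndarray, avg_rates: np.ndarray,
                k: int, tc: float = 100.0):
    # PF metric: instantaneous rate normalized by long-term average rate
    metric = inst_rates / np.maximum(avg_rates, 1e-9)
    chosen = np.argsort(metric)[-k:]          # top-k PF metric
    # Scheduled devices accrue their instantaneous rate; others accrue zero
    served = np.zeros_like(inst_rates)
    served[chosen] = inst_rates[chosen]
    avg_rates = (1 - 1 / tc) * avg_rates + (1 / tc) * served
    return chosen, avg_rates
```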
Convergence of Update Aware Device Scheduling for Federated Learning at the Wireless Edge
TLDR
This work designs novel scheduling and resource allocation policies that decide on the subset of the devices to transmit at each round, and how the resources should be allocated among the participating devices, based not only on their channel conditions but also on the significance of their local model updates.
Federated Learning in Unreliable and Resource-Constrained Cellular Wireless Networks
TLDR
This paper proposes a federated learning algorithm suitable for cellular wireless networks, proves its convergence, and provides a sub-optimal scheduling policy that improves the convergence rate.
Federated Optimization: Distributed Machine Learning for On-Device Intelligence
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number…
Asynchronous Federated Optimization
TLDR
It is proved that the proposed asynchronous federated optimization algorithm has near-linear convergence to a global optimum, for both strongly and non-strongly convex problems, as well as a restricted family of non-convex problems.
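A sketch of the server-side mixing rule commonly used in asynchronous federated optimization of this kind: each arriving local model is blended into the global model with a weight that decays with its staleness. The polynomial staleness function and parameter names follow common formulations and are assumptions, not lifted from this paper.

```python
import numpy as np

def staleness_weight(alpha: float, staleness: int, a: float = 0.5) -> float:
    """Polynomial decay: fresher updates get weights closer to alpha."""
    return alpha * (1.0 + staleness) ** (-a)

def async_merge(x_global: np.ndarray, x_local: np.ndarray,
                t_now: int, t_sent: int, alpha: float = 0.6) -> np.ndarray:
    # Blend the incoming local model into the global model
    a_t = staleness_weight(alpha, t_now - t_sent)
    return (1.0 - a_t) * x_global + a_t * x_local
```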
Federated Learning: A Signal Processing Perspective
TLDR
This article presents a formulation of the federated learning paradigm from a signal processing perspective, surveys a set of candidate approaches for tackling its unique challenges, and provides guidelines for the design and adaptation of signal processing and communication methods to facilitate federated learning at large scale.
On the Convergence of FedAvg on Non-IID Data
TLDR
This paper analyzes the convergence of Federated Averaging on non-IID data and establishes a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the total number of SGD iterations.
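For context, a bare-bones FedAvg round (the algorithm whose non-IID convergence the paper analyzes): each sampled client runs a few local SGD steps, and the server averages the resulting models weighted by local dataset size. The client tuple layout and `grad_fn` are hypothetical placeholders.

```python
import numpy as np

def fedavg_round(global_model: np.ndarray, clients, local_steps: int = 5,
                 lr: float = 0.01) -> np.ndarray:
    new_models, sizes = [], []
    for data, grad_fn, n_samples in clients:   # hypothetical client tuple
        w = global_model.copy()
        for _ in range(local_steps):
            w -= lr * grad_fn(w, data)         # one local SGD step
        new_models.append(w)
        sizes.append(n_samples)
    # Weight each client's model by its share of the total data
    weights = np.array(sizes, dtype=float) / sum(sizes)
    return sum(p * m for p, m in zip(weights, new_models))
```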
Staleness-Aware Async-SGD for Distributed Deep Learning
TLDR
This paper proposes a variant of the ASGD algorithm in which the learning rate is modulated according to the gradient staleness and provides theoretical guarantees for convergence of this algorithm.
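The staleness-modulated step described above, in miniature: the server scales the learning rate down by the gradient's staleness before applying it. The inverse-staleness rule follows the common formulation (step size proportional to eta/tau); treat it as a sketch rather than the paper's exact schedule.

```python
import numpy as np

def apply_stale_gradient(w: np.ndarray, grad: np.ndarray,
                         base_lr: float, staleness: int) -> np.ndarray:
    effective_lr = base_lr / max(1, staleness)  # older gradient -> smaller step
    return w - effective_lr * grad
```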