Corpus ID: 236429032

Device Scheduling and Update Aggregation Policies for Asynchronous Federated Learning

By Chung-Hsuan Hu, Zheng Chen, and Erik G. Larsson
Federated Learning (FL) is a recently emerged decentralized machine learning (ML) framework that combines on-device local training with server-based model synchronization to train a centralized ML model over distributed nodes. In this paper, we propose an asynchronous FL framework with periodic aggregation to eliminate the straggler issue in FL systems. For the proposed model, we investigate several device scheduling and update aggregation policies and compare their performances when the devices…
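The core idea in the abstract — the server aggregates at fixed periods whatever updates have arrived, rather than waiting for every device — can be illustrated with a toy simulation. This is a minimal sketch, not the paper's algorithm: the `Device` class, the per-period completion probability `speed`, and the scalar model are all illustrative assumptions.

```python
import random
from statistics import mean

class Device:
    """Toy device: finishes local training with probability `speed` each
    aggregation period, simulating heterogeneous (straggling) compute."""
    def __init__(self, data_mean, speed, rng):
        self.data_mean, self.speed, self.rng = data_mean, speed, rng

    def try_update(self, global_model):
        if self.rng.random() > self.speed:
            return None  # straggler: no update ready this period
        # One local gradient step toward this device's optimum.
        return 0.1 * (self.data_mean - global_model)

def periodic_async_fl(devices, rounds=100):
    model = 0.0
    for _ in range(rounds):
        # The server aggregates whatever updates arrived in this period;
        # it never blocks on the slowest device.
        updates = [u for d in devices
                   if (u := d.try_update(model)) is not None]
        if updates:
            model += mean(updates)
    return model

rng = random.Random(0)
devs = [Device(2.0, s, rng) for s in (0.9, 0.5, 0.2)]
print(round(periodic_async_fl(devs), 2))  # approaches the data mean of 2.0
```

Because slow devices simply miss a period instead of stalling it, wall-clock time per round is bounded by the period length, at the cost of aggregating fewer (and possibly staler) updates.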


Asynchronous Federated Learning on Heterogeneous Devices: A Survey
  • Chenhao Xu, Youyang Qu, Yong Xiang, Longxiang Gao
  • Computer Science
  • ArXiv
  • 2021
This survey comprehensively analyzes and summarizes existing variants of AFL according to a novel classification mechanism, covering device heterogeneity, data heterogeneity, privacy and security on heterogeneous devices, and applications on heterogeneous devices.


Age-Based Scheduling Policy for Federated Learning in Mobile Edge Networks
This paper proposes a scheduling policy that jointly accounts for the staleness of the received parameters and the instantaneous channel qualities to improve the running efficiency of FL.
FedAT: A Communication-Efficient Federated Learning Method with Asynchronous Tiers under Non-IID Data
FedAT synergistically combines synchronous intra-tier training and asynchronous cross-tier training through tiering, which minimizes the straggler effect with improved convergence speed and test accuracy, and compresses the uplink and downlink communications using an efficient, polyline-encoding-based compression algorithm.
Scheduling Policies for Federated Learning in Wireless Networks
An analytical model is developed to characterize the performance of federated learning in wireless networks; it shows that running FL with proportional fair (PF) scheduling outperforms random scheduling (RS) and round robin (RR) when the network operates under a high signal-to-interference-plus-noise ratio (SINR) threshold, while RR is preferable when the SINR threshold is low.
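The three policies compared in that paper can be sketched as selection rules over device channel rates. This is an illustrative sketch only; the function names and the simple rate/average-rate PF metric are assumptions, not the paper's exact formulation.

```python
import random

def random_scheduling(n, k, rng):
    """RS: pick k of n devices uniformly at random."""
    return rng.sample(range(n), k)

def round_robin(n, k, round_idx):
    """RR: cycle through devices in fixed groups of k."""
    start = (round_idx * k) % n
    return [(start + i) % n for i in range(k)]

def proportional_fair(rates, avg_rates, k):
    """PF: favor devices whose instantaneous rate is high relative to
    their long-term average, trading throughput against fairness."""
    scores = [r / max(a, 1e-9) for r, a in zip(rates, avg_rates)]
    return sorted(range(len(rates)), key=lambda i: scores[i], reverse=True)[:k]

rates = [3.0, 1.0, 2.0, 0.5]   # instantaneous channel rates
avg   = [2.0, 1.0, 4.0, 0.5]   # long-term average rates
print(proportional_fair(rates, avg, 2))  # [0, 1]: best rate-to-average ratios
```

RS and RR ignore channel state entirely, which is why their relative merit flips with the SINR threshold: RR guarantees every device eventually participates, which matters most when per-round success is unreliable.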
Convergence of Update Aware Device Scheduling for Federated Learning at the Wireless Edge
This work designs novel scheduling and resource allocation policies that decide which subset of devices transmits in each round, and how resources are allocated among the participating devices, based not only on their channel conditions but also on the significance of their local model updates.
Federated Learning in Unreliable and Resource-Constrained Cellular Wireless Networks
This paper proposes a federated learning algorithm suitable for cellular wireless networks, proves its convergence, and provides a sub-optimal scheduling policy that improves the convergence rate.
Federated Optimization: Distributed Machine Learning for On-Device Intelligence
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes.
Asynchronous Federated Optimization
It is proved that the proposed asynchronous federated optimization algorithm has near-linear convergence to a global optimum, for both strongly and non-strongly convex problems, as well as a restricted family of non-convex problems.
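The server-side rule in asynchronous federated optimization mixes each incoming local model into the global one with a weight that shrinks as the update grows stale. A minimal sketch of that mixing step, assuming a polynomial staleness decay (one illustrative choice of decay function, not necessarily the paper's):

```python
def mix(global_w, local_w, alpha=0.6, staleness=0, a=0.5):
    """Staleness-weighted server update: the mixing weight alpha_t decays
    with the staleness of the incoming local model, so updates computed
    against an old global model move it less."""
    alpha_t = alpha / (1.0 + staleness) ** a
    return [(1 - alpha_t) * g + alpha_t * l
            for g, l in zip(global_w, local_w)]

fresh = mix([0.0, 0.0], [1.0, 1.0], staleness=0)
stale = mix([0.0, 0.0], [1.0, 1.0], staleness=8)
print(fresh, stale)  # the stale update moves the global model less
```

With `staleness=0` the weight is just `alpha`; as staleness grows, `alpha_t` shrinks, which is what keeps very late arrivals from dragging the global model backward.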
Federated Learning: A Signal Processing Perspective
This article presents a formulation of the federated learning paradigm from a signal processing perspective, surveys a set of candidate approaches for tackling its unique challenges, and provides guidelines for the design and adaptation of signal processing and communication methods to facilitate federated learning at large scale.
On the Convergence of FedAvg on Non-IID Data
This paper analyzes the convergence of Federated Averaging on non-iid data and establishes a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGD iterations.
Staleness-Aware Async-SGD for Distributed Deep Learning
This paper proposes a variant of the ASGD algorithm in which the learning rate is modulated according to the gradient staleness, and provides theoretical guarantees for the convergence of this algorithm.
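The staleness modulation described above can be sketched in a few lines. This is an illustrative sketch under an assumed `1/tau` scaling (inverse-staleness decay is the rule commonly associated with this line of work); the function names are hypothetical.

```python
def staleness_aware_lr(base_lr, staleness):
    """Scale the learning rate inversely with the gradient's staleness
    (number of global updates since the gradient was computed), so stale
    gradients move the model less."""
    return base_lr / max(staleness, 1)

def apply_update(weights, grad, base_lr, staleness):
    lr = staleness_aware_lr(base_lr, staleness)
    return [w - lr * g for w, g in zip(weights, grad)]

print(apply_update([1.0], [0.5], base_lr=0.1, staleness=1))  # fresh: full step
print(apply_update([1.0], [0.5], base_lr=0.1, staleness=5))  # stale: 1/5 step
```

The effect is the same one the staleness-weighted aggregation policies above aim for: old information still contributes, but with influence proportional to how current it is.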