Accelerating Federated Edge Learning via Optimized Probabilistic Device Scheduling

@article{Zhang2021AcceleratingFE,
  title={Accelerating Federated Edge Learning via Optimized Probabilistic Device Scheduling},
  author={Maojun Zhang and Guangxu Zhu and Shuai Wang and Jiamo Jiang and Caijun Zhong and Shuguang Cui},
  journal={2021 IEEE 22nd International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)},
  year={2021},
  pages={606-610}
}
  • Published 24 July 2021
The popular federated edge learning (FEEL) framework enables privacy-preserving collaborative model training via frequent exchange of learning updates between edge devices and the server. Due to limited bandwidth, only a subset of devices can upload their updates in each communication round. This has spawned an active line of FEEL research on optimal device scheduling policies that minimize communication time. However, owing to the difficulty of quantifying the exact communication time…
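The bandwidth constraint described above means each round schedules only a few devices, drawn according to some scheduling probability. As a minimal illustration (not the paper's optimized policy; the function name, seed, and uniform probabilities are hypothetical), one round of probabilistic device selection without replacement might look like:

```python
import random

def schedule_devices(num_devices, probs, bandwidth, seed=0):
    """Sample a bandwidth-limited subset of devices without replacement.

    probs[i] is device i's scheduling probability. This is an
    illustrative sketch, not the optimized policy from the paper.
    """
    assert len(probs) == num_devices
    assert abs(sum(probs) - 1.0) < 1e-6, "probabilities must sum to 1"
    rng = random.Random(seed)
    remaining = list(range(num_devices))
    weights = list(probs)
    chosen = []
    # Draw `bandwidth` distinct devices, weighting each draw by probs.
    for _ in range(min(bandwidth, num_devices)):
        idx = rng.choices(range(len(remaining)), weights=weights, k=1)[0]
        chosen.append(remaining.pop(idx))
        weights.pop(idx)
    return chosen
```

The scheduled subset then uploads its local updates for that round; the paper's contribution is choosing the probabilities to minimize overall communication time, which this sketch leaves as an input.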
1 Citation

Unit-Modulus Wireless Federated Learning Via Penalty Alternating Minimization
TLDR
Experimental results in the Car Learning to Act (CARLA) platform show that the proposed UMWFL framework with PAM algorithm achieves smaller training losses and testing errors than those of the benchmark scheme.

References

Showing 1-10 of 16 references
Convergence of Update Aware Device Scheduling for Federated Learning at the Wireless Edge
TLDR
This work designs novel scheduling and resource allocation policies that decide on the subset of the devices to transmit at each round, and how the resources should be allocated among the participating devices, not only based on their channel conditions, but also on the significance of their local model updates.
Adaptive Federated Learning in Resource Constrained Edge Computing Systems
TLDR
This paper analyzes the convergence bound of distributed gradient descent from a theoretical point of view, and proposes a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Broadband Analog Aggregation for Low-Latency Federated Edge Learning
TLDR
This work designs a low-latency multi-access scheme for edge learning based on a popular privacy-preserving framework, federated edge learning (FEEL), and derives two tradeoffs between communication-and-learning metrics, which are useful for network planning and optimization.
Scheduling Policies for Federated Learning in Wireless Networks
TLDR
An analytical model is developed to characterize the performance of federated learning in wireless networks and shows that running FL with PF outperforms RS and RR if the network is operating under a high signal-to-interference-plus-noise ratio (SINR) threshold, while RR is more preferable when the SINR threshold is low.
Communication-Efficient Learning of Deep Networks from Decentralized Data
TLDR
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
Optimal Importance Sampling for Federated Learning
  • Elsa Rizk, Stefan Vlaski, A. Sayed
  • ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2021
TLDR
This work derives optimal importance sampling strategies for both agent and data selection and shows that under convexity and Lipschitz assumptions, non-uniform sampling without replacement improves the performance of the original FedAvg algorithm.
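The non-uniform sampling idea in this reference rests on the standard importance-sampling estimator: updates drawn with probability proportional to their significance are reweighted by the inverse sampling probability so the aggregate stays unbiased. A minimal sketch of that reweighting (scalar updates for simplicity; the function name is hypothetical and this is not the reference's full algorithm):

```python
def importance_sampled_mean(values, probs, sample_ids):
    """Importance-sampled estimate of mean(values).

    sample_ids are assumed drawn with P(i) = probs[i]; each sampled
    value is reweighted by 1 / (n * probs[i]) so the estimator is
    unbiased for the full average. Illustrative sketch only.
    """
    n = len(values)
    m = len(sample_ids)
    return sum(values[i] / (n * probs[i]) for i in sample_ids) / m
```

With uniform probabilities and all clients sampled, the estimate reduces to the plain average, matching full-participation FedAvg; skewing the probabilities toward more significant updates changes the variance, not the expectation.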
Toward an Intelligent Edge: Wireless Communication Meets Machine Learning
TLDR
A new set of design guidelines for wireless communication in edge learning, collectively called learning-driven communication, is advocated; this direction crosses and revolutionizes two disciplines: wireless communication and machine learning.
Distributed Dynamic Map Fusion via Federated Learning for Intelligent Networked Vehicles
TLDR
A federated learning (FL) based dynamic map fusion framework is proposed to achieve high map quality despite unknown numbers of objects in fields of view (FoVs), various sensing and model uncertainties, and missing data labels for online learning.
Clearing the Jungle of Stochastic Optimization
TLDR
This article places a variety of competing strategies into a common framework, which makes it easier to see the close relationship between communities such as stochastic programming, (approximate) dynamic programming, simulation, and stochastic search.
CARLA: An Open Urban Driving Simulator
TLDR
This work introduces CARLA, an open-source simulator for autonomous driving research, and uses it to study the performance of three approaches to autonomous driving: a classic modular pipeline, an end-to-end model trained via imitation learning, and an end-to-end model trained via reinforcement learning.