Corpus ID: 236428271

Accelerating Federated Edge Learning via Optimized Probabilistic Device Scheduling

@article{Zhang2021AcceleratingFE,
  title={Accelerating Federated Edge Learning via Optimized Probabilistic Device Scheduling},
  author={Maojun Zhang and Guangxu Zhu and Shuai Wang and Jiamo Jiang and Caijun Zhong and Shuguang Cui},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.11588}
}
The popular federated edge learning (FEEL) framework allows privacy-preserving collaborative model training via frequent exchange of learning updates between edge devices and the server. Due to constrained bandwidth, only a subset of devices can upload their updates in each communication round. This has led to an active research area in FEEL that studies the optimal device scheduling policy for minimizing communication time. However, owing to the difficulty in quantifying the exact communication time…
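The scheduling idea sketched in the abstract can be illustrated with a minimal example. The snippet below is an assumption-laden sketch, not the paper's optimized policy: it samples each device independently with a given probability and reweights the aggregated update by the inverse selection probability so that the aggregate remains an unbiased estimate of the full-participation average (a standard construction in importance-sampled federated learning). The function names and the independent-sampling model are illustrative choices.

```python
import random


def probabilistic_schedule(probs, rng=None):
    """Independently select each device i with probability probs[i].

    Returns the indices of the devices scheduled to upload this round.
    """
    rng = rng or random.Random(0)
    return [i for i, p in enumerate(probs) if rng.random() < p]


def aggregate(updates, probs, selected):
    """Inverse-probability-weighted average over the selected devices.

    With weights 1/(n * probs[i]), the expectation over the random
    selection equals the full average (1/n) * sum_i updates[i],
    despite only a subset of devices participating.
    """
    n = len(updates)
    return sum(updates[i] / (n * probs[i]) for i in selected)
```

With all selection probabilities equal to 1 this degenerates to ordinary full-participation averaging, which gives a quick sanity check; the interesting regime is when `probs` is optimized to trade off each device's update significance against its uplink communication time.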
1 Citation

Unit-Modulus Wireless Federated Learning Via Penalty Alternating Minimization
Experimental results in the Car Learning to Act (CARLA) platform show that the proposed UMWFL framework with the PAM algorithm achieves smaller training losses and testing errors than those of the benchmark scheme.

References

Showing 1–10 of 16 references
Convergence of Update Aware Device Scheduling for Federated Learning at the Wireless Edge
This work designs novel scheduling and resource allocation policies that decide on the subset of devices to transmit at each round, and how resources should be allocated among the participating devices, based not only on their channel conditions but also on the significance of their local model updates.
Adaptive Federated Learning in Resource Constrained Edge Computing Systems
This paper analyzes the convergence bound of distributed gradient descent from a theoretical point of view, and proposes a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
Broadband Analog Aggregation for Low-Latency Federated Edge Learning
This work designs a low-latency multi-access scheme for edge learning based on a popular privacy-preserving framework, federated edge learning (FEEL), and derives two tradeoffs between communication and learning metrics, which are useful for network planning and optimization.
Scheduling Policies for Federated Learning in Wireless Networks
An analytical model is developed to characterize the performance of federated learning in wireless networks, showing that running FL with proportional fair (PF) scheduling outperforms random scheduling (RS) and round robin (RR) if the network is operating under a high signal-to-interference-plus-noise ratio (SINR) threshold, while RR is more preferable when the SINR threshold is low.
Optimal Client Sampling for Federated Learning
A novel client subsampling scheme is proposed in which the number of clients allowed to communicate their updates back to the master node is restricted, together with a simple algorithm that approximates the optimal formula for client participation; the scheme requires only secure aggregation and thus does not compromise client privacy.
Communication-Efficient Learning of Deep Networks from Decentralized Data
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
Optimal Importance Sampling for Federated Learning
  • Elsa Rizk, Stefan Vlaski, A. Sayed
  • ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021
This work derives optimal importance sampling strategies for both agent and data selection and shows that under convexity and Lipschitz assumptions, non-uniform sampling without replacement improves the performance of the original FedAvg algorithm.
Toward an Intelligent Edge: Wireless Communication Meets Machine Learning
A new set of design guidelines for wireless communication in edge learning, collectively called learning-driven communication, is advocated, which crosses and revolutionizes two disciplines: wireless communication and machine learning.
Distributed Dynamic Map Fusion via Federated Learning for Intelligent Networked Vehicles
A federated learning (FL) based dynamic map fusion framework is proposed to achieve high map quality despite unknown numbers of objects in fields of view (FoVs), various sensing and model uncertainties, and missing data labels for online learning.
Clearing the Jungle of Stochastic Optimization
This article places a variety of competing strategies into a common framework, which makes it easier to see the close relationship between communities such as stochastic programming, (approximate) dynamic programming, simulation, and stochastic search.