Corpus ID: 239615978

Federated Learning over Wireless IoT Networks with Optimized Communication and Resources

@article{Chen2021FederatedLO,
  title={Federated Learning over Wireless IoT Networks with Optimized Communication and Resources},
  author={Hao Chen and Shaocheng Huang and Deyou Zhang and Ming Xiao and Mikael Skoglund and H. Vincent Poor},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.11775}
}
To leverage massive distributed data and computation resources, machine learning at the network edge is considered a promising technique, especially for large-scale model training. Federated learning (FL), as a paradigm of collaborative learning techniques, has attracted increasing research attention for its communication efficiency and improved data privacy. Due to lossy communication channels and limited communication resources (e.g., bandwidth and power), it is of… 
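The FL training loop the abstract refers to can be illustrated with a minimal federated averaging (FedAvg) sketch; the client data, model, and hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative FedAvg sketch (not the paper's algorithm): each client
# runs local gradient descent on its own data, and the server averages
# the resulting models, weighted by each client's dataset size.

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on squared loss."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: size-weighted average of client models."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = [local_update(w_global, X, y) for X, y in clients]
    weights = sizes / sizes.sum()
    return sum(wk * uk for wk, uk in zip(weights, updates))

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])          # illustrative ground-truth model
clients = []
for _ in range(4):                       # four hypothetical IoT devices
    X = rng.normal(size=(50, 2))
    y = X @ w_true                       # noiseless local data
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):                      # 20 communication rounds
    w = fedavg_round(w, clients)
```

With noiseless data every client's local optimum coincides with `w_true`, so the averaged model converges to it.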

Figures and Tables from this paper

References

Showing 1–10 of 38 references
Efficient Federated Learning Algorithm for Resource Allocation in Wireless IoT Networks
A convergence upper bound is provided characterizing the tradeoff between convergence rate and global rounds, showing that a small number of active UEs per round still guarantees convergence and advocating the proposed FL algorithm for a paradigm shift in bandwidth-constrained learning wireless IoT networks.
Federated Learning over Wireless Networks: Optimization Model Design and Analysis
This work formulates federated learning over a wireless network as an optimization problem, FEDL, that captures both trade-offs, and obtains the globally optimal solution by characterizing closed-form solutions to all sub-problems, which give qualitative insights into problem design via the obtained optimal FEDL learning time, accuracy level, and UE energy cost.
Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT
This work proposes adapting FedAvg to use a distributed form of Adam optimization, greatly reducing the number of rounds to convergence, along with the novel compression techniques, to produce communication-efficient FedAvg (CE-FedAvg), which can converge to a target accuracy and is more robust to aggressive compression.
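CE-FedAvg pairs distributed Adam with update compression; as a generic illustration of one widely used compression primitive (not the paper's exact scheme), top-k sparsification keeps only the k largest-magnitude entries of an update and transmits just their indices and values:

```python
import numpy as np

# Generic top-k sparsification sketch (illustrative, not CE-FedAvg's
# specific quantization/compression scheme): transmit only the k
# largest-magnitude entries of a model update as (indices, values).

def topk_compress(update, k):
    """Return indices and values of the k largest-magnitude entries."""
    idx = np.argpartition(np.abs(update), -k)[-k:]
    return idx, update[idx]

def topk_decompress(idx, vals, dim):
    """Rebuild a dense update with zeros in the dropped positions."""
    out = np.zeros(dim)
    out[idx] = vals
    return out

u = np.array([0.01, -2.0, 0.3, 1.5, -0.02])   # example update vector
idx, vals = topk_compress(u, 2)
restored = topk_decompress(idx, vals, len(u))
# restored keeps only the two largest-magnitude entries (-2.0 and 1.5)
```

Transmitting 2 of 5 entries here cuts uplink payload at the cost of a biased (sparsified) update; in practice the dropped residual is often accumulated locally and sent in later rounds.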
Energy-Efficient Radio Resource Allocation for Federated Edge Learning
To reduce devices' energy consumption, this work proposes energy-efficient strategies for bandwidth allocation and scheduling that adapt to devices' channel states and computation capacities so as to reduce their sum energy consumption while warranting learning performance.
Scheduling Policies for Federated Learning in Wireless Networks
An analytical model is developed to characterize the performance of federated learning in wireless networks and shows that running FL with PF outperforms RS and RR if the network is operating under a high signal-to-interference-plus-noise ratio (SINR) threshold, while RR is more preferable when the SINR threshold is low.
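The random scheduling (RS) and round-robin (RR) policies compared above can be sketched as follows; the device labels and per-round group size are illustrative assumptions.

```python
import random

# Illustrative sketches of two scheduling policies from the comparison:
# random scheduling (RS) samples devices uniformly at random each round,
# while round-robin (RR) cycles deterministically through fixed groups.

def random_schedule(devices, per_round, rng):
    """RS: pick a uniform random subset of devices for this round."""
    return rng.sample(devices, per_round)

def round_robin_schedule(devices, per_round, round_idx):
    """RR: deterministically cycle through the device list."""
    n = len(devices)
    start = (round_idx * per_round) % n
    return [devices[(start + i) % n] for i in range(per_round)]

devices = list(range(6))
rng = random.Random(0)
rs = random_schedule(devices, 2, rng)
rr0 = round_robin_schedule(devices, 2, 0)  # [0, 1]
rr1 = round_robin_schedule(devices, 2, 1)  # [2, 3]
```

Proportional-fair (PF) scheduling, the third policy in the comparison, additionally ranks devices by instantaneous-to-average rate ratios and is omitted here for brevity.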
Communication-efficient federated learning
A communication-efficient FL framework is proposed to jointly improve the FL convergence time and the training loss, and a probabilistic device selection scheme is designed such that the devices that can significantly improve the convergence speed and training loss have higher probabilities of being selected for ML model transmission.
Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge
  • T. Nishio and R. Yonetani, ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019
The proposed FedCS protocol solves a client selection problem with resource constraints, allowing the server to aggregate as many client updates as possible and to accelerate performance improvement of ML models.
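A deadline-aware selection rule in the spirit of FedCS can be sketched as follows; the estimated upload times and the greedy shortest-first rule are illustrative assumptions, not the exact FedCS algorithm.

```python
# Illustrative deadline-aware client selection (in the spirit of FedCS,
# not its exact algorithm): greedily admit clients whose estimated
# upload time still fits within a per-round deadline, shortest first,
# to maximize the number of aggregated updates.

def select_clients(est_times, deadline):
    """Pick as many clients as possible whose cumulative time fits the deadline."""
    chosen, elapsed = [], 0.0
    for cid, t in sorted(est_times.items(), key=lambda kv: kv[1]):
        if elapsed + t <= deadline:
            chosen.append(cid)
            elapsed += t
    return chosen

# Hypothetical per-client upload-time estimates (seconds)
times = {"a": 1.0, "b": 4.0, "c": 2.0, "d": 0.5}
selected = select_clients(times, deadline=4.0)
# greedy shortest-first: d (0.5), a (1.5), c (3.5); b would exceed 4.0
```

Sorting by estimated time before admitting clients maximizes the count of updates per round, which matches FedCS's stated goal of aggregating as many client updates as possible.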
Joint Device Scheduling and Resource Allocation for Latency Constrained Wireless Federated Learning
Experiments show that the proposed joint device scheduling and resource allocation policy to maximize the model accuracy within a given total training time budget for latency constrained wireless FL outperforms state-of-the-art scheduling policies under extensive settings of data distributions and cell radius.
Update Aware Device Scheduling for Federated Learning at the Wireless Edge
Novel scheduling policies are designed, that decide on the subset of devices to transmit at each round not only based on their channel conditions, but also on the significance of their local model updates.
Coded Computing for Low-Latency Federated Learning Over Wireless Edge Networks
This work proposes a novel coded computing framework, CodedFedL, that injects structured coding redundancy into federated learning for mitigating stragglers and speeding up the training procedure.