# Multi-Stage Hybrid Federated Learning Over Large-Scale D2D-Enabled Fog Networks

```bibtex
@article{Hosseinalipour2020MultiStageHF,
  title={Multi-Stage Hybrid Federated Learning Over Large-Scale D2D-Enabled Fog Networks},
  author={Seyyedali Hosseinalipour and Sheikh Shams Azam and Christopher G. Brinton and Nicol{\`o} Michelusi and Vaneet Aggarwal and David James Love and Huaiyu Dai},
  journal={IEEE/ACM Transactions on Networking},
  year={2020},
  volume={30},
  pages={1569--1584}
}
```
• Published 18 July 2020
• Computer Science
• IEEE/ACM Transactions on Networking
Federated learning has generated significant interest, with nearly all works focused on a “star” topology where nodes/devices are each connected to a central server. We migrate away from this architecture and extend it through the *network* dimension to the case where there are multiple layers of nodes between the end devices and the server. Specifically, we develop multi-stage hybrid federated learning (`MH-FL`), a hybrid of intra- and inter-layer model…
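The abstract describes replacing the single server-to-device star with multiple layers of aggregating nodes. As a minimal sketch of the inter-layer part of that idea (not the paper's actual algorithm), the snippet below performs data-weighted model averaging up a tree of clusters; the function names, the tree encoding, and the use of scalar "models" are all illustrative assumptions.

```python
import numpy as np

def weighted_average(models, weights):
    # Data-weighted average of child models (FedAvg-style aggregation).
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(models, weights))

def hierarchical_aggregate(tree, models, counts):
    """Aggregate leaf models up a layered tree of clusters.

    tree: list of layers, bottom to top; each layer is a list of clusters,
          and each cluster is a list of child indices into the layer below
          (the first layer indexes directly into `models` / `counts`).
    Returns the single model held at the root after all layers aggregate.
    """
    layer_models, layer_counts = list(models), list(counts)
    for layer in tree:
        next_models, next_counts = [], []
        for cluster in layer:
            ms = [layer_models[i] for i in cluster]
            ws = [layer_counts[i] for i in cluster]
            next_models.append(weighted_average(ms, ws))
            next_counts.append(sum(ws))
        layer_models, layer_counts = next_models, next_counts
    return layer_models[0]
```

For example, four devices grouped into two edge clusters under one root would use `tree = [[[0, 1], [2, 3]], [[0, 1]]]`; real deployments would average parameter vectors rather than scalars, and MH-FL additionally interleaves intra-layer (D2D) aggregation, which this sketch omits.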
29 Citations
• Computer Science
IEEE Communications Magazine
• 2020
Fog learning enhances federated learning along three major dimensions: network, heterogeneity, and proximity, which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
• Computer Science
IEEE Journal on Selected Areas in Communications
• 2021
An adaptive control algorithm is developed that tunes the step size, D2D communication rounds, and global aggregation period of TT-HF over time to target a sublinear convergence rate of $\mathcal{O}(1/t)$ while minimizing network resource utilization.
• Computer Science
IEEE Transactions on Wireless Communications
• 2022
An efficient FL algorithm based on Federated Averaging is proposed that performs local aggregation of gradient parameters at fog servers and the global training update at the cloud; the proposed co-design of FL and communication is shown to be essential for substantially improving resource utilization while achieving comparable accuracy of the learning model.
• Computer Science
IEEE INFOCOM 2021 - IEEE Conference on Computer Communications
• 2021
A sampling methodology based on graph convolutional networks (GCNs) is developed which learns the relationship between network attributes, sampled nodes, and the resulting offloading so as to maximize FedL accuracy.
• Computer Science
IEEE Transactions on Network and Service Management
• 2022
This work investigates training machine learning (ML) models across a set of geo-distributed, resource-constrained clusters of devices through unmanned aerial vehicle (UAV) swarms and proposes network-aware HN-PFL, in which UAVs inside swarms are distributed to optimize energy consumption and ML model performance with performance guarantees.
• Computer Science
2022 IEEE International Conference on Communications Workshops (ICC Workshops)
• 2022
This work theoretically characterizes the convergence behavior of StoFedDelAv and obtains the optimal combiner weights, which account for the global model delay and the expected local gradient error at each device, and formulates a network-aware optimization problem that tunes the minibatch sizes of the devices to jointly minimize energy consumption and machine learning training loss.
• Computer Science
ArXiv
• 2021
Recent work on resource management at the edge is surveyed, covering problems such as resource discovery, deployment, load balancing, migration, and energy management.
• Computer Science
2021 IEEE International Conference on Pervasive Computing and Communications (PerCom)
• 2021
This paper defines a new approach, opportunistic federated learning, in which individual devices belonging to different users seek to learn robust models that are personalized to their user’s own experiences, and develops a framework that supports encounter-based pairwise collaborative learning.
• Computer Science
IEEE Journal on Selected Areas in Communications
• 2021
This paper provides a holistic set of guidelines on how to deploy a broad range of distributed learning frameworks over real-world wireless communication networks, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning.
• Computer Science
ArXiv
• 2022
It is claimed that to support the constantly growing requirements of intelligent applications in the device-edge-cloud computing continuum, resource orchestration needs to embrace edge AI and emphasize local autonomy and intelligence.
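Several of the citing works above (e.g., the TT-HF and serverless consensus papers) rely on D2D consensus rounds to average models within a cluster without a server. A minimal sketch of such a consensus step is below; the graph, the step size bound, and the function names are illustrative assumptions, not any particular paper's implementation.

```python
import numpy as np

def consensus_round(x, neighbors, eps):
    # One synchronous consensus step: each node moves toward its D2D
    # neighbors' values. x: (n,) array; neighbors: adjacency lists.
    x_new = x.copy()
    for i, nbrs in enumerate(neighbors):
        x_new[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs)
    return x_new

def d2d_average(x, neighbors, rounds, eps=0.3):
    # Repeated consensus steps drive all nodes toward the cluster mean on a
    # connected undirected graph, provided eps < 1 / max_degree.
    for _ in range(rounds):
        x = consensus_round(x, neighbors, eps)
    return x
```

Because the graph is undirected, each step preserves the sum of the node values, so the common limit is exactly the cluster average; the number of rounds needed to reach a given accuracy is precisely the quantity the adaptive-control work above tunes against network resource usage.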

## References

Showing 1–10 of 64 references

• Computer Science
IEEE Communications Magazine
• 2020
Fog learning enhances federated learning along three major dimensions: network, heterogeneity, and proximity, which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
• Lumin Liu, Jun Zhang
• Computer Science
ICC 2020 - 2020 IEEE International Conference on Communications (ICC)
• 2020
It is shown that by introducing the intermediate edge servers, the model training time and the energy consumption of the end devices can be simultaneously reduced compared to cloud-based Federated Learning.
• Computer Science
IEEE Internet of Things Journal
• 2020
A fully distributed (or serverless) learning approach is proposed that leverages the cooperation of devices performing data operations inside the network by iterating local computations and mutual interactions via consensus-based methods.
• Computer Science
ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
• 2020
Small-cell base stations are introduced to orchestrate FEEL among mobile users (MUs) within their cells, periodically exchanging model updates with the macro base station (MBS) for global consensus; this hierarchical federated learning (HFL) scheme is shown to significantly reduce communication latency without sacrificing accuracy.
• Computer Science
IEEE Access
• 2020
A selective model aggregation approach is proposed, where “fine” local DNN models are selected and sent to the central server by evaluating the local image quality and computation capability, and demonstrated to outperform the original federated averaging approach in terms of accuracy and efficiency.
• Computer Science
IEEE Journal on Selected Areas in Communications
• 2019
This paper analyzes the convergence bound of distributed gradient descent from a theoretical point of view, and proposes a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
• Computer Science
IEEE INFOCOM 2020 - IEEE Conference on Computer Communications
• 2020
This work analytically characterizes the optimal data transfer solution for different fog network topologies, showing, for example, that the value of a device offloading is approximately linear in the range of computing costs in the network.
• Computer Science
ArXiv
• 2019
A segmented gossip approach is proposed which not only makes full use of node-to-node bandwidth but also achieves good training convergence; experimental results show that training time can be greatly reduced compared to centralized federated learning.
• Computer Science
IEEE Transactions on Wireless Communications
• 2020
This work designs a low-latency multi-access scheme for edge learning based on a popular privacy-preserving framework, federated edge learning (FEEL), and derives two tradeoffs between communication and learning metrics that are useful for network planning and optimization.
• Computer Science
IEEE Journal on Selected Areas in Communications
• 2021
A fast-convergent federated learning algorithm is proposed that performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed; experiments show improvements in trained model accuracy, convergence speed, and/or model stability across various machine learning tasks and datasets.
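One recurring theme in the references above is the tradeoff between local update steps and global aggregation frequency under a resource budget. The toy sketch below makes that schedule concrete on a quadratic local loss per device; the quadratic objective, learning rate, and function names are illustrative assumptions rather than any referenced paper's setup.

```python
import numpy as np

def local_sgd(w, data, tau, lr):
    # tau local gradient steps on a toy quadratic loss 0.5*(w - mean(data))^2,
    # standing in for each device's local objective.
    for _ in range(tau):
        w = w - lr * (w - np.mean(data))
    return w

def federated_round(w_global, datasets, tau, lr):
    # One global round: every device runs tau local steps from the shared
    # model, then the server averages the results (equal device weights).
    locals_ = [local_sgd(w_global, d, tau, lr) for d in datasets]
    return float(np.mean(locals_))

def train(datasets, rounds, tau, lr=0.1, w0=0.0):
    w = w0
    for _ in range(rounds):
        w = federated_round(w, datasets, tau, lr)
    return w
```

Larger `tau` means fewer communication rounds for the same number of gradient steps but lets local models drift toward their own optima between aggregations; choosing this period under a resource budget is exactly the control problem studied in the 2019 JSAC reference above.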