Multi-Stage Hybrid Federated Learning Over Large-Scale D2D-Enabled Fog Networks
@article{Hosseinalipour2020MultiStageHF,
  title   = {Multi-Stage Hybrid Federated Learning Over Large-Scale D2D-Enabled Fog Networks},
  author  = {Seyyedali Hosseinalipour and Sheikh Shams Azam and Christopher G. Brinton and Nicol{\`o} Michelusi and Vaneet Aggarwal and David James Love and Huaiyu Dai},
  journal = {IEEE/ACM Transactions on Networking},
  year    = {2020},
  volume  = {30},
  pages   = {1569-1584}
}
Federated learning has generated significant interest, with nearly all works focused on a “star” topology where nodes/devices are each connected to a central server. We migrate away from this architecture and extend it through the network dimension to the case where there are multiple layers of nodes between the end devices and the server. Specifically, we develop multi-stage hybrid federated learning (MH-FL), a hybrid of intra- and inter-layer model…
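To make the intra-/inter-layer hybrid concrete, below is a minimal sketch of multi-stage aggregation over a two-layer fog hierarchy. It illustrates the general idea only, not the paper's MH-FL algorithm: the cluster structure, consensus step size, and all names (`consensus_round`, `aggregate_cluster`) are assumptions.

```python
# Minimal sketch of multi-stage hybrid aggregation (not the paper's code).
# Each layer holds clusters of nodes; "D2D" clusters run a few rounds of
# distributed consensus averaging, while other clusters are averaged
# directly by their parent node. Names and structure are illustrative.
import numpy as np

def consensus_round(models, adjacency, step=0.3):
    """One linear consensus iteration over a cluster's D2D graph."""
    new = []
    for i, w in enumerate(models):
        neighbors = [models[j] for j in adjacency[i]]
        new.append(w + step * sum(n - w for n in neighbors))
    return new

def aggregate_cluster(models, adjacency=None, d2d_rounds=0):
    """Return the cluster's representative model: one node's model after
    D2D consensus, or a plain average when the parent aggregates directly."""
    if adjacency is not None:
        for _ in range(d2d_rounds):
            models = consensus_round(models, adjacency)
        return models[0]            # any node's model approximates the mean
    return sum(models) / len(models)

# Two bottom-layer clusters feed one mid-layer node, which feeds the server.
rng = np.random.default_rng(0)
cluster_a = [rng.normal(size=4) for _ in range(3)]
cluster_b = [rng.normal(size=4) for _ in range(3)]
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}     # fully connected D2D graph
mid = [aggregate_cluster(cluster_a, triangle, d2d_rounds=10),
       aggregate_cluster(cluster_b)]              # hybrid: D2D vs. direct
global_model = aggregate_cluster(mid)
print(global_model)
```

Here one cluster averages its models locally over D2D links before reporting upward, while the other is averaged directly by its parent, mirroring the hybrid intra-/inter-layer split the abstract describes.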
29 Citations
From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks
- Computer Science, IEEE Communications Magazine
- 2020
Fog learning enhances federated learning along three major dimensions: network, heterogeneity, and proximity, intelligently distributing ML model training across the continuum of nodes from edge devices to cloud servers.
Semi-Decentralized Federated Learning With Cooperative D2D Local Model Aggregations
- Computer Science, IEEE Journal on Selected Areas in Communications
- 2021
An adaptive control algorithm is developed that tunes the step size, D2D communication rounds, and global aggregation period of TT-HF over time to target a sublinear convergence rate of $\mathcal{O}(1/t)$ while minimizing network resource utilization.
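For reference, the sublinear $\mathcal{O}(1/t)$ rate quoted above is the standard guarantee for stochastic-gradient-type methods with a decaying step size; a generic form of such a bound (not TT-HF's exact statement, whose constants and conditions are in the cited paper) is:

```latex
% Generic sublinear O(1/t) guarantee (illustrative; not TT-HF's exact bound).
% For a mu-strongly convex objective F with minimizer w^*, SGD-type updates
% with decaying step size eta_t = beta/(t + lambda) satisfy, for suitable
% beta and lambda and some constant nu:
\[
  \mathbb{E}\!\left[F(w_t)\right] - F(w^\ast) \;\le\; \frac{\nu}{t+\lambda}
  \;=\; \mathcal{O}\!\left(\tfrac{1}{t}\right).
\]
```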
FedFog: Network-Aware Optimization of Federated Learning over Wireless Fog-Cloud Systems
- Computer Science, IEEE Transactions on Wireless Communications
- 2022
An efficient FL algorithm based on Federated Averaging is proposed to perform local aggregation of gradient parameters at fog servers and the global training update at the cloud; the proposed co-design of FL and communication is shown to be essential for substantially improving resource utilization while achieving comparable learning accuracy.
Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation
- Computer Science, IEEE INFOCOM 2021 - IEEE Conference on Computer Communications
- 2021
A sampling methodology based on graph convolutional networks (GCNs) is developed, which learns the relationship between network attributes, sampled nodes, and the resulting data offloading that maximizes FedL accuracy.
UAV-assisted Online Machine Learning over Multi-Tiered Networks: A Hierarchical Nested Personalized Federated Learning Approach
- Computer Science, IEEE Transactions on Network and Service Management
- 2022
This work investigates training machine learning (ML) models across geo-distributed, resource-constrained clusters of devices through unmanned aerial vehicle (UAV) swarms, and proposes network-aware HN-PFL, in which UAVs inside swarms are distributed to optimize energy consumption and ML model performance with performance guarantees.
Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity
- Computer Science, 2022 IEEE International Conference on Communications Workshops (ICC Workshops)
- 2022
This work theoretically characterizes the convergence behavior of StoFedDelAv, obtains the optimal combiner weights, which account for the global model delay and the expected local gradient error at each device, and formulates a network-aware optimization problem that tunes the devices' minibatch sizes to jointly minimize energy consumption and machine learning training loss.
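A minimal sketch of the delayed-global/local combining idea described above; the exponential staleness discount and all names are illustrative assumptions, not StoFedDelAv's actual combiner.

```python
# Illustrative combiner step (assumed form, not StoFedDelAv's actual update):
# each device mixes the stale global model it last received with its own
# locally updated model, using a convex weight that discounts staleness.
import numpy as np

def combine(w_local, w_global_delayed, delay, base=0.5, decay=0.1):
    """Weight the delayed global model less as its staleness grows."""
    lam = base * np.exp(-decay * delay)        # assumed staleness discount
    return lam * w_global_delayed + (1.0 - lam) * w_local

w_local = np.array([1.0, 2.0])
w_stale = np.array([0.5, 1.5])
print(combine(w_local, w_stale, delay=3))
```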
Management of Resource at the Network Edge for Federated Learning
- Computer Science, arXiv
- 2021
Recent work on resource management at the edge is described, and problems such as resource discovery, deployment, load balancing, migration, and energy management are discussed.
Opportunistic Federated Learning: An Exploration of Egocentric Collaboration for Pervasive Computing Applications
- Computer Science, 2021 IEEE International Conference on Pervasive Computing and Communications (PerCom)
- 2021
This paper defines a new approach, opportunistic federated learning, in which individual devices belonging to different users seek to learn robust models that are personalized to their user’s own experiences, and develops a framework that supports encounter-based pairwise collaborative learning.
Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
- Computer Science, IEEE Journal on Selected Areas in Communications
- 2021
This paper provides a holistic set of guidelines on how to deploy a broad range of distributed learning frameworks over real-world wireless communication networks, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning.
Autonomy and Intelligence in the Computing Continuum: Challenges, Enablers, and Future Directions for Orchestration
- Computer Science, arXiv
- 2022
It is claimed that to support the constantly growing requirements of intelligent applications in the device-edge-cloud computing continuum, resource orchestration needs to embrace edge AI and emphasize local autonomy and intelligence.
References
Showing 1-10 of 64 references
From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks
- Computer Science, IEEE Communications Magazine
- 2020
Fog learning enhances federated learning along three major dimensions: network, heterogeneity, and proximity, intelligently distributing ML model training across the continuum of nodes from edge devices to cloud servers.
Client-Edge-Cloud Hierarchical Federated Learning
- Computer Science, ICC 2020 - 2020 IEEE International Conference on Communications (ICC)
- 2020
It is shown that by introducing the intermediate edge servers, the model training time and the energy consumption of the end devices can be simultaneously reduced compared to cloud-based Federated Learning.
Federated Learning With Cooperating Devices: A Consensus Approach for Massive IoT Networks
- Computer Science, IEEE Internet of Things Journal
- 2020
A fully distributed (or serverless) learning approach is proposed that leverages the cooperation of devices performing data operations inside the network, iterating local computations and mutual interactions via consensus-based methods.
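A minimal sketch of the serverless, consensus-based pattern this summary describes, in the style of decentralized gradient descent; the toy quadratic losses, line topology, and step sizes are assumptions for illustration.

```python
# Sketch of serverless learning by consensus (illustrative, not the paper's
# algorithm): each device alternates a local gradient step with averaging
# over its neighbors, so models agree without any central server.
import numpy as np

targets = [1.0, 3.0, 5.0]                      # toy per-device data
neighbors = {0: [1], 1: [0, 2], 2: [1]}        # line topology
w = [0.0, 0.0, 0.0]

for _ in range(200):
    grads = [2 * (w[i] - targets[i]) for i in range(3)]   # d/dw (w - t)^2
    w = [np.mean([w[i]] + [w[j] for j in neighbors[i]]) - 0.05 * grads[i]
         for i in range(3)]

print(w)   # all three devices end up near the global optimum, 3.0
```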
Hierarchical Federated Learning Across Heterogeneous Cellular Networks
- Computer Science, ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2020
Small-cell base stations are introduced that orchestrate FEEL among mobile users (MUs) within their cells and periodically exchange model updates with the macro base station (MBS) for global consensus; this hierarchical federated learning (HFL) scheme is shown to significantly reduce communication latency without sacrificing accuracy.
Federated Learning in Vehicular Edge Computing: A Selective Model Aggregation Approach
- Computer Science, IEEE Access
- 2020
A selective model aggregation approach is proposed in which “fine” local DNN models are selected and sent to the central server based on evaluations of local image quality and computation capability; this approach is demonstrated to outperform the original federated averaging approach in terms of accuracy and efficiency.
Adaptive Federated Learning in Resource Constrained Edge Computing Systems
- Computer Science, IEEE Journal on Selected Areas in Communications
- 2019
This paper analyzes the convergence bound of distributed gradient descent from a theoretical point of view, and proposes a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
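The tradeoff this summary refers to can be illustrated with a toy experiment: under a fixed resource budget, more local steps per round (tau) buys fewer global aggregations, and vice versa. The sketch below searches over tau by brute force; it is not the paper's control algorithm, and all costs and data are made up.

```python
# Toy search over the local-update / global-aggregation tradeoff
# (illustrative only, not the paper's control algorithm).
import numpy as np

def fedavg_loss(tau, budget, step=0.1, local_cost=1.0, agg_cost=5.0):
    """Run toy two-device FedAvg with tau local steps per round until the
    budget is spent; return the final global loss."""
    targets = np.array([1.0, 5.0])              # heterogeneous device data
    w, spent = 0.0, 0.0
    while spent + tau * local_cost + agg_cost <= budget:
        local_models = []
        for t in targets:                       # tau local gradient steps
            wi = w
            for _ in range(tau):
                wi -= step * 2 * (wi - t)
            local_models.append(wi)
        w = np.mean(local_models)               # global aggregation
        spent += tau * local_cost + agg_cost
    return np.mean((w - targets) ** 2)

best = min(range(1, 21), key=lambda tau: fedavg_loss(tau, budget=200.0))
print("best local steps per round:", best)
```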
Network-Aware Optimization of Distributed Learning for Fog Computing
- Computer Science, IEEE INFOCOM 2020 - IEEE Conference on Computer Communications
- 2020
This work analytically characterizes the optimal data transfer solution for different fog network topologies, showing for example that the value of a device offloading its data is approximately linear in the range of computing costs in the network.
Decentralized Federated Learning: A Segmented Gossip Approach
- Computer Science, arXiv
- 2019
A segmented gossip approach is proposed that not only makes full use of node-to-node bandwidth but also achieves good training convergence; experimental results show that training time can be greatly reduced compared to centralized federated learning.
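A minimal sketch of the segmented gossip idea: each worker splits its model into segments and averages each segment with the matching segment from a different random peer, spreading traffic across many node-to-node links. Segment counts, topology, and names are illustrative, not the paper's implementation.

```python
# Sketch of segmented gossip aggregation (illustrative, not the paper's
# implementation): each worker mixes every model segment with the
# corresponding segment from a randomly chosen peer.
import numpy as np

rng = np.random.default_rng(1)
n_workers, n_segments = 4, 3
models = [rng.normal(size=6) for _ in range(n_workers)]

def segmented_gossip(models, i):
    """Build worker i's new model, one random peer per segment."""
    segments = np.array_split(models[i], n_segments)
    out = []
    for s, seg in enumerate(segments):
        peer = rng.choice([j for j in range(n_workers) if j != i])
        peer_seg = np.array_split(models[peer], n_segments)[s]
        out.append((seg + peer_seg) / 2)        # per-segment averaging
    return np.concatenate(out)

models = [segmented_gossip(models, i) for i in range(n_workers)]
```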
Broadband Analog Aggregation for Low-Latency Federated Edge Learning
- Computer Science, IEEE Transactions on Wireless Communications
- 2020
This work designs a low-latency multi-access scheme for edge learning based on a popular privacy-preserving framework, federated edge learning (FEEL), and derives two tradeoffs between communication and learning metrics that are useful for network planning and optimization.
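The key property analog aggregation exploits is that a multiple-access channel naturally sums simultaneously transmitted signals, so the server receives the aggregate in one shot. A toy sketch of that effect follows (illustrative only; it ignores fading, power control, and everything else a real FEEL scheme must handle).

```python
# Sketch of over-the-air (analog) model aggregation: when devices transmit
# their updates simultaneously over the same band, the channel outputs the
# superposition, so the server obtains the sum without per-device decoding.
import numpy as np

rng = np.random.default_rng(3)
updates = [rng.normal(size=4) for _ in range(10)]    # per-device updates

received = sum(updates) + 0.01 * rng.normal(size=4)  # channel sums + noise
average = received / len(updates)                    # server rescales

print(np.allclose(average, np.mean(updates, axis=0), atol=0.05))
```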
Fast-Convergent Federated Learning
- Computer Science, IEEE Journal on Selected Areas in Communications
- 2021
A fast-convergent federated learning algorithm is proposed that performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed; experiments show improvements in trained model accuracy, convergence speed, and/or model stability across various machine learning tasks and datasets.
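One common instantiation of such intelligent sampling is to bias device selection toward larger expected update contributions, e.g. gradient norms; the sketch below is an illustrative stand-in, not the cited paper's exact criterion.

```python
# Illustrative device sampling by expected contribution (not the cited
# paper's exact criterion): devices with larger local gradient norms are
# sampled more often, on the intuition that their updates move the global
# model the most.
import numpy as np

rng = np.random.default_rng(2)
grad_norms = np.array([0.1, 0.5, 2.0, 0.4, 1.0])   # per-device gradient norms
probs = grad_norms / grad_norms.sum()

sampled = rng.choice(len(grad_norms), size=2, replace=False, p=probs)
# Approximate importance weights: re-weight each sampled update by roughly
# 1 / (its sampling probability) so the aggregate stays close to unbiased.
weights = 1.0 / (probs[sampled] * len(sampled))
print(sampled, weights)
```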