Multi-Stage Hybrid Federated Learning Over Large-Scale D2D-Enabled Fog Networks
Seyyedali Hosseinalipour, Sheikh Shams Azam, Christopher G. Brinton, Nicolò Michelusi, Vaneet Aggarwal, David James Love, and Huaiyu Dai. IEEE/ACM Transactions on Networking.

Federated learning has generated significant interest, with nearly all works focused on a "star" topology where devices are each connected to a central server. This work migrates away from that architecture and extends it along the network dimension to the case where multiple layers of nodes sit between the end devices and the server. Specifically, it develops multi-stage hybrid federated learning (MH-FL), a hybrid of intra- and inter-layer model…

From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks

Fog learning enhances federated learning along three major dimensions (network, heterogeneity, and proximity), intelligently distributing ML model training across the continuum of nodes from edge devices to cloud servers.

Semi-Decentralized Federated Learning With Cooperative D2D Local Model Aggregations

An adaptive control algorithm is developed that tunes the step size, D2D communication rounds, and global aggregation period of TT-HF over time to target a sublinear convergence rate of O(1/t) while minimizing network resource utilization.

FedFog: Network-Aware Optimization of Federated Learning over Wireless Fog-Cloud Systems

An efficient FL algorithm based on Federated Averaging is proposed that performs local aggregation of gradient parameters at fog servers and the global training update at the cloud; the proposed co-design of FL and communication is shown to be essential for substantially improving resource utilization while achieving comparable accuracy of the learning model.
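The two-level aggregation pattern described above (fog servers average their devices' models, the cloud averages the fog-level models) can be sketched as follows; all names here are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of hierarchical FedAvg-style aggregation:
# fog servers average the models of their attached devices, then the
# cloud averages the fog-level models, weighted by device counts.

def average(models, weights):
    """Weighted element-wise average of parameter vectors."""
    total = sum(weights)
    dim = len(models[0])
    return [sum(w * m[i] for m, w in zip(models, weights)) / total
            for i in range(dim)]

def fog_cloud_round(clusters):
    """clusters: list of fog clusters, each a list of device parameter vectors."""
    # Stage 1: each fog server aggregates its own devices' updates.
    fog_models = [average(devs, [1.0] * len(devs)) for devs in clusters]
    # Stage 2: the cloud aggregates fog models, weighted by cluster size.
    return average(fog_models, [len(devs) for devs in clusters])
```

The cluster-size weighting in stage 2 makes the two-stage average equal to the flat average over all devices, which is the usual correctness requirement for hierarchical aggregation.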

Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation

A sampling methodology based on graph convolutional networks (GCNs) is developed that learns the relationship between network attributes, sampled nodes, and the resulting offloading in order to maximize FedL accuracy.

UAV-assisted Online Machine Learning over Multi-Tiered Networks: A Hierarchical Nested Personalized Federated Learning Approach

This work investigates training machine learning (ML) models across a set of geo-distributed, resource-constrained clusters of devices through unmanned aerial vehicle (UAV) swarms and proposes network-aware HN-PFL, in which UAVs inside swarms are distributed to optimize energy consumption and ML model performance with performance guarantees.

Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity

This work theoretically characterizes the convergence behavior of StoFedDelAv and obtains the optimal combiner weights, which account for the global model delay and the expected local gradient error at each device, and formulates a network-aware optimization problem that tunes the minibatch sizes of the devices to jointly minimize energy consumption and machine learning training loss.

Management of Resource at the Network Edge for Federated Learning

Recent work on resource management at the network edge is surveyed, covering problems such as resource discovery, deployment, load balancing, migration, and energy management.

Opportunistic Federated Learning: An Exploration of Egocentric Collaboration for Pervasive Computing Applications

This paper defines a new approach, opportunistic federated learning, in which individual devices belonging to different users seek to learn robust models that are personalized to their user’s own experiences, and develops a framework that supports encounter-based pairwise collaborative learning.

Distributed Learning in Wireless Networks: Recent Progress and Future Challenges

This paper provides a holistic set of guidelines on how to deploy a broad range of distributed learning frameworks over real-world wireless communication networks, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning.

Autonomy and Intelligence in the Computing Continuum: Challenges, Enablers, and Future Directions for Orchestration

It is claimed that to support the constantly growing requirements of intelligent applications in the device-edge-cloud computing continuum, resource orchestration needs to embrace edge AI and emphasize local autonomy and intelligence.

Client-Edge-Cloud Hierarchical Federated Learning

It is shown that by introducing the intermediate edge servers, the model training time and the energy consumption of the end devices can be simultaneously reduced compared to cloud-based Federated Learning.

Federated Learning With Cooperating Devices: A Consensus Approach for Massive IoT Networks

A fully distributed (or serverless) learning approach is proposed that leverages the cooperation of devices performing data operations inside the network by iterating local computations and mutual interactions via consensus-based methods.
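The serverless consensus idea can be illustrated with a minimal sketch (assumed details, not the paper's code): each device repeatedly mixes its parameter with its neighbors', so all devices drift toward the network-wide average without any central server.

```python
# Illustrative consensus-based averaging over a device graph.
# eps must be small enough relative to node degrees for convergence.

def consensus_step(params, neighbors, eps=0.3):
    """One synchronous consensus iteration over scalar parameters.

    params: list of each device's current parameter value.
    neighbors: adjacency list (neighbors[i] = indices adjacent to device i).
    """
    return [x + eps * sum(params[j] - x for j in neighbors[i])
            for i, x in enumerate(params)]

def run_consensus(params, neighbors, rounds=50):
    """Iterate consensus steps; values converge toward the global mean."""
    for _ in range(rounds):
        params = consensus_step(params, neighbors)
    return params
```

On a connected graph the global average is preserved at every step, which is what lets the iteration replace a central aggregation server.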

Hierarchical Federated Learning Across Heterogeneous Cellular Networks

Small-cell base stations are introduced that orchestrate FEEL among the mobile users (MUs) within their cells and periodically exchange model updates with the macro base station (MBS) for global consensus; this hierarchical federated learning (HFL) scheme is shown to significantly reduce communication latency without sacrificing accuracy.

Federated Learning in Vehicular Edge Computing: A Selective Model Aggregation Approach

A selective model aggregation approach is proposed, in which "fine" local DNN models are selected and sent to the central server based on an evaluation of local image quality and computation capability; the approach is demonstrated to outperform the original federated averaging approach in terms of accuracy and efficiency.

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

This paper analyzes the convergence bound of distributed gradient descent from a theoretical point of view, and proposes a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget.
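The local-update versus global-aggregation tradeoff that this control algorithm navigates can be sketched in a few lines; the gradient functions and step counts below are invented for illustration, not taken from the paper:

```python
# Minimal sketch of the local/global tradeoff: each device takes tau
# local gradient steps, then one global average is formed. Larger tau
# saves aggregation cost but lets local models drift apart.

def local_then_aggregate(params_list, grad_fns, tau, lr=0.1):
    """params_list: initial scalar parameter per device.
    grad_fns: one gradient function per device (heterogeneous data).
    Returns the globally aggregated parameter after tau local steps."""
    updated = []
    for p, g in zip(params_list, grad_fns):
        for _ in range(tau):
            p = p - lr * g(p)  # local SGD step on this device's loss
        updated.append(p)
    return sum(updated) / len(updated)  # global parameter aggregation
```

With quadratic per-device losses, large tau drives each device to its own minimum, so the aggregate lands at the mean of the local minima; the control problem in the paper is choosing tau (and related knobs) to balance this drift against aggregation cost.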

Network-Aware Optimization of Distributed Learning for Fog Computing

This work analytically characterizes the optimal data transfer solution for different fog network topologies, showing, for example, that the value of a device offloading is approximately linear in the range of computing costs in the network.

Decentralized Federated Learning: A Segmented Gossip Approach

A segmented gossip approach is proposed that both fully utilizes node-to-node bandwidth and achieves good training convergence; experimental results show that training time can be substantially reduced compared to centralized federated learning.
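The segmentation idea can be sketched as follows (assumed details, not the paper's implementation): the model is split into segments, and each worker pulls different segments from different random peers, so no single link ever carries the whole model.

```python
# Illustrative segmented-gossip round: each worker rebuilds its model
# segment by segment, averaging its own segment with the matching
# segment fetched from a randomly chosen peer.
import random

def segment_gossip_round(models, num_segments, rng=random.Random(0)):
    """models: dict worker_id -> parameter list (equal lengths,
    divisible by num_segments). Returns the post-round models."""
    ids = list(models)
    seg_len = len(next(iter(models.values()))) // num_segments
    new_models = {}
    for wid in ids:
        rebuilt = []
        for s in range(num_segments):
            peer = rng.choice([i for i in ids if i != wid])  # per-segment peer
            lo, hi = s * seg_len, (s + 1) * seg_len
            own, theirs = models[wid][lo:hi], models[peer][lo:hi]
            rebuilt.extend((a + b) / 2 for a, b in zip(own, theirs))
        new_models[wid] = rebuilt
    return new_models
```

Because each segment can come from a different peer, the per-link traffic is roughly the model size divided by the number of segments, which is the bandwidth-utilization benefit the summary refers to.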

Broadband Analog Aggregation for Low-Latency Federated Edge Learning

This work designs a low-latency multi-access scheme for edge learning based on a popular privacy-preserving framework, federated edge learning (FEEL), and derives two tradeoffs between communication and learning metrics that are useful for network planning and optimization.

Fast-Convergent Federated Learning

A fast-convergent federated learning algorithm is proposed that performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed; experiments show improvements in trained model accuracy, convergence speed, and/or model stability across various machine learning tasks and datasets.