Helios: Heterogeneity-Aware Federated Learning with Dynamically Balanced Collaboration

  • Zirui Xu, Fuxun Yu, Jinjun Xiong, Xiang Chen
  • Published 5 December 2021
  • 2021 58th ACM/IEEE Design Automation Conference (DAC)
As Federated Learning (FL) has been widely used for collaborative training, a considerable computational straggler issue has emerged: when FL deploys identical neural network models to heterogeneous devices, the ones with weak computational capacities, referred to as stragglers, may significantly delay synchronous parameter aggregation. Although discarding stragglers from the collaboration can relieve this issue to a certain extent, stragglers may keep unique and critical information learned…
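The straggler effect the abstract describes can be pictured with a minimal synchronous round: the server cannot aggregate until the slowest client finishes. This is an illustrative sketch of plain synchronous FedAvg-style aggregation, not Helios's method; the client speeds and the `local_update` behavior are assumed for demonstration.

```python
import random

def local_update(weights, speed):
    """Simulate one client's local training step; slower clients
    (lower speed) take proportionally longer, returned as seconds."""
    new_weights = [w + random.uniform(-0.01, 0.01) for w in weights]
    time_taken = 1.0 / speed
    return new_weights, time_taken

def synchronous_round(global_weights, client_speeds):
    """One synchronous FL round: the server must wait for every
    client before averaging, so round latency is the straggler's."""
    updates, times = [], []
    for speed in client_speeds:
        w, t = local_update(global_weights, speed)
        updates.append(w)
        times.append(t)
    round_time = max(times)  # dominated by the slowest device
    aggregated = [sum(ws) / len(ws) for ws in zip(*updates)]
    return aggregated, round_time

# Four fast clients and one straggler that is 10x slower:
weights, latency = synchronous_round([0.0, 0.0], [10, 10, 10, 10, 1])
```

Even though four of the five clients finish in 0.1 s, the round takes a full second, which is exactly the delay that discarding (or, as Helios argues, rebalancing) stragglers tries to address.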

GitFL: Adaptive Asynchronous Federated Learning using Version Control

This paper proposes GitFL, a novel asynchronous FL framework inspired by the well-known version control system Git. GitFL enables both effective control of model staleness and adaptive load balancing of versioned models among straggling devices, thus avoiding performance deterioration.
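GitFL's Git-inspired bookkeeping can be pictured as a master model that merges pushed "branch" models while discounting stale ones. The following is a simplified sketch under assumed semantics (the merge weight `alpha = 1/(2 + staleness)` is a placeholder, not GitFL's actual rule):

```python
def merge_branch(master, master_version, branch, branch_version):
    """Merge a pushed branch model into the master model, discounting
    staleness: the larger the version gap between master and branch,
    the smaller the branch's merge weight."""
    staleness = max(master_version - branch_version, 0)
    alpha = 1.0 / (2 + staleness)  # fresh branch -> 0.5, stale -> smaller
    merged = [(1 - alpha) * m + alpha * b for m, b in zip(master, branch)]
    return merged, master_version + 1

# A fresh branch (no version gap) pulls the master halfway toward it:
merged, version = merge_branch([1.0, 1.0], 5, [3.0, 3.0], 5)
print(merged, version)  # [2.0, 2.0] 6
```

A branch that is two versions behind would only move the master a quarter of the way, which captures the staleness-control idea in miniature.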

CoCo-FL: Communication- and Computation-Aware Federated Learning via Partial NN Freezing and Quantization

This work presents CoCoFL, a novel FL technique that maintains the full NN structure on all devices while efficiently utilizing each device's available resources. It allows constrained devices to make a significant contribution to the FL system, increasing fairness among participants (accuracy parity) and significantly improving the final accuracy of the model.
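The freezing half of CoCoFL's idea, training only a subset of layers on a constrained device while keeping the whole network resident, can be sketched as a budgeted layer selection. This is illustrative only (the layer costs, the greedy policy, and the omission of quantization are all assumptions, not CoCoFL's actual algorithm):

```python
def choose_trainable_layers(layer_costs, budget):
    """Greedily pick layers to train (the rest stay frozen) so that
    the summed training cost fits the device's compute budget.
    Frozen layers still run forward passes, so the full NN structure
    is preserved on every device."""
    trainable, used = [], 0
    # Prefer cheaper layers first so weak devices still contribute.
    for name, cost in sorted(layer_costs.items(), key=lambda kv: kv[1]):
        if used + cost <= budget:
            trainable.append(name)
            used += cost
    return trainable

layers = {"conv1": 4, "conv2": 8, "conv3": 16, "fc": 2}
# A constrained device with budget 10 trains only the cheap layers:
print(choose_trainable_layers(layers, 10))  # ['fc', 'conv1']
```

In a framework like PyTorch the frozen layers would simply have gradient tracking disabled, while the untouched architecture keeps every device's updates structurally compatible for aggregation.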

Reducing Impacts of System Heterogeneity in Federated Learning using Weight Update Magnitudes

This work aims to mitigate the performance bottleneck of federated learning by dynamically forming sub-models for stragglers based on their performance and accuracy feedback, and introduces Invariant Dropout, a dynamic technique that forms a sub-model based on a neuron-update threshold.
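The neuron-update-threshold idea might look like this in outline: keep a neuron in the straggler's sub-model only if its weights are still changing noticeably, and drop the near-invariant ones. This is a hedged sketch with scalar per-neuron weights and an assumed threshold, not the paper's per-layer procedure:

```python
def invariant_dropout_mask(prev_weights, curr_weights, threshold):
    """Keep a neuron in the straggler's sub-model only if its
    weight-update magnitude exceeds the threshold; neurons whose
    weights barely changed ('invariant' neurons) are dropped."""
    mask = []
    for prev, curr in zip(prev_weights, curr_weights):
        update_magnitude = abs(curr - prev)
        mask.append(update_magnitude > threshold)
    return mask

prev = [0.50, 0.10, -0.30, 0.80]
curr = [0.51, 0.45, -0.31, 0.20]
# Only neurons 1 and 3 moved by more than 0.05, so only they are kept:
print(invariant_dropout_mask(prev, curr, 0.05))  # [False, True, False, True]
```

The intuition is that dropping neurons whose updates are negligible shrinks the straggler's compute without discarding the parts of the model that are still actively learning.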

HetVis: A Visual Analysis Approach for Identifying Data Heterogeneity in Horizontal Federated Learning

A visual analytics tool, HetVis, is developed for participating clients to explore data heterogeneity by comparing the prediction behaviors of the global federated model and a stand-alone model trained with local data.

FedAdapt: Adaptive Offloading for IoT Devices in Federated Learning

FedAdapt accelerates local training on computationally constrained devices by offloading layers of deep neural networks (DNNs) to servers. It adopts reinforcement-learning-based optimization and clustering to adaptively identify which layers should be offloaded from each individual device to a server, tackling the challenges of computational heterogeneity and changing network bandwidth.
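The trade-off underlying layer offloading can be sketched without the RL machinery: pick the split point that minimizes the device's compute time plus the cost of uploading the activations at the split. All numbers and the cost model below are illustrative assumptions, and FedAdapt's actual policy is learned rather than enumerated like this:

```python
def pick_split_point(layer_flops, act_sizes, device_speed, bandwidth):
    """Choose where to split a DNN between device and server: the
    device runs layers [0, split) and uploads the activations at the
    split point. act_sizes[i] is the activation size entering layer i
    (act_sizes[0] is the raw input; the last entry is the final
    output). Picks the split minimizing estimated device-side time,
    i.e. local compute plus activation upload, assuming server-side
    compute is comparatively negligible."""
    best = (0, float("inf"))
    for split in range(len(layer_flops) + 1):
        compute = sum(layer_flops[:split]) / device_speed
        upload = act_sizes[split] / bandwidth
        if compute + upload < best[1]:
            best = (split, compute + upload)
    return best

# Three layers; activations shrink deeper into the network:
print(pick_split_point([10, 20, 40], [100, 80, 20, 1], 10, 10))  # (2, 5.0)
```

With a fast network the best split sits early (cheap to upload large activations); when bandwidth drops, the same function pushes the split deeper so the device uploads the small late-layer activations instead, which is the adaptivity FedAdapt automates per device.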

FedLess: Secure and Scalable Federated Learning Using Serverless Computing

FedLess is the first system to enable FL across a large fabric of heterogeneous FaaS providers while providing important features such as security and differential privacy. The practical viability of the methodology is demonstrated by comparison against a traditional FL system, showing that FedLess can be cheaper and more resource-efficient.

FLRA: A Reference Architecture for Federated Learning Systems

FLRA, a reference architecture for federated learning systems, is proposed. It provides a template design for federated-learning-based solutions and can serve as a design guideline, assisting architects and developers with practical solutions to their problems that can be further customised.

Networking Systems of AI: On the Convergence of Computing and Communications

This article aims to provide a comprehensive survey of the system architecture, key technologies, application scenarios, challenges, and opportunities of Networking Systems of AI (NSAI), which can shed light on the future development of both telecommunications and AI computing.