Asynchronous Federated Learning for Sensor Data with Concept Drift

@article{Chen2021AsynchronousFL,
  title={Asynchronous Federated Learning for Sensor Data with Concept Drift},
  author={Yujing Chen and Zheng Chai and Yue Cheng and Huzefa Rangwala},
  journal={2021 IEEE International Conference on Big Data (Big Data)},
  year={2021},
  pages={4822-4831}
}
Federated learning (FL) involves multiple distributed devices jointly training a shared model without any of the participants having to reveal their local data to a centralized server. Most previous FL approaches assume that the data on devices are fixed and stationary during the training process. However, this assumption is unrealistic because these devices usually have varying sampling rates and different system configurations. In addition, the underlying distribution of the device data can…
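
The pattern the abstract describes can be made concrete with a minimal sketch: each device trains on whatever data it currently holds and pushes its parameters whenever it finishes, and the server folds each update into the global model immediately instead of waiting at a synchronization barrier. The names and the staleness-decayed mixing weight below are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def merge_async_update(global_w, client_w, client_round, server_round,
                           base_mix=0.5):
        # Asynchronous server step: blend an incoming client model into the
        # global model as soon as it arrives. Updates computed against an
        # older global model (larger staleness) get a smaller weight; the
        # 1/(1 + staleness) decay is an illustrative choice.
        staleness = max(server_round - client_round, 0)
        alpha = base_mix / (1.0 + staleness)
        return (1.0 - alpha) * global_w + alpha * client_w

    # Example: an update that is 3 rounds stale moves the global model less.
    w_global = np.zeros(4)
    w_client = np.ones(4)
    print(merge_async_update(w_global, w_client, client_round=7, server_round=10))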

Citations

Federated Learning under Distributed Concept Drift

TLDR
This work identifies the problem of drift adaptation as a time-varying clustering problem, and proposes two new clustering algorithms for reacting to drifts based on local drift detection and hierarchical clustering.
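
As a rough sketch of "drift adaptation as time-varying clustering": after a local drift signal fires, the server can re-cluster clients by the similarity of their recent model updates, so clients whose data drifted together share a model. The distance threshold and the use of flattened updates as features are assumptions for illustration, not the paper's exact algorithms.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def recluster_clients(client_updates, distance_threshold=1.0):
        # Hierarchical (agglomerative) clustering over flattened client
        # updates; returns a cluster id per client. Re-running this whenever
        # local drift detection triggers makes the clustering time-varying.
        X = np.stack([u.ravel() for u in client_updates])
        Z = linkage(X, method="average")
        return fcluster(Z, t=distance_threshold, criterion="distance")

Each resulting cluster would then train its own shared model until the next drift signal.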

Asynchronous Federated Learning on Heterogeneous Devices: A Survey

TLDR
This survey comprehensively analyzes and summarizes existing variants of AFL according to a novel classification mechanism, including device heterogeneity, data heterogeneity, privacy and security on heterogeneous devices, and applications on heterogeneous devices.

Resource-Aware Asynchronous Online Federated Learning for Nonlinear Regression

TLDR
The convergence of the proposed ASO-Fed is proved, and it is shown that, in the asynchronous setting, it is possible to achieve the same convergence as federated stochastic gradient (Online-FedSGD) while reducing communication tenfold.

Scaling distributed artificial intelligence/machine learning for decision dominance in all-domain operations

TLDR
This concept paper analyzes and compares centralized vs. distributed AI architectures in support of all-domain operations, and explores key attributes and capabilities that directly impact the resiliency and adaptability of the AI and its ability to provide insights and decision support at a speed and scale of relevance, and to converge effects across all warfighting domains.

References

Showing 1-10 of 51 references

Asynchronous Online Federated Learning for Edge Devices with Non-IID Data

TLDR
This paper presents an Asynchronous Online Federated Learning (ASO-Fed) framework, where the edge devices perform online learning with continuous streaming local data and a central server aggregates model parameters from clients in an asynchronous manner.
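
The client half of this pattern is ordinary online learning on whatever data just arrived. A minimal sketch, assuming a linear model with squared loss (ASO-Fed itself is model-agnostic and also maintains extra state for the server's asynchronous aggregation):

    import numpy as np

    def client_online_step(w, x_batch, y_batch, lr=0.01):
        # One SGD step on a mini-batch freshly drawn from the local stream;
        # the device never waits for other clients and pushes its model to
        # the server on its own schedule.
        residual = x_batch @ w - y_batch
        grad = x_batch.T @ residual / len(y_batch)
        return w - lr * grad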

FedAT: A Communication-Efficient Federated Learning Method with Asynchronous Tiers under Non-IID Data

TLDR
FedAT synergistically combines synchronous intra-tier training and asynchronous cross-tier training through tiering, which minimizes the straggler effect with improved convergence speed and test accuracy, and compresses the uplink and downlink communications using an efficient, polyline-encoding-based compression algorithm.
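
A sketch of the two aggregation levels. The inverse-frequency weighting mirrors FedAT's idea of boosting tiers that report less often, but the exact formula here is illustrative:

    def intra_tier_fedavg(models, sample_counts):
        # Synchronous step inside one tier: classic sample-weighted FedAvg.
        total = sum(sample_counts)
        return sum(m * (n / total) for m, n in zip(models, sample_counts))

    def cross_tier_merge(global_w, tier_w, tier_id, update_counts, base_mix=0.5):
        # Asynchronous step across tiers: whenever a tier finishes a round,
        # fold its model in. Tiers that update rarely (slow tiers) get a
        # larger weight so fast tiers do not dominate the global model.
        inv = [1.0 / max(c, 1) for c in update_counts]
        alpha = base_mix * inv[tier_id] / sum(inv)
        return (1.0 - alpha) * global_w + alpha * tier_w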

Massively Distributed Concept Drift Handling in Large Networks

TLDR
Two algorithms to handle concept drift are presented, and a thorough experimental analysis demonstrates that they outperform known competing methods when the number of independent local samples is limited relative to the speed of drift.

Federated Learning: Strategies for Improving Communication Efficiency

TLDR
Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables (e.g., a low-rank matrix or a random mask); and sketched updates, where the user learns a full model update and then compresses it using a combination of quantization, random rotations, and subsampling.
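
Both ideas are easy to sketch for a single tensor. The mask fraction, the sign-based quantizer, and the scaling below are illustrative stand-ins for the paper's parametrizations:

    import numpy as np

    rng = np.random.default_rng(0)

    def structured_update_mask(shape, keep_frac=0.1):
        # Structured update via a random mask: the client only ever learns
        # (and transmits) the unmasked coordinates.
        return rng.random(shape) < keep_frac

    def sketched_update(full_update, keep_frac=0.1):
        # Sketched update: compute the full update, then compress it by
        # subsampling coordinates and quantizing the survivors to
        # sign * (mean magnitude).
        mask = rng.random(full_update.shape) < keep_frac
        kept = full_update[mask]
        scale = np.abs(kept).mean() if kept.size else 0.0
        out = np.zeros_like(full_update)
        out[mask] = np.sign(kept) * scale
        return out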

TiFL: A Tier-based Federated Learning System

TLDR
This work proposes TiFL, a Tier-based Federated Learning System, which divides clients into tiers based on their training performance and selects clients from the same tier in each training round to mitigate the straggler problem caused by heterogeneity in resource and data quantity.
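
A sketch of the tiering step under simple assumptions (equal-sized tiers split by profiled round latency; TiFL additionally adapts its tier selection over time):

    import random

    def assign_tiers(latency_by_client, num_tiers=3):
        # Rank clients by measured per-round training latency and cut the
        # ranking into equal-sized tiers (tier 0 = fastest).
        ranked = sorted(latency_by_client, key=latency_by_client.get)
        size = max(1, -(-len(ranked) // num_tiers))  # ceiling division
        return {c: min(i // size, num_tiers - 1) for i, c in enumerate(ranked)}

    def select_from_tier(tier_by_client, tier_id, k):
        # Each round, sample participants from a single tier so co-trainers
        # run at similar speed and nobody waits on a straggler.
        pool = [c for c, t in tier_by_client.items() if t == tier_id]
        return random.sample(pool, min(k, len(pool)))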

Reacting to Different Types of Concept Drift: The Accuracy Updated Ensemble Algorithm

TLDR
A new data stream classifier, called the Accuracy Updated Ensemble (AUE2), which aims at reacting equally well to different types of drift, and combines accuracy-based weighting mechanisms known from block-based ensembles with the incremental nature of Hoeffding Trees.
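
The weighting idea can be sketched as follows. The formula mirrors AUE2's MSE-based weight on the most recent block (AUE2's exact formula also normalizes against the error of a random classifier), and a scikit-learn-style classifier interface is assumed:

    import numpy as np

    def block_mse(member, X_block, y_block):
        # Mean squared error of the probability the member assigns to the
        # true class on the latest block (lower = better). Assumes y_block
        # holds integer class indices aligned with predict_proba's columns.
        proba = member.predict_proba(X_block)
        p_true = proba[np.arange(len(y_block)), y_block]
        return np.mean((1.0 - p_true) ** 2)

    def aue2_style_weight(member, X_block, y_block, eps=1e-9):
        # Accuracy-based weight: members that fit recent data get more say,
        # so the ensemble tracks gradual and sudden drift alike.
        return 1.0 / (block_mse(member, X_block, y_block) + eps)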

Concept drift in Streaming Data Classification: Algorithms, Platforms and Issues

Federated Optimization in Heterogeneous Networks

TLDR
This work introduces FedProx, a framework to tackle heterogeneity in federated networks, and provides convergence guarantees for learning over data from non-identical distributions (statistical heterogeneity) while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.
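
The key mechanism is a proximal term added to each device's local objective, F_k(w) + (mu/2)·||w - w_t||^2. One local gradient step looks like this (the gradient oracle and step size are placeholders; FedProx only requires the subproblem to be solved inexactly):

    def fedprox_local_step(w, w_global, grad_fk, mu=0.1, lr=0.01):
        # Gradient step on the FedProx local objective
        #   F_k(w) + (mu / 2) * ||w - w_global||^2.
        # The proximal term lets a device do a variable amount of local work
        # without drifting too far from the current global model.
        return w - lr * (grad_fk(w) + mu * (w - w_global))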

Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge

T. Nishio and R. Yonetani. ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019.
TLDR
The proposed FedCS protocol solves a client selection problem with resource constraints, which allows the server to aggregate as many client updates as possible and to accelerate performance improvement in ML models.
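
A sketch of the greedy selection idea, assuming each client reports estimated update and upload times in a resource-request step and uploads happen one at a time (a simplification of FedCS's scheduling model):

    def greedy_client_selection(times, deadline):
        # `times` maps client id -> (t_update, t_upload), both estimates.
        # Repeatedly pick the client whose inclusion keeps the running
        # finish time smallest; stop once the deadline would be exceeded.
        selected, channel_free = [], 0.0
        remaining = dict(times)
        while remaining:
            cid = min(remaining,
                      key=lambda c: max(remaining[c][0], channel_free) + remaining[c][1])
            t_up, t_ul = remaining.pop(cid)
            finish = max(t_up, channel_free) + t_ul
            if finish > deadline:
                break
            selected.append(cid)
            channel_free = finish
        return selected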

Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT

TLDR
This work proposes adapting FedAvg to use a distributed form of Adam optimization, greatly reducing the number of rounds to convergence, and combines this with novel compression techniques to produce Communication-Efficient FedAvg (CE-FedAvg), which can converge to a target accuracy and is more robust to aggressive compression.
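
One way to read "distributed Adam" is that clients run Adam locally and the server averages the optimizer's moment estimates along with the weights, so local optimizers stay warm across rounds. This sketch assumes that reading and a dict-based client state; it is not CE-FedAvg's exact procedure:

    def merge_adam_states(client_states, sample_counts):
        # Sample-weighted average of model weights AND Adam moments (m, v)
        # across clients, so the next round's local Adam steps start from
        # consistent optimizer state instead of cold moments.
        total = sum(sample_counts)
        merged = {}
        for key in ("weights", "m", "v"):
            merged[key] = sum(s[key] * (n / total)
                              for s, n in zip(client_states, sample_counts))
        return merged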
...