Federated Learning With Cooperating Devices: A Consensus Approach for Massive IoT Networks

@article{Savazzi2020FederatedLW,
  title={Federated Learning With Cooperating Devices: A Consensus Approach for Massive IoT Networks},
  author={Stefano Savazzi and Monica Nicoli and Vittorio Rampa},
  journal={IEEE Internet of Things Journal},
  year={2020},
  volume={7},
  pages={4641--4654}
}
Federated learning (FL) is emerging as a new paradigm to train machine learning (ML) models in distributed systems. Rather than sharing and disclosing the training dataset to a server, the model parameters (e.g., neural network weights and biases) are optimized collectively by large populations of interconnected devices acting as local learners. FL can be applied to power-constrained Internet of Things (IoT) devices with slow and sporadic connections. In addition, it does not need data… 
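
As a concrete illustration of the consensus approach the paper advocates, below is a minimal Python/NumPy sketch of server-less FL on a toy linear-regression task: each device takes a gradient step on its private data and then averages its parameters with its ring neighbors through doubly stochastic (Metropolis) mixing weights. The topology, learning rate, and task are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): K devices jointly fit a linear model w on private data.
K, d, n = 8, 3, 20
w_true = rng.normal(size=d)
X = [rng.normal(size=(n, d)) for _ in range(K)]
y = [X[k] @ w_true + 0.1 * rng.normal(size=n) for k in range(K)]

# Ring topology: each device exchanges parameters only with its two neighbors.
A = np.zeros((K, K))
for k in range(K):
    A[k, (k - 1) % K] = A[k, (k + 1) % K] = 1.0

# Doubly stochastic Metropolis weights, so repeated mixing averages the network.
deg = A.sum(axis=1)
W = np.zeros((K, K))
for i in range(K):
    for j in range(K):
        if A[i, j]:
            W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    W[i, i] = 1.0 - W[i].sum()

w = np.zeros((K, d))  # one parameter vector per device, no server copy anywhere
for t in range(200):
    # Local step: each device descends its own empirical loss.
    grads = np.stack([X[k].T @ (X[k] @ w[k] - y[k]) / n for k in range(K)])
    # Consensus step: mix the updated parameters with the neighbors'.
    w = W @ (w - 0.05 * grads)

print("max deviation from true model:", float(np.abs(w - w_true).max()))

No device ever uploads data or parameters to a server; agreement emerges purely from repeated neighbor averaging.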

Citations

Federated Learning with Mutually Cooperating Devices: A Consensus Approach Towards Server-Less Model Optimization

A distributed FL approach is proposed that performs a decentralized fusion of local model parameters by leveraging mutual cooperation between the devices and local data operations via consensus-based methods, laying the groundwork for integration of FL methods within future wireless networks characterized by distributed and decentralized connectivity.

Federated Learning for Internet of Things: Recent Advances, Taxonomy, and Open Challenges

The recent advances of federated learning toward enabling FL-powered IoT applications are presented, and a set of metrics, such as sparsification, robustness, quantization, scalability, security, and privacy, is delineated in order to rigorously evaluate these advances.

Federated Learning for Vehicular Internet of Things: Recent Advances and Open Issues

A brief survey of existing studies on FL and its use in wireless IoT is conducted, and the significance and technical challenges of applying FL in vehicular IoT, together with future research directions, are discussed.

Wireless Communications for Collaborative Federated Learning

A novel FL framework, called collaborative FL (CFL), is introduced that enables edge devices to implement FL with less reliance on a central controller, and a number of communication techniques are proposed to improve CFL performance.

Federated Learning for Internet of Things: A Comprehensive Survey

This article explores the potential of FL for enabling a wide range of IoT services, including IoT data sharing, data offloading and caching, attack detection, localization, mobile crowdsensing, and IoT privacy and security.

A Joint Decentralized Federated Learning and Communications Framework for Industrial Networks

A real-time framework for the analysis of decentralized FL systems running on top of industrial wireless networks rooted in the popular Time Slotted Channel Hopping (TSCH) radio interface of the IEEE 802.15.4e standard is proposed.

Decentralized Federated Learning via SGD over Wireless D2D Networks

Hong Xing, O. Simeone, S. Bi · 2020 IEEE 21st International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)
Wireless protocols are proposed that implement decentralized stochastic gradient descent (DSGD) for federated learning while accounting for path loss, fading, blockages, and mutual interference.
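
A rough sense of DSGD over an unreliable D2D network can be given in a few lines of NumPy: in each slot, links fail at random (a crude stand-in for fading and blockage), mixing weights are recomputed for the surviving graph, and each device combines a local gradient step with neighbor averaging. The failure probability and the scalar objective below are hypothetical simplifications, not the protocol proposed in the paper.

import numpy as np

rng = np.random.default_rng(1)

def metropolis(A):
    # Doubly stochastic mixing weights for whatever graph survived this slot.
    K = A.shape[0]
    deg = A.sum(axis=1)
    W = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            if A[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

K = 6
mesh = np.ones((K, K)) - np.eye(K)   # full D2D mesh when every link is up
theta = rng.normal(size=K)           # one scalar parameter per device, for brevity
targets = rng.normal(size=K) + 3.0   # device k privately minimizes (theta - targets[k])**2

for t in range(300):
    # Each link survives the slot with probability 0.7; a crude stand-in for
    # fading/blockage (real protocols would model the channel explicitly).
    up = rng.random((K, K)) < 0.7
    A = mesh * (up & up.T)           # keep links alive in both directions
    W = metropolis(A)
    theta = W @ (theta - 0.05 * 2.0 * (theta - targets))

print("device estimates:", np.round(theta, 3), "| average target:", round(float(targets.mean()), 3))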

Budgeted Online Selection of Candidate IoT Clients to Participate in Federated Learning

This work addresses the problem of optimizing accuracy in stateful FL under a budgeted number of candidate clients by selecting the best candidates, in terms of test accuracy, to participate in the training process; the proposed heuristic outperforms an online random algorithm with up to a 27% gain in accuracy.
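
The selection step itself can be pictured with a trivially greedy sketch: given estimated test accuracies for a pool of candidate clients and a participation budget, keep the top scorers. The paper's online heuristic is considerably more involved (clients arrive over time and accuracies are stateful estimates); the snippet below only fixes the intuition, and all names are made up.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pool: estimated test accuracy per candidate client.
candidates = {f"client{i}": float(rng.uniform(0.5, 0.95)) for i in range(20)}
budget = 5  # at most this many clients may join the training round

# Greedy stand-in for budgeted selection: keep the top `budget` scorers.
selected = sorted(candidates, key=candidates.get, reverse=True)[:budget]
print("selected:", selected)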

Federated vs. Centralized Machine Learning under Privacy-elastic Users: A Comparative Analysis

Interestingly, asymmetry in data availability across users, as well as variation in their number, is shown to hardly affect the traffic and energy needs of the FL approach, pointing both to its promising potential and to the need for further research.

Autonomy and Intelligence in the Computing Continuum: Challenges, Enablers, and Future Directions for Orchestration

It is claimed that to support the constantly growing requirements of intelligent applications in the device-edge-cloud computing continuum, resource orchestration needs to embrace edge AI and emphasize local autonomy and intelligence.
...

References

SHOWING 1-10 OF 48 REFERENCES

Decentralized Federated Learning: A Segmented Gossip Approach

A segmented gossip approach is proposed that not only makes full use of node-to-node bandwidth but also achieves good training convergence; experimental results show that training time can be greatly reduced compared with centralized federated learning.
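
In rough terms, segmented gossip means a device does not pull a whole model from one peer but different segments from different peers, so node-to-node bandwidth is used in parallel. A minimal NumPy sketch under made-up sizes (no bandwidth model, uniform peer sampling) might look like:

import numpy as np

rng = np.random.default_rng(3)

K, d, S = 10, 12, 4               # devices, model size, segments per model
models = rng.normal(size=(K, d))  # current local model of each device
seg = d // S

def gossip_round(models):
    # Each device refreshes every segment from two random peers plus itself,
    # so different segments travel over different links in parallel.
    new = models.copy()
    for k in range(K):
        for s in range(S):
            peers = rng.choice([j for j in range(K) if j != k], size=2, replace=False)
            sl = slice(s * seg, (s + 1) * seg)
            new[k, sl] = models[np.r_[k, peers], sl].mean(axis=0)
    return new

for _ in range(30):
    models = gossip_round(models)

# After enough rounds the replicas agree: the spread across devices shrinks.
print("max per-coordinate spread:", float(models.std(axis=0).max()))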

Adaptive Federated Learning in Resource Constrained Edge Computing Systems

This paper analyzes the convergence bound of distributed gradient descent from a theoretical point of view and proposes a control algorithm that determines the best tradeoff between local updates and global parameter aggregation to minimize the loss function under a given resource budget.
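
The tradeoff being controlled can be made concrete with a small back-of-the-envelope scan: under a fixed resource budget, more local updates per aggregation (larger tau) buy more total gradient steps but fewer synchronizations, which increases divergence between local models. The costs below are invented for illustration; the paper's algorithm adapts tau from a convergence bound rather than scanning statically.

# Invented costs, only to expose the tradeoff the control algorithm navigates.
R = 1000.0   # total resource budget (arbitrary units)
c = 1.0      # cost of one local update on a device
b = 10.0     # cost of one global aggregation

for tau in (1, 2, 5, 10, 20, 50):
    rounds = R / (tau * c + b)    # aggregations affordable under the budget
    local_steps = tau * rounds    # total local gradient steps performed
    print(f"tau={tau:3d}  rounds={rounds:6.1f}  local_steps={local_steps:7.1f}")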

Federated Optimization: Distributed Machine Learning for On-Device Intelligence

We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes.

BrainTorrent: A Peer-to-Peer Environment for Decentralized Federated Learning

The overall effectiveness of FL for the challenging task of whole-brain segmentation is demonstrated, and the proposed server-less BrainTorrent approach is observed to not only outperform the traditional server-based one but also reach performance similar to that of a model trained on pooled data.

Federated Learning: Strategies for Improving Communication Efficiency

Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables (e.g., either low-rank or a random mask); and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
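
A heavily simplified sketch of the sketched-updates idea, assuming a random mask followed by 1-bit quantization (the paper combines quantization with random rotations and subsampling; rotations are omitted here, and the threshold rule is a simplification):

import numpy as np

rng = np.random.default_rng(4)

update = rng.normal(size=10_000)      # a dense model update (hypothetical size)

# Random mask: transmit only ~10% of the entries.
keep = rng.random(update.size) < 0.1
vals = update[keep]

# 1-bit quantization of the surviving values.
lo, hi = vals.min(), vals.max()
bits = vals > (lo + hi) / 2.0         # one bit per kept entry
dequant = np.where(bits, hi, lo)      # receiver-side reconstruction

recon = np.zeros_like(update)
recon[keep] = dequant
err = np.linalg.norm(recon - update) / np.linalg.norm(update)
print(f"kept {keep.mean():.0%} of entries, relative reconstruction error {err:.2f}")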

Communication-Efficient On-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data

Federated distillation (FD) is proposed: a distributed model training algorithm whose communication payload size is much smaller than that of a benchmark scheme, federated learning (FL), particularly when the model size is large.
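
The payload argument can be checked with one line of arithmetic: FL uploads on the order of the model size, while FD uploads one averaged logit vector per label. A toy comparison with invented sizes:

# Invented sizes, only to illustrate the payload gap that motivates FD.
model_params = 1_000_000           # floats a device would upload per round in FL
num_labels, logit_dim = 10, 10     # FD uploads one mean logit vector per label
fd_payload = num_labels * logit_dim
print(f"FL: {model_params:,} floats/round | FD: {fd_payload} floats/round "
      f"({model_params // fd_payload:,}x smaller)")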

Federated Learning for Ultra-Reliable Low-Latency V2V Communications

It is shown that FL enables the proposed distributed method to estimate the tail distribution of queues with an accuracy very close to that of a centralized solution, while exchanging up to 79% less data.

Communication-Efficient Learning of Deep Networks from Decentralized Data

This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
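
The iterative model averaging loop itself fits in a few lines; the sketch below, on a toy linear-regression task with full client participation, is only meant to show the structure (run E local steps per client, average the returned weights), not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(5)

# Toy task (hypothetical): K clients share a linear model; full participation.
K, d, n = 5, 4, 50
w_true = rng.normal(size=d)
X = [rng.normal(size=(n, d)) for _ in range(K)]
y = [Xk @ w_true + 0.1 * rng.normal(size=n) for Xk in X]

w_global = np.zeros(d)
for _ in range(50):                    # communication rounds
    local_models = []
    for Xk, yk in zip(X, y):           # real FedAvg samples a fraction of clients
        w = w_global.copy()
        for _ in range(5):             # E = 5 local gradient steps per round
            w -= 0.05 * Xk.T @ (Xk @ w - yk) / n
        local_models.append(w)
    # Server step: iterative model averaging over the returned weights.
    w_global = np.mean(local_models, axis=0)

print("distance to ground truth:", float(np.linalg.norm(w_global - w_true)))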

A Survey of Traffic Issues in Machine-to-Machine Communications Over LTE

The traffic issues of M2M communications are investigated, together with the challenges they impose on both the access channel and the traffic channel of a radio access network and the congestion problems they create in the core network (CN).