Self-organizing Democratized Learning: Towards Large-scale Distributed Learning Systems

@article{Nguyen2020SelforganizingDL,
  title={Self-organizing Democratized Learning: Towards Large-scale Distributed Learning Systems},
  author={Minh N. H. Nguyen and Shashi Raj Pandey and Tri Nguyen Dang and Eui-nam Huh and Choong Seon Hong and Nguyen H. Tran and Walid Saad},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2020},
  volume={PP}
}
Emerging cross-device artificial intelligence (AI) applications require a transition from conventional centralized learning systems toward large-scale distributed AI systems that can collaboratively perform complex learning tasks. In this regard, democratized learning (Dem-AI) lays out a holistic philosophy with underlying principles for building large-scale distributed and democratized machine learning systems. The outlined principles are meant to study a generalization in distributed learning… 

Towards Effective Clustered Federated Learning: A Peer-to-peer Framework with Adaptive Neighbor Matching

Theoretical analysis and empirical experiments show that the proposed algorithm, PANM, is superior to its P2P FL counterparts and achieves better performance than the centralized clustered FL method.

Edge-Assisted Democratized Learning Toward Federated Analytics

The hierarchical learning structure of the proposed edge-assisted Dem-AI mechanism, Edge-DemLearn, is shown to be a practical framework that strengthens generalization capability in support of FA, and is validated as a flexible model-training mechanism for building a distributed control and aggregation methodology across regions by leveraging the distributed computing infrastructure.

Asynchronous Hierarchical Federated Learning

  • Xing Wang, Yijun Wang
  • Computer Science
    ArXiv
  • 2022
The proposed asynchronous hierarchical federated learning scheme tolerates system heterogeneity and achieves fast convergence; regularized stochastic gradient descent is performed on the workers so that the instability of asynchronous learning is alleviated.

Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point

Network-aware CE-FL is formulated to adaptively optimize all the network elements by tuning their contributions to the learning process; the resulting problem turns out to be a non-convex mixed-integer program.

A Contribution-Based Device Selection Scheme in Federated Learning

This strategy combines exploration of data freshness through random device selection with exploitation through simplified estimates of device contributions, improving the performance of the trained model in terms of both generalization and personalization.
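
As a rough illustration of how such an exploration/exploitation trade-off can be wired up, the sketch below uses an epsilon-greedy rule; the epsilon-greedy framing, the select_devices name, and the contribution scores are illustrative assumptions, not the paper's actual scheme.

    import numpy as np

    def select_devices(contribution_estimates, k, epsilon=0.2, rng=None):
        # Sketch of a contribution-based selection rule (epsilon-greedy framing is an
        # assumption): with probability epsilon, explore data freshness by sampling
        # devices uniformly at random; otherwise, exploit by taking the k devices with
        # the largest estimated contributions to the global model.
        rng = rng if rng is not None else np.random.default_rng()
        n = len(contribution_estimates)
        if rng.random() < epsilon:
            return rng.choice(n, size=k, replace=False)
        return np.argsort(contribution_estimates)[-k:]

    # Example: 20 devices, select 5 per round.
    scores = np.random.default_rng(1).random(20)
    print(select_devices(scores, k=5))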

A Data-Driven Democratized Control Architecture for Regional Transmission Operators

A data-driven democratized control architecture is proposed that considers two democratization pathways to assist transmission system operators, with a targeted use case of developing online proactive islanding strategies.

Seven Defining Features of Terahertz (THz) Wireless Systems: A Fellowship of Communication and Sensing

This paper panoramically examines the steps needed to efficiently and reliably deploy and operate next-generation THz wireless systems that will synergistically support a fellowship of communication and sensing services, and presents the key THz 6G use cases along with their associated major challenges and open problems.

Implicit model specialization through dag-based decentralized federated learning

This work proposes a unified approach to decentralization and personalization in federated learning based on a directed acyclic graph (DAG) of model updates, which enables the evolution of specialized models that focus on a subset of the data and therefore cover non-IID data better than federated learning in a centralized or blockchain-based setup.

Edge Intelligence in 6G Systems

This chapter provides a vision for edge intelligence as a key building block of 6G wireless systems, and identifies the need to provide a sustainable and proactive network design and optimization, by enabling an explainable edge intelligence backed by data-science and theory-based models.

Environment-Adaptive Multiple Access for Distributed V2X Network: A Reinforcement Learning Framework

This paper proposes an environment-adaptive resource allocation mechanism that can be an efficient solution to the air-interface congestion a V2X network often suffers from, and that aims to grant a higher chance of transmission to vehicles with a higher crash risk.

References


Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms

Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits.
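
A minimal illustration of the drop-in claim, assuming a TensorFlow/Keras environment: swapping the dataset module is the only change needed, since both datasets return 60,000 training and 10,000 test grayscale images of size 28x28.

    from tensorflow.keras.datasets import mnist, fashion_mnist

    # Identical call signature and identical shapes: (60000, 28, 28) train, (10000, 28, 28) test.
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()  # drop-in replacement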

Advances and Open Problems in Federated Learning

Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.

Distributed and Democratized Learning: Philosophy and Research Challenges

A reference design, inspired by various interdisciplinary fields, is presented as a guideline to realize future Dem-AI systems, and four underlying mechanisms in the design are introduced: a plasticity-stability transition mechanism, self-organizing hierarchical structuring, specialized learning, and generalization.

Federated Optimization in Heterogeneous Networks

This work introduces a framework, FedProx, to tackle heterogeneity in federated networks; it provides convergence guarantees when learning over data from non-identical distributions (statistical heterogeneity) while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.
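
The core of FedProx is the proximal term added to each client's local objective, i.e. inexactly minimizing F_k(w) + (mu/2)||w - w^t||^2 around the current global model w^t. The sketch below is a minimal NumPy rendering of that idea on a toy least-squares problem; the function names and hyperparameters are illustrative, not the reference implementation.

    import numpy as np

    def fedprox_local_update(w_global, grad_fn, data, mu=0.1, lr=0.05, steps=20):
        # Inexactly minimize F_k(w) + (mu/2) * ||w - w_global||^2 with local SGD.
        w = w_global.copy()
        for _ in range(steps):
            g = grad_fn(w, data) + mu * (w - w_global)  # proximal term anchors w to w_global
            w -= lr * g
        return w

    # Toy example: each client holds a small linear-regression problem (X, y).
    def lsq_grad(w, data):
        X, y = data
        return X.T @ (X @ w - y) / len(y)

    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(4)]
    w_global = np.zeros(5)
    for _ in range(10):  # communication rounds
        local_models = [fedprox_local_update(w_global, lsq_grad, d) for d in clients]
        w_global = np.mean(local_models, axis=0)  # FedAvg-style aggregation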

LEAF: A Benchmark for Federated Settings

LEAF is proposed, a modular benchmarking framework for learning in federated settings that includes a suite of open-source federated datasets, a rigorous evaluation framework, and a set of reference implementations, all geared towards capturing the obstacles and intricacies of practical federated environments.

Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks

A centralized algorithm based on the block coordinate descent method and a decentralized JP-miADMM algorithm are designed to solve the joint resource optimization and hyper-learning-rate control problem, which accounts for the energy consumption of mobile devices and the overall learning time.

Personalized Federated Learning with Moreau Envelopes

This work proposes an algorithm for personalized FL (pFedMe) using Moreau envelopes as clients' regularized loss functions, which help decouple personalized model optimization from global model learning in a bi-level problem stylized for personalized FL.
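
In pFedMe the personalized model is defined through a Moreau envelope, theta_i ~ argmin_theta f_i(theta) + (lambda/2)||theta - w||^2, and the client's local model w then takes gradient steps of the envelope, which equal lambda*(w - theta_i). The sketch below is a bare-bones rendition of that bi-level structure; the step sizes and the gradient-descent inner solver are assumptions.

    import numpy as np

    def personalize(w_local, grad_fn, data, lam=15.0, lr=0.01, inner_steps=30):
        # Approximately solve theta = argmin_theta f_i(theta) + (lam/2)*||theta - w_local||^2,
        # the Moreau-envelope subproblem that defines the personalized model.
        theta = w_local.copy()
        for _ in range(inner_steps):
            theta -= lr * (grad_fn(theta, data) + lam * (theta - w_local))
        return theta

    def pfedme_client_round(w_global, grad_fn, data, lam=15.0, eta=0.05, local_steps=5):
        # Client round: the local model w follows the envelope gradient lam*(w - theta),
        # while theta is the personalized model kept on the device.
        w = w_global.copy()
        theta = w.copy()
        for _ in range(local_steps):
            theta = personalize(w, grad_fn, data, lam=lam)
            w -= eta * lam * (w - theta)
        return w, theta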

Adaptive Personalized Federated Learning

Information-theoretically, it is proved that the mixture of local and global models can reduce the generalization error, and a communication-reduced bilevel optimization method is proposed that reduces the communication rounds to $O(\sqrt{T})$ and can achieve a convergence rate of $O(1/T)$ with some residual error.
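
A minimal sketch of the model mixture behind this result: the personalized model is a convex combination alpha * v_i + (1 - alpha) * w of the client's local model v_i and the global model w, and both components are updated on the client's data. The function names and the single-gradient-step updates below are illustrative assumptions.

    import numpy as np

    def mix(w_global, v_local, alpha):
        # Personalized model as a convex mixture of local and global models.
        return alpha * v_local + (1.0 - alpha) * w_global

    def apfl_client_step(w_global, v_local, alpha, grad_fn, data, lr=0.05):
        # Update the client's copy of the global model on the local loss, then update the
        # local model on the loss of the mixed model (d mixed / d v_local = alpha).
        w = w_global - lr * grad_fn(w_global, data)
        v = v_local - lr * alpha * grad_fn(mix(w, v_local, alpha), data)
        return w, v

    # Example usage with toy vectors and a quadratic loss 0.5*||m - data||^2.
    w0, v0 = np.zeros(5), np.zeros(5)
    w1, v1 = apfl_client_step(w0, v0, alpha=0.25,
                              grad_fn=lambda m, d: m - d, data=np.ones(5))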

Personalized Federated Learning: A Meta-Learning Approach

A personalized variant of the well-known Federated Averaging algorithm is studied and its performance is characterized by the closeness of underlying distributions of user data, measured in terms of distribution distances such as Total Variation and 1-Wasserstein metric.
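
The personalized variant follows the MAML recipe: each client adapts the global model with one (or a few) gradient steps on its own data, and the server averages gradients evaluated at the adapted points. Below is a first-order sketch (Hessian terms dropped); the support/query split and helper names are assumptions for illustration.

    import numpy as np

    def perfedavg_meta_grad(w, grad_fn, data_support, data_query, alpha=0.01):
        # First-order meta-gradient: adapt on the support split with one gradient step,
        # then return the gradient of the adapted model on the query split.
        w_adapted = w - alpha * grad_fn(w, data_support)
        return grad_fn(w_adapted, data_query)

    def server_round(w, client_data, grad_fn, beta=0.05):
        # Server round (sketch): average the clients' meta-gradients and take a step.
        grads = [perfedavg_meta_grad(w, grad_fn, support, query)
                 for support, query in client_data]
        return w - beta * np.mean(grads, axis=0)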

Incentivize to Build: A Crowdsourcing Framework for Federated Learning

This work formulates a utility maximization problem to tackle the difficulty of maintaining communication efficiency when participating clients implement an uncoordinated computation strategy during aggregation of model parameters, and proposes a novel crowdsourcing framework involving a number of participating clients with local training data to leverage FL.