Corpus ID: 235262508

Oort: Efficient Federated Learning via Guided Participant Selection

@inproceedings{Lai2021OortEF,
  title={Oort: Efficient Federated Learning via Guided Participant Selection},
  author={Fan Lai and Xiangfeng Zhu and Harsha V. Madhyastha and Mosharaf Chowdhury},
  booktitle={OSDI},
  year={2021}
}
Federated Learning (FL) is an emerging direction in distributed machine learning (ML) that enables in-situ model training and testing on edge data. Despite having the same end goals as traditional ML, FL executions differ significantly in scale, spanning thousands to millions of participating devices. As a result, data characteristics and device capabilities vary widely across clients. Yet, existing efforts randomly select FL participants, which leads to poor model and system efficiency. In this… 

FedBalancer: data and pace control for efficient federated learning on heterogeneous clients

TLDR
This work proposes FedBalancer, a systematic FL framework that actively selects clients' training samples while respecting the privacy and computational capabilities of clients, and introduces an adaptive deadline control scheme that predicts the optimal deadline for each round with varying client training data.

Towards Energy-Aware Federated Learning on Battery-Powered Clients

TLDR
EAFL is a power-aware FL selection method that cherry-picks clients with higher battery levels to maximize system efficiency, jointly minimizing time-to-accuracy and maximizing the remaining on-device battery levels.

Federated Analytics Informed Distributed Industrial IoT Learning with Non-IID Data

TLDR
This paper proposes a Federated skewness Analytics and Client Selection mechanism (FedACS) to quantify data skewness in a privacy-preserving way and use this information to help downstream federated learning tasks.

FedorAS: Federated Architecture Search under system heterogeneity

TLDR
The FedorAS system is designed to discover and train promising architectures for devices of varying capabilities holding non-IID distributed data, and shows better performance than state-of-the-art federated solutions while maintaining resource efficiency.

Towards Fair Federated Recommendation Learning: Characterizing the Inter-Dependence of System and Data Heterogeneity

TLDR
A data-driven approach shows the inter-dependence of data and system heterogeneity in real-world data, quantifies its impact on overall model quality and fairness, and demonstrates that modeling realistic system-induced data heterogeneity is essential to achieving fair federated recommendation learning.

FLAME: Federated Learning Across Multi-device Environments

TLDR
This paper proposes FLAME, a user-centered FL training approach that counters statistical and system heterogeneity in multi-device environments (MDEs) and brings consistency to inference performance across devices.

Birds of a Feather Help: Context-aware Client Selection for Federated Learning

TLDR
This paper proposes a novel Neural Contextual Combinatorial Bandit approach, NCCB, that gracefully handles the non-trivial relationship between the extracted features and rewards and satisfies the combinatorial constraints imposed by federated learning.

A Multi-agent Reinforcement Learning Approach for Efficient Client Selection in Federated Learning

TLDR
Inspired by the recent success of Multi-Agent Reinforcement Learning (MARL) in solving complex control problems, FedMarl is presented, a federated learning framework that relies on trained MARL agents to perform efficient run-time client selection.

Learning Advanced Client Selection Strategy for Federated Learning

TLDR
Inspired by the recent success of Multi-Agent Reinforcement Learning (MARL) in solving complex control problems, FedMarl is presented, an MARL-based FL framework that performs efficient run-time client selection and can improve model accuracy with much lower processing latency and communication cost.

System Optimization in Synchronous Federated Training: A Survey

TLDR
This paper surveys highly relevant attempts in the FL literature and organizes them by the related training phases in the standard workflow: selection, configuration, and reporting. It also reviews exploratory work, including measurement studies and benchmarking tools, that supports FL developers.

References

Showing 1-10 of 76 references

Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition

TLDR
An audio dataset of spoken words is presented, designed to help train and evaluate keyword spotting systems, along with a suggested methodology for reproducible and comparable accuracy metrics for this task.

ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

TLDR
An extremely computation-efficient CNN architecture named ShuffleNet is introduced, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs), to greatly reduce computation cost while maintaining accuracy.

Towards Federated Learning at Scale: System Design

TLDR
A scalable production system for Federated Learning in the domain of mobile devices, based on TensorFlow, is built; the paper describes the resulting high-level design and sketches some of the challenges and their solutions.

Agnostic Federated Learning

TLDR
This work proposes a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions, and shows that this framework naturally yields a notion of fairness.

Adaptive Federated Optimization

TLDR
This work proposes federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyzes their convergence in the presence of heterogeneous data for general nonconvex settings to highlight the interplay between client heterogeneity and communication efficiency.

Generative Models for Effective ML on Private, Decentralized Datasets

TLDR
This paper demonstrates that generative models - trained using federated methods and with formal differential privacy guarantees - can be used effectively to debug many commonly occurring data issues even when the data cannot be directly inspected.

Federated Optimization in Heterogeneous Networks

TLDR
This work introduces a framework, FedProx, to tackle heterogeneity in federated networks, and provides convergence guarantees for this framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.

Applied Federated Learning: Improving Google Keyboard Query Suggestions

TLDR
This paper uses federated learning in a commercial, global-scale setting to train, evaluate and deploy a model to improve virtual keyboard search suggestion quality without direct access to the underlying user data.

FedScale: Benchmarking Model and System Performance of Federated Learning

TLDR
FedScale is a federated learning benchmarking suite with realistic datasets and a scalable runtime to enable reproducible FL research and highlight potential opportunities for heterogeneity-aware co-optimizations in FL.
...