Cost-Effective Federated Learning Design

@article{Luo2021CostEffectiveFL,
  title={Cost-Effective Federated Learning Design},
  author={Bing Luo and Xiang Li and Shiqiang Wang and Jianwei Huang and Leandros Tassiulas},
  journal={IEEE INFOCOM 2021 - IEEE Conference on Computer Communications},
  year={2021},
  pages={1-10}
}
  • Published 15 December 2020
  • Computer Science, Mathematics
Federated learning (FL) is a distributed learning paradigm that enables a large number of devices to collaboratively learn a model without sharing their raw data. Despite its practical efficiency and effectiveness, the iterative on-device learning process incurs a considerable cost in terms of learning time and energy consumption, which depends crucially on the number of selected clients and the number of local iterations in each training round. In this paper, we analyze how to design adaptive…
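To make the abstract's two control variables concrete, below is a minimal FedAvg-style sketch in Python, assuming a toy least-squares task and a simple linear time/energy cost model. The cost coefficients and all names (local_sgd, fedavg_round, num_clients_per_round, local_iters) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical per-round cost coefficients (illustrative assumptions,
# not the paper's actual cost model).
COMP_TIME_PER_ITER = 0.05   # seconds per local iteration
COMM_TIME_PER_ROUND = 0.20  # seconds to upload/download a model
ENERGY_PER_ITER = 0.01      # joules per local iteration
ENERGY_PER_UPLOAD = 0.05    # joules per model upload

def local_sgd(w, data, local_iters, lr=0.1):
    """Run `local_iters` steps of SGD on one client's least-squares loss."""
    X, y = data
    for _ in range(local_iters):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients, num_clients_per_round, local_iters, rng):
    """One FL round: sample clients, train locally, average, tally cost."""
    chosen = rng.choice(len(clients), size=num_clients_per_round, replace=False)
    local_models = [local_sgd(w_global.copy(), clients[i], local_iters)
                    for i in chosen]
    w_global = np.mean(local_models, axis=0)
    # Synchronous round: wall-clock time is one round of computation plus
    # communication; energy is summed over all participants.
    time_cost = local_iters * COMP_TIME_PER_ITER + COMM_TIME_PER_ROUND
    energy_cost = num_clients_per_round * (
        local_iters * ENERGY_PER_ITER + ENERGY_PER_UPLOAD)
    return w_global, time_cost, energy_cost
```

Raising local_iters trades more device computation for fewer communication rounds, while num_clients_per_round trades per-round energy for convergence speed; this is exactly the tension the paper's adaptive design navigates.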

Citations

Cost-Effective Federated Learning in Mobile Edge Networks
This paper analyzes how to design adaptive FL in mobile edge networks that optimally chooses the essential control variables (the number of participating clients and the number of local iterations per round) to minimize the total cost while ensuring convergence, and develops a low-cost sampling-based algorithm to learn the convergence-related unknown parameters.
Budget-Aware Online Control of Edge Federated Learning on Streaming Data With Stochastic Inputs
  • Yibo Jin, Lei Jiao, Zhuzhong Qian, Sheng Zhang, Sanglu Lu
  • IEEE Journal on Selected Areas in Communications
  • 2021
Performing federated learning continuously in edge networks while training data are dynamically and unpredictably streamed to the devices faces critical challenges, including the global model…
Resource-constrained Federated Edge Learning with Heterogeneous Data: Formulation and Analysis
  • Yi Liu, Yuanshao Zhu, James J. Q. Yu
  • Computer Science, Engineering
  • IEEE Transactions on Network Science and Engineering
  • 2021
Proposes a distributed approximate Newton-type algorithm with fast convergence to alleviate the communication-resource constraints of federated edge learning (FEEL), together with a simple but effective training scheme, FedOVA, to address the statistical challenge posed by heterogeneous data.
Efficient Federated Meta-Learning over Multi-Access Wireless Networks
Rigorously analyzes each device's contribution to the global loss reduction in each round, develops a federated meta-learning (FML) algorithm with a non-uniform device selection scheme (NUFM) to accelerate convergence, and formulates a resource allocation problem integrating NUFM in multi-access wireless systems to jointly improve the convergence rate and minimize the wall-clock time.
No Free Lunch: Balancing Learning and Exploitation at the Network Edge
Analyzes the cost of learning in a resource-constrained system, defining an optimization problem in which training a DRL agent improves the resource allocation strategy but also reduces the number of available resources.
On the Tradeoff between Energy, Precision, and Accuracy in Federated Quantized Neural Networks
Proposes a quantized FL framework that represents data with a finite level of precision in both local training and uplink transmission, reducing energy consumption by up to 53% compared to a standard FL model.
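As a rough illustration of representing updates "with a finite level of precision", the sketch below implements generic stochastic uniform quantization to b bits. This is a standard technique, not necessarily the exact quantizer proposed in that paper.

```python
import numpy as np

def quantize_uniform(x, bits, rng=np.random.default_rng()):
    """Stochastically round x onto a uniform b-bit grid over [min(x), max(x)]."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    pos = (x - lo) / scale                 # position on the grid, in [0, levels]
    frac = pos - np.floor(pos)
    # Round up with probability equal to the fractional part -> unbiased.
    q = np.floor(pos) + (rng.random(x.shape) < frac)
    return lo + q * scale
```

Stochastic rounding keeps the quantizer unbiased in expectation, which is why lower precision can often be tolerated during training.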
Optimizing the Numbers of Queries and Replies in Federated Learning with Differential Privacy
Studies a question largely overlooked by existing works: what are the optimal numbers of queries and replies in FL with differential privacy (DP) such that the final model accuracy is maximized? The analysis covers the two most extensively used DP mechanisms.
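The entry does not name the two DP mechanisms, but the Gaussian mechanism is among the most common in FL with DP; the hypothetical helper below shows the usual clip-then-add-noise pattern with the textbook (epsilon, delta) calibration, purely as background rather than as that paper's construction.

```python
import numpy as np

def dp_gaussian_update(update, clip_norm, epsilon, delta,
                       rng=np.random.default_rng()):
    """Clip a client update to L2 norm `clip_norm`, then add Gaussian noise
    calibrated for (epsilon, delta)-DP via the standard analytic bound."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)
```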
Enabling Long-Term Cooperation in Cross-Silo Federated Learning: A Repeated Game Perspective
Models clients' long-term selfish participation behavior as an infinitely repeated game, with the stage game being a selfish participation game in one cross-silo FL process (SPFL); derives the unique Nash equilibrium (NE) and proposes a distributed algorithm for each client to compute its equilibrium participation strategy.
Delay Analysis of Wireless Federated Learning Based on Saddle Point Approximation and Large Deviation Theory
Federated learning (FL) is a collaborative machine learning paradigm, which enables deep learning model training over a large volume of decentralized data residing in mobile devices without accessing…
Device or User: Rethinking Federated Learning in Personal-Scale Multi-Device Environments
Introduces a new user-as-client (UAC) federation architecture and proposes various device selection strategies to counter statistical and systems heterogeneity in FL-MDLN.

References

Showing 1-10 of 43 references
Optimizing Federated Learning on Non-IID Data with Reinforcement Learning
Proposes Favor, an experience-driven control framework that intelligently chooses the client devices participating in each round of federated learning, to counterbalance the bias introduced by non-IID data and to speed up convergence.
Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach
Presents a fairness-aware gradient sparsification (GS) method that ensures different clients provide a similar amount of updates, and proposes a novel online learning formulation and algorithm for automatically determining the near-optimal communication-computation trade-off, controlled by the degree of gradient sparsity.
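For background, a plain top-k gradient sparsifier, with the degree of sparsity as the tunable knob an online method like this one could control, might look as follows; the helper name and interface are assumptions.

```python
import numpy as np

def sparsify_topk(grad, sparsity):
    """Keep only the largest-magnitude fraction (1 - sparsity) of entries;
    zero out the rest. `sparsity` in [0, 1) is the tunable degree."""
    k = max(1, int(round((1.0 - sparsity) * grad.size)))
    flat = grad.ravel()
    keep = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k magnitudes
    out = np.zeros_like(flat)
    out[keep] = flat[keep]
    return out.reshape(grad.shape)
```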
Federated Optimization: Distributed Machine Learning for On-Device Intelligence
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number…
Federated Learning over Wireless Networks: Optimization Model Design and Analysis
Formulates federated learning over a wireless network as an optimization problem, FEDL, that captures both trade-offs, and obtains the globally optimal solution by characterizing closed-form solutions to all sub-problems, which give qualitative design insights via the optimal FEDL learning time, accuracy level, and UE energy cost.
Device Scheduling with Fast Convergence for Wireless Federated Learning
Formulates a joint bandwidth allocation and scheduling problem to capture the long-term convergence performance of FL, solves it by decoupling it into two sub-problems, and shows that the resulting policy outperforms other state-of-the-art scheduling policies.
Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data
Proposes sparse ternary compression (STC), a new compression framework specifically designed to meet the requirements of the federated learning environment, and advocates a paradigm shift in federated optimization toward high-frequency low-bitwidth communication, in particular in bandwidth-constrained learning environments.
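At a high level, the published STC recipe combines top-k sparsification with ternarization of the surviving entries; the sketch below follows that recipe but omits STC's lossless encoding and other details.

```python
import numpy as np

def sparse_ternary_compress(delta, k):
    """Top-k sparsify `delta`, then ternarize the survivors to {-mu, 0, +mu},
    where mu is the mean magnitude of the kept entries."""
    flat = delta.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    mu = np.abs(flat[idx]).mean()
    out = np.zeros_like(flat)
    out[idx] = np.sign(flat[idx]) * mu
    return out.reshape(delta.shape)
```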
Federated Optimization in Heterogeneous Networks
Introduces FedProx, a framework for tackling heterogeneity in federated networks, and provides convergence guarantees when learning over data from non-identical distributions (statistical heterogeneity) while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.
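The proximal term is the core of the published FedProx formulation: each client minimizes its local loss plus (mu/2)·||w − w_global||². A one-step sketch, with grad_fn as a hypothetical callable returning the local task gradient:

```python
def fedprox_local_step(w, w_global, grad_fn, mu=0.1, lr=0.1):
    """One local step on the FedProx objective
        F_i(w) + (mu/2) * ||w - w_global||^2,
    whose gradient adds mu * (w - w_global) to the task gradient. The proximal
    term keeps clients doing variable amounts of work from drifting too far
    from the global model."""
    return w - lr * (grad_fn(w) + mu * (w - w_global))
```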
Accelerating Federated Learning via Momentum Gradient Descent
Incorporates a momentum term relating to the previous iteration into FL (MFL), establishes global convergence properties of MFL, derives an upper bound on the MFL convergence rate, and provides conditions under which MFL accelerates convergence.
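The momentum term in question is classical heavy-ball momentum applied on-device; a minimal sketch of the update MFL builds on (parameter names assumed):

```python
def momentum_step(w, velocity, grad, lr=0.1, gamma=0.9):
    """Classical momentum GD update: the velocity accumulates past gradients
    (hence "relates to the last iteration"), and the weights move along it."""
    velocity = gamma * velocity - lr * grad
    return w + velocity, velocity
```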
Resource-Efficient and Convergence-Preserving Online Participant Selection in Federated Learning
Designs an online learning algorithm that makes fractional control decisions based on both previous system dynamics and previous training results, together with an online randomized rounding algorithm that converts the fractional decisions into integers without violating any constraints.
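The basic primitive behind such rounding is unbiased randomized rounding; the sketch below shows only this per-variable primitive, whereas the paper's algorithm must additionally respect coupling constraints.

```python
import numpy as np

def randomized_round(x, rng=np.random.default_rng()):
    """Round a fractional decision x to an integer with E[output] = x."""
    base = int(np.floor(x))
    return base + int(rng.random() < (x - base))
```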
Federated Learning: Strategies for Improving Communication Efficiency
Proposes two ways to reduce uplink communication costs: structured updates, where the user directly learns an update from a restricted space parametrized by a smaller number of variables (e.g., low-rank or a random mask); and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
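A toy version of a sketched update, keeping only random subsampling (rescaled to remain unbiased) and uniform quantization, and omitting the random rotations and encoding the paper also uses:

```python
import numpy as np

def sketch_update(update, keep_prob=0.25, bits=4, rng=np.random.default_rng()):
    """Randomly subsample an update (rescaled so it stays unbiased), then
    snap the survivors onto a uniform b-bit grid."""
    mask = rng.random(update.shape) < keep_prob
    sub = np.where(mask, update / keep_prob, 0.0)   # unbiased subsampling
    levels = 2 ** bits - 1
    lo, hi = sub.min(), sub.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    return lo + np.round((sub - lo) / scale) * scale
```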