Asynchronous Federated Optimization
@article{Xie2019AsynchronousFO,
  title   = {Asynchronous Federated Optimization},
  author  = {Cong Xie and Oluwasanmi Koyejo and Indranil Gupta},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1903.03934}
}
Federated learning enables training on a massive number of edge devices. […] Key result: empirical results show that the proposed algorithm converges fast and tolerates staleness.
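The server step behind this result can be sketched compactly. Below is a minimal, illustrative Python rendering of a staleness-weighted mixing rule in the spirit of the paper's FedAsync algorithm; the polynomial staleness function and the hyperparameter names (`alpha`, `a`) are assumptions for illustration, not taken from the snippet above.

```python
def poly_staleness(staleness, a=0.5):
    # Polynomial staleness discount s(tau) = (tau + 1)^(-a): the older a
    # client update is, the less it moves the global model.
    return (staleness + 1) ** (-a)

def fedasync_server_update(global_model, client_model, alpha,
                           server_step, client_step):
    # Staleness-weighted mixing: x_t = (1 - a_t) * x_{t-1} + a_t * x_client,
    # with a_t = alpha * s(server_step - client_step). Models are plain
    # dicts of parameter name -> float to keep the sketch self-contained.
    a_t = alpha * poly_staleness(server_step - client_step)
    return {name: (1 - a_t) * w + a_t * client_model[name]
            for name, w in global_model.items()}

# Example: a client update that is 3 steps stale gets a damped weight.
print(fedasync_server_update({"w": 1.0}, {"w": 0.0},
                             alpha=0.6, server_step=10, client_step=7))
```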
231 Citations
Joint Topology and Computation Resource Optimization for Federated Edge Learning
- Computer Science · 2021 IEEE Globecom Workshops (GC Wkshps)
- 2021
A novel penalty-based successive convex approximation method is proposed to solve the mixed-integer nonlinear problem, which converges to a stationary point of the primal problem under mild conditions.
Unbounded Gradients in Federated Learning with Buffered Asynchronous Aggregation
- Computer Science · 2022 58th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
- 2022
A theoretical analysis of the convergence rate of the FedBuff algorithm for asynchronous federated learning is presented when heterogeneity in data, batch size, and delay are considered.
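As a reading aid, here is a minimal sketch of the buffered asynchronous aggregation pattern that FedBuff analyzes: the server accumulates client deltas in a buffer and applies them only once K have arrived. The buffer size K and the server learning rate are illustrative assumptions.

```python
class BufferedAggregator:
    """Toy FedBuff-style server: stale-tolerant, applies updates in batches."""

    def __init__(self, global_model, buffer_size=10, server_lr=1.0):
        self.global_model = dict(global_model)  # name -> float
        self.buffer_size = buffer_size
        self.server_lr = server_lr
        self._buffer = []

    def receive(self, client_delta):
        # client_delta: dict of parameter name -> (x_client - x_global),
        # possibly computed against an older global model (staleness).
        self._buffer.append(client_delta)
        if len(self._buffer) >= self.buffer_size:
            self._apply()

    def _apply(self):
        # Average the buffered deltas and take one server step.
        k = len(self._buffer)
        for name in self.global_model:
            avg = sum(d[name] for d in self._buffer) / k
            self.global_model[name] += self.server_lr * avg
        self._buffer.clear()
```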
A Novel Framework for the Analysis and Design of Heterogeneous Federated Learning
- Computer Science · IEEE Transactions on Signal Processing
- 2021
This paper provides a general framework to analyze the convergence of federated optimization algorithms with heterogeneous local training progress at clients and proposes FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
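The normalized-averaging idea is easy to state in code. The sketch below shows a simplified FedNova-style aggregation for plain local SGD: each client's cumulative update is divided by its own number of local steps, then rescaled by an effective step count. The paper's scaling covers more general local solvers, so treat this as an illustrative special case.

```python
def fednova_aggregate(global_model, client_deltas, local_steps, weights):
    # client_deltas[i]: dict of cumulative local change (x_i - x_global)
    # local_steps[i]:   number of local SGD steps client i actually ran
    # weights[i]:       data-fraction weight p_i, summing to 1
    # Normalizing by each client's own step count removes the objective
    # inconsistency that arises when fast clients take more local steps.
    tau_eff = sum(p * t for p, t in zip(weights, local_steps))
    new_model = dict(global_model)
    for name in global_model:
        normalized = sum(p * d[name] / t
                         for p, d, t in zip(weights, client_deltas, local_steps))
        new_model[name] = global_model[name] + tau_eff * normalized
    return new_model
```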
Accelerating Federated Edge Learning via Topology Optimization
- Computer Science · IEEE Internet of Things Journal
- 2023
A novel topology-optimized FEEL (TOFEL) scheme is proposed to tackle the heterogeneity issue in federated learning and to improve communication and computation efficiency, with an efficient imitation-learning-based approach seamlessly integrated into the TOFEL framework.
FedHe: Heterogeneous Models and Communication-Efficient Federated Learning
- Computer Science · 2021 17th International Conference on Mobility, Sensing and Networking (MSN)
- 2021
This paper proposes a novel FL method, called FedHe, inspired by knowledge distillation, which can train heterogeneous models and support asynchronous training processes with significantly reduced communication overheads while preserving model accuracy.
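Because clients may run different architectures, FedHe-style methods exchange knowledge (per-class logits) rather than model weights. The sketch below shows one plausible server-side aggregation of per-class logits; the data layout is an assumption, and on the client side a distillation term would pull local logits toward these averages.

```python
def aggregate_class_logits(client_logits):
    """client_logits: list of {class_label: [logit, ...]} dicts, one per
    client. Returns the element-wise average per class; a class a client
    has never seen simply contributes nothing for that client."""
    merged = {}
    for logits in client_logits:
        for label, vec in logits.items():
            merged.setdefault(label, []).append(vec)
    return {label: [sum(col) / len(col) for col in zip(*vecs)]
            for label, vecs in merged.items()}
```

Clients would then add a loss term such as the mean-squared error between their own logits and these aggregated ones, which is how knowledge transfers across heterogeneous models without weight exchange.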
HADFL: Heterogeneity-aware Decentralized Federated Learning Framework
- Computer Science · 2021 58th ACM/IEEE Design Automation Conference (DAC)
- 2021
Compared with the traditional FL system, HADFL can relieve the central server’s communication pressure, efficiently utilize heterogeneous computing power, and achieve maximum speedups of 3.15x over decentralized-FedAvg and 4.68x over the PyTorch distributed training scheme, respectively, with almost no loss of convergence accuracy.
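One illustrative reading of heterogeneity-aware scheduling is to sample communication partners in proportion to their computing power, so fast devices are used more often and no straggler blocks progress. The sketch below is an assumption-laden toy in that spirit, not HADFL's exact mechanism.

```python
import random

def pick_partner(neighbors, compute_power, rng=random):
    # Probability of choosing a neighbor scales with its computing power.
    # neighbors: list of device ids; compute_power: id -> relative speed.
    total = sum(compute_power[n] for n in neighbors)
    r = rng.uniform(0, total)
    acc = 0.0
    for n in neighbors:
        acc += compute_power[n]
        if r <= acc:
            return n
    return neighbors[-1]
```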
Time-Triggered Federated Learning Over Wireless Networks
- Computer Science · IEEE Transactions on Wireless Communications
- 2022
This paper presents a time-triggered FL algorithm (TT-Fed) over wireless networks, which is a generalized form of classic synchronous and asynchronous FL, and provides a thorough convergence analysis for TT-Fed.
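Time-triggered aggregation sits between the two classic regimes: rather than waiting for all clients (synchronous) or reacting to every arrival (asynchronous), the server aggregates whatever updates are ready at each fixed wall-clock tick. A minimal sketch, with uniform weighting as an assumption:

```python
def tt_fed_tick(global_model, pending_updates):
    # pending_updates: list of client model dicts that arrived since the
    # last tick. If nothing arrived, the global model stays put; otherwise
    # mix in the mean of the ready updates.
    if not pending_updates:
        return global_model
    k = len(pending_updates)
    return {name: global_model[name]
            + sum(u[name] - global_model[name] for u in pending_updates) / k
            for name in global_model}
```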
Asynchronous Semi-Decentralized Federated Edge Learning for Heterogeneous Clients
- Computer Science · ICC 2022 - IEEE International Conference on Communications
- 2022
This work investigates a novel semi-decentralized FEEL architecture where multiple edge servers collaborate to incorporate more data from edge devices in training, and proposes an asynchronous training algorithm to overcome the device heterogeneity in computational resources.
Efficient Federated Learning Algorithm for Resource Allocation in Wireless IoT Networks
- Computer Science · IEEE Internet of Things Journal
- 2021
A convergence upper bound is provided characterizing the tradeoff between convergence rate and the number of global rounds, showing that a small number of active UEs per round still guarantees convergence and advocating the proposed FL algorithm for a paradigm shift toward bandwidth-constrained learning in wireless IoT networks.
Semi-Synchronous Federated Learning for Energy-Efficient Training and Accelerated Convergence in Cross-Silo Settings
- Computer Science · ACM Trans. Intell. Syst. Technol.
- 2022
A novel energy-efficient Semi-Synchronous Federated Learning protocol is introduced that mixes local models periodically with minimal idle time and fast convergence, significantly outperforming previous work in data- and computationally heterogeneous environments.
References
Showing 1-10 of 25 references
Federated Optimization: Distributed Optimization Beyond the Datacenter
- Computer Science · ArXiv
- 2015
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large…
Towards Federated Learning at Scale: System Design
- Computer Science · MLSys
- 2019
A scalable production system for federated learning on mobile devices, built on TensorFlow, is described, including the resulting high-level design and a sketch of some of the challenges and their solutions.
Federated Learning: Strategies for Improving Communication Efficiency
- Computer Science · ArXiv
- 2016
Two ways to reduce the uplink communication costs are proposed: structured updates, where the user directly learns an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, which learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling.
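A toy version of a "sketched update" helps fix ideas: subsample coordinates with a random mask, then quantize the survivors. The keep probability and number of quantization levels below are illustrative, and the random-rotation step from the paper is omitted.

```python
import random

def sketch_update(update, keep_prob=0.1, levels=16, rng=random):
    """Toy sketched update over a list of floats: random-mask subsampling
    followed by uniform quantization. Returns a sparse dict of
    index -> quantized value, which is what the client would upload."""
    lo, hi = min(update), max(update)
    step = (hi - lo) / (levels - 1) if hi > lo else 1.0
    sketched = {}
    for i, v in enumerate(update):
        if rng.random() < keep_prob:          # random mask: keep ~10%
            q = round((v - lo) / step)        # snap to one of `levels` buckets
            sketched[i] = lo + q * step
    return sketched
```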
Communication Efficient Distributed Machine Learning with the Parameter Server
- Computer Science · NIPS
- 2014
An in-depth analysis of two large-scale machine learning problems, ranging from ℓ1-regularized logistic regression on CPUs to reconstruction ICA on GPUs, using 636 TB of real data with hundreds of billions of samples and dimensions, is presented.
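At its core, the architecture reduces to a push/pull interface between workers and a stateful server: workers pull the current weights, compute gradients on their data shard, and push them back. A minimal single-process sketch (the real system shards keys across many server nodes and offers flexible consistency models, none of which is modeled here):

```python
class ParameterServer:
    # Toy parameter server holding a flat weight vector.
    def __init__(self, weights, lr=0.1):
        self.weights = list(weights)
        self.lr = lr

    def pull(self):
        # Workers fetch a copy of the current weights.
        return list(self.weights)

    def push(self, grad):
        # Workers send gradients; the server applies an SGD step.
        for i, g in enumerate(grad):
            self.weights[i] -= self.lr * g
```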
Big data caching for networking: moving from cloud to edge
- Computer Science · IEEE Communications Magazine
- 2016
In order to cope with the relentless data tsunami in 5G wireless networks, current approaches such as acquiring new spectrum, deploying more BSs, and increasing nodes in mobile packet core networks…
Communication-Efficient Learning of Deep Networks from Decentralized Data
- Computer Science · AISTATS
- 2017
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
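The iterative model averaging at the heart of this method (FedAvg) is a one-liner in spirit: average the locally trained models, weighted by local dataset size. A minimal sketch with models as plain dicts:

```python
def fedavg(client_models, num_samples):
    # client_models: list of dicts (parameter name -> float), one per client
    # num_samples:   local dataset sizes used as averaging weights
    total = sum(num_samples)
    return {name: sum(m[name] * n for m, n in zip(client_models, num_samples)) / total
            for name in client_models[0]}

# Example: a client with twice the data pulls the average twice as hard.
print(fedavg([{"w": 0.0}, {"w": 3.0}], num_samples=[100, 200]))  # {'w': 2.0}
```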
Asynchronous Decentralized Parallel Stochastic Gradient Descent
- Computer Science · ICML
- 2018
This paper proposes an asynchronous decentralized parallel stochastic gradient descent algorithm (AD-PSGD) satisfying all of the above expectations, and it is the first asynchronous algorithm that achieves an epoch-wise convergence rate similar to AllReduce-SGD at an over-100-GPU scale.
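The per-worker step in AD-PSGD-style training can be sketched as: average your weights with one randomly chosen neighbor (both keep the average), then apply your local stochastic gradient, with no global barrier. The sketch below takes the gradient as given and uses lists of floats; in the actual algorithm the gradient is computed on the worker's pre-averaging weights.

```python
import random

def adpsgd_step(models, grads, worker, neighbors, lr=0.05, rng=random):
    # models:    worker id -> weight list; mutated in place
    # grads:     worker id -> stochastic gradient (computed beforehand)
    # neighbors: worker id -> list of reachable peers in the graph
    peer = rng.choice(neighbors[worker])
    avg = [(a + b) / 2 for a, b in zip(models[worker], models[peer])]
    models[peer] = avg                                   # peer keeps the average
    models[worker] = [w - lr * g                         # worker also steps
                      for w, g in zip(avg, grads[worker])]
```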
Practical Secure Aggregation for Privacy-Preserving Machine Learning
- Computer Science · IACR Cryptol. ePrint Arch.
- 2017
This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
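The cancellation trick at the core of the protocol is worth seeing concretely: each pair of clients derives the same pseudorandom mask, one adds it and the other subtracts it, so every mask vanishes in the server's element-wise sum and only the aggregate is revealed. The sketch below is a toy (integer vectors mod a small modulus, an ad-hoc seed derivation); the real protocol layers key agreement and dropout recovery on top.

```python
import random

def masked_update(update, client_id, peer_ids, session_seed, modulus=2**16):
    # `update` must already be integers mod `modulus` (e.g. quantized
    # model deltas). The lower-id client of each pair adds the shared
    # mask, the higher-id client subtracts it.
    masked = list(update)
    for peer in peer_ids:
        lo, hi = min(client_id, peer), max(client_id, peer)
        rng = random.Random(lo * 1_000_003 + hi * 7919 + session_seed)
        sign = 1 if client_id == lo else -1
        masked = [(x + sign * rng.randrange(modulus)) % modulus
                  for x in masked]
    return masked

# Server side: summing all masked vectors mod `modulus` cancels every
# pairwise mask, leaving exactly the sum of the raw updates.
```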
MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems
- Computer Science · ArXiv
- 2015
The API design and the system implementation of MXNet are described, and it is explained how embedding of both symbolic expression and tensor operation is handled in a unified fashion.
Asynchronous Stochastic Gradient Descent with Delay Compensation
- Computer Science · ICML
- 2017
The proposed algorithm is evaluated on CIFAR-10 and ImageNet datasets, and the experimental results demonstrate that DC-ASGD outperforms both synchronous SGD and asynchronous SGD, and nearly approaches the performance of sequential SGD.
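DC-ASGD's compensation rule admits a compact sketch: approximate the gradient at the current weights from the stale gradient via a diagonal outer-product approximation of the Hessian. The λ value below is illustrative.

```python
def dc_asgd_update(w_now, w_backup, grad, lr=0.1, lam=0.04):
    # Delay-compensated update:
    #   g_comp = g + lam * g * g * (w_now - w_backup)   (element-wise)
    #   w_next = w_now - lr * g_comp
    # `grad` was computed at w_backup (the stale snapshot the worker saw);
    # the correction term extrapolates it to the current weights w_now.
    return [w - lr * (g + lam * g * g * (w - wb))
            for w, wb, g in zip(w_now, w_backup, grad)]
```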