Gradient Scheduling with Global Momentum for Non-IID Data Distributed Asynchronous Training
@article{Li2019GradientSW,
  title={Gradient Scheduling with Global Momentum for Non-IID Data Distributed Asynchronous Training},
  author={Chengjie Li and Ruixuan Li and Pan Zhou and Haozhao Wang and Yuhua Li and Song Guo and Keqin Li},
  journal={ArXiv},
  year={2019},
  volume={abs/1902.07848}
}
Distributed asynchronous offline training has received widespread attention in recent years because of its high performance on large-scale data and complex models. As data processing moves from cloud-centric positions to edge locations, a major challenge for distributed systems is how to handle the natively and naturally non-independent and identically distributed (non-IID) data used for training. Previous asynchronous training methods do not perform satisfactorily on non-IID data because it would result…
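The paper's gradient scheduling policy is not visible from the truncated abstract; the sketch below only illustrates the global-momentum idea named in the title, assuming a single server-side momentum buffer shared across asynchronously arriving worker gradients. The class name, method name, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class GlobalMomentumServer:
    """Hypothetical parameter server that folds a shared (global) momentum
    term into every asynchronously arriving worker gradient."""

    def __init__(self, dim, lr=0.01, beta=0.9):
        self.w = np.zeros(dim)   # global model parameters
        self.m = np.zeros(dim)   # global momentum buffer shared by all workers
        self.lr = lr
        self.beta = beta

    def apply_gradient(self, grad):
        # Fold the worker's gradient into the global momentum, then step.
        self.m = self.beta * self.m + grad
        self.w -= self.lr * self.m
        return self.w            # updated parameters sent back to the worker


# Toy usage: three workers push gradients of some loss in arbitrary order.
server = GlobalMomentumServer(dim=2)
for grad in (np.array([0.4, -0.2]), np.array([0.1, 0.3]), np.array([-0.5, 0.2])):
    w = server.apply_gradient(grad)
print(w)
```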
7 Citations
Local Gradient Aggregation for Decentralized Learning from Non-IID data
- Computer Science
- 2020
This work proposes a Local Gradient Aggregation (LGA) that is a decentralized learning algorithm, where each agent collects the gradient information from its neighboring agents and updates its model with a projected gradient, and demonstrates the efficacy of LGA on non-iid data distributions on benchmark datasets.
Towards Efficient and Stable K-Asynchronous Federated Learning with Unbounded Stale Gradients on Non-IID Data
- Computer Science, IEEE Transactions on Parallel and Distributed Systems
- 2022
This paper proposes a two-stage weighted K-asynchronous FL scheme with adaptive learning rate (WKAFL), which utilizes stale gradients and mitigates the impact of non-IID data, achieving multifaceted improvements in training speed, prediction accuracy, and training stability.
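WKAFL's exact two-stage weighting is not described here; the sketch below only illustrates the general idea of aggregating K asynchronously collected gradients while down-weighting stale ones. The inverse-staleness weight is an assumption for illustration.

```python
import numpy as np

def aggregate_stale_gradients(grads, staleness):
    """Weight each of the K collected gradients by a decreasing function of its
    staleness (assumed here: 1 / (1 + staleness)), then combine them."""
    weights = np.array([1.0 / (1.0 + s) for s in staleness])
    weights /= weights.sum()
    return sum(w * g for w, g in zip(weights, grads))


# Example: a fresh gradient dominates a very stale one.
g_fresh = np.array([1.0, 0.0])
g_stale = np.array([0.0, 1.0])
print(aggregate_stale_gradients([g_fresh, g_stale], staleness=[0, 8]))
```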
Two-Dimensional Learning Rate Decay: Towards Accurate Federated Learning with Non-IID Data
- Computer Science, 2021 International Joint Conference on Neural Networks (IJCNN)
- 2021
Two-Dimensional Learning Rate Decay (2D-LRD) is proposed, which aims to improve the model performance by adaptively tuning the learning rate on two dimensions: round-dimension and iteration-dimension during the model training.
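A minimal sketch of decaying the learning rate along both a round dimension and an iteration dimension, assuming a multiplicative exponential schedule; the decay form actually used by 2D-LRD may differ.

```python
def two_dimensional_lr(base_lr, comm_round, local_iter,
                       round_decay=0.99, iter_decay=0.95):
    """Decay the learning rate along two dimensions: once per federated
    communication round and once per local training iteration.
    The exponential form and decay factors are illustrative assumptions."""
    return base_lr * (round_decay ** comm_round) * (iter_decay ** local_iter)


# Example: the rate shrinks both across rounds and within a round.
print(two_dimensional_lr(0.1, comm_round=10, local_iter=0))
print(two_dimensional_lr(0.1, comm_round=10, local_iter=5))
```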
Semisupervised Distributed Learning With Non-IID Data for AIoT Service Platform
- Computer Science, IEEE Internet of Things Journal
- 2020
An edge learning system based on semisupervised learning and federated learning technologies that achieves up to 5.9% higher object-detection accuracy for video analysis applications by fully utilizing unlabeled data, compared with using labeled data alone.
A Unified Federated Learning Framework for Wireless Communications: towards Privacy, Efficiency, and Security
- Computer Science, IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)
- 2020
A two-step federated learning framework, robust federated augmentation and distillation (RFA-RFD), is proposed to enable privacy-preserving, communication-efficient, and Byzantine-tolerant on-device machine learning in wireless communications.
Aggregation Delayed Federated Learning
- Computer Science, ArXiv
- 2021
This work proposes a new aggregation framework for federated learning that introduces redistribution rounds to delay aggregation, and shows that the proposed framework significantly improves performance on non-IID data.
Cross-Gradient Aggregation for Decentralized Learning from Non-IID data
- Computer Science, ICML
- 2021
This work proposes Cross-Gradient Aggregation (CGA), a novel decentralized learning algorithm in which each agent aggregates cross-gradient information and updates its model using a projected gradient based on quadratic programming (QP), and theoretically analyzes the convergence characteristics of CGA.
References
SHOWING 1-10 OF 42 REFERENCES
Asynchronous Distributed Semi-Stochastic Gradient Optimization
- Computer Science, AAAI
- 2016
This paper proposes a fast distributed asynchronous SGD-based algorithm with variance reduction that outperforms state-of-the-art distributed asynchronous algorithms in terms of both wall clock time and solution quality.
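The variance-reduction component of such semi-stochastic methods resembles an SVRG-style corrected gradient; a minimal single-machine sketch follows, with the asynchronous distributed machinery omitted and the toy least-squares objective chosen only for illustration.

```python
import numpy as np

def svrg_gradient(grad_fn, w, w_snapshot, full_grad_snapshot, idx):
    """Variance-reduced stochastic gradient:
    g = grad_i(w) - grad_i(w_snapshot) + full_grad(w_snapshot)."""
    return grad_fn(w, idx) - grad_fn(w_snapshot, idx) + full_grad_snapshot


# Toy least-squares example: f_i(w) = 0.5 * (x_i @ w - y_i)^2
X = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])

def grad_fn(w, i):
    return (X[i] @ w - y[i]) * X[i]

w_snap = np.zeros(2)
full_grad = np.mean([grad_fn(w_snap, i) for i in range(len(y))], axis=0)
w = np.array([0.5, 0.5])
print(svrg_gradient(grad_fn, w, w_snap, full_grad, idx=1))
```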
Petuum: A Framework for Iterative-Convergent Distributed ML
- Computer Science, ArXiv
- 2013
This architecture specifically exploits the fact that many ML programs are fundamentally loss function minimization problems, and that their iterative-convergent nature presents many unique opportunities to minimize loss, such as via dynamic variable scheduling and error-bounded consistency models for synchronization.
Federated Optimization: Distributed Optimization Beyond the Datacenter
- Computer Science, ArXiv
- 2015
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large…
Asynchronous Stochastic Gradient Descent with Delay Compensation
- Computer Science, ICML
- 2017
The proposed algorithm is evaluated on CIFAR-10 and ImageNet datasets, and the experimental results demonstrate that DC-ASGD outperforms both synchronous SGD and asynchronous SGD, and nearly approaches the performance of sequential SGD.
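The core idea of DC-ASGD is to compensate a delayed gradient for the drift of the global model since the gradient was computed, using the element-wise product g ⊙ g as a cheap diagonal approximation of the Hessian. A minimal sketch under that reading follows; the compensation strength λ is an illustrative value.

```python
import numpy as np

def delay_compensated_gradient(g_stale, w_current, w_backup, lam=0.04):
    """Compensate a stale gradient computed at w_backup for the drift of the
    global model to w_current, using g * g as a diagonal Hessian surrogate:
        g_dc = g + lam * g * g * (w_current - w_backup)
    """
    return g_stale + lam * g_stale * g_stale * (w_current - w_backup)


# Example: the larger the drift since the gradient was computed,
# the larger the correction applied.
g = np.array([0.5, -0.3])
w_now = np.array([0.9, 0.1])
w_then = np.array([1.0, 0.0])
print(delay_compensated_gradient(g, w_now, w_then))
```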
Deep learning with Elastic Averaging SGD
- Computer Science, NIPS
- 2015
Experiments demonstrate that the new algorithm accelerates the training of deep architectures compared to DOWNPOUR and other common baseline approaches and furthermore is very communication efficient.
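A minimal sketch of one elastic-averaging round: each worker takes a gradient step plus an elastic pull toward a shared center variable, and the center moves toward the workers. The synchronous, single-process form and the hyperparameter values are simplifying assumptions.

```python
import numpy as np

def easgd_step(x_workers, x_center, grads, lr=0.05, rho=0.5):
    """One round of Elastic Averaging SGD (sketch): workers take a gradient
    step plus an elastic pull toward the center variable, and the center
    moves toward the average of the workers."""
    alpha = lr * rho
    new_workers = [x - lr * g - alpha * (x - x_center)
                   for x, g in zip(x_workers, grads)]
    new_center = x_center + alpha * sum(x - x_center for x in x_workers)
    return new_workers, new_center


# Toy example with two workers on a 2-d parameter vector.
workers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
center = np.zeros(2)
grads = [np.array([0.1, 0.1]), np.array([-0.1, 0.2])]
workers, center = easgd_step(workers, center, grads)
print(center)
```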
Federated Optimization: Distributed Machine Learning for On-Device Intelligence
- Computer Science, ArXiv
- 2016
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number…
Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
- Computer Science, ICLR
- 2018
This paper finds 99.9% of the gradient exchange in distributed SGD is redundant, and proposes Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth, which enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.
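DGC combines gradient sparsification with local accumulation of the untransmitted residual (its momentum correction, clipping, and warm-up tricks are omitted here). A minimal top-k sparsification with error feedback is sketched below; the 1% ratio is an illustrative choice.

```python
import numpy as np

def sparsify_with_residual(grad, residual, ratio=0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries
    and accumulate the rest locally, so no update is permanently lost."""
    acc = grad + residual                 # add back previously unsent values
    k = max(1, int(ratio * acc.size))
    thresh = np.sort(np.abs(acc))[-k]     # k-th largest magnitude
    mask = np.abs(acc) >= thresh
    sparse = np.where(mask, acc, 0.0)     # what gets communicated
    new_residual = np.where(mask, 0.0, acc)  # what stays local for next step
    return sparse, new_residual


# Example: only roughly the largest 1% of a random gradient is transmitted.
g = np.random.randn(1000)
sent, residual = sparsify_with_residual(g, np.zeros_like(g))
print(int((sent != 0).sum()), "of", g.size, "entries sent")
```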
Slow and Stale Gradients Can Win the Race
- Computer Science, IEEE Journal on Selected Areas in Information Theory
- 2021
This work presents a novel theoretical characterization of the speed-up offered by asynchronous SGD methods by analyzing the trade-off between the error in the trained model and the actual training runtime (wallclock time).
Large Scale Distributed Deep Networks
- Computer Science, NIPS
- 2012
This paper considers the problem of training a deep network with billions of parameters using tens of thousands of CPU cores and develops two algorithms for large-scale distributed training, Downpour SGD and Sandblaster L-BFGS, which increase the scale and speed of deep network training.
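A minimal sketch of the Downpour-style interaction pattern: workers fetch the latest parameters, compute gradients on their own data shard, and push updates back without synchronizing with each other. The adaptive per-parameter learning rates and model parallelism of the paper are omitted, and the least-squares shard is an illustrative stand-in for a deep network.

```python
import numpy as np

class ParameterServer:
    """Toy parameter server: applies pushed gradients immediately (async)."""

    def __init__(self, dim, lr=0.01):
        self.w = np.zeros(dim)
        self.lr = lr

    def pull(self):
        return self.w.copy()

    def push(self, grad):
        self.w -= self.lr * grad


def worker_step(server, data_shard):
    """One Downpour-style worker step on a least-squares shard (illustrative)."""
    X, y = data_shard
    w = server.pull()                    # fetch (possibly stale) parameters
    grad = X.T @ (X @ w - y) / len(y)    # local gradient on this shard
    server.push(grad)                    # asynchronous update, no barrier


# Two workers with different shards interleave their updates arbitrarily.
rng = np.random.default_rng(0)
shards = [(rng.standard_normal((8, 3)), rng.standard_normal(8)) for _ in range(2)]
ps = ParameterServer(dim=3)
for _ in range(5):
    for shard in shards:
        worker_step(ps, shard)
print(ps.w)
```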
Scalable distributed DNN training using commodity GPU cloud computing
- Computer Science, INTERSPEECH
- 2015
It is shown empirically that the method can reduce the amount of communication by three orders of magnitude while training a typical DNN for acoustic modelling, and enables efficient scaling to more parallel GPU nodes than any other method the authors are aware of.
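A plausible reading of this compression scheme is threshold-based quantization with error feedback: only gradient elements whose accumulated magnitude exceeds a threshold are transmitted, quantized to plus or minus the threshold, and the remainder is carried forward locally. The sketch below is under that assumption, with the threshold value chosen only for illustration.

```python
import numpy as np

def threshold_quantize(grad, residual, tau=1.0):
    """Accumulate the gradient into a local residual and transmit only the
    elements whose accumulated magnitude exceeds tau, quantized to +tau or
    -tau; the untransmitted part is kept as error feedback."""
    acc = residual + grad
    send_pos = acc >= tau
    send_neg = acc <= -tau
    message = tau * send_pos.astype(float) - tau * send_neg.astype(float)
    new_residual = acc - message   # untransmitted remainder carried forward
    return message, new_residual


# Example: only entries whose accumulated value crosses the threshold are sent.
g = np.array([0.4, 1.3, -2.1, 0.2])
msg, res = threshold_quantize(g, residual=np.zeros(4), tau=1.0)
print(msg, res)
```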