FedDUAP: Federated Learning with Dynamic Update and Adaptive Pruning Using Shared Data on the Server
@inproceedings{Zhang2022FedDUAPFL,
  title     = {FedDUAP: Federated Learning with Dynamic Update and Adaptive Pruning Using Shared Data on the Server},
  author    = {Hong Zhang and Ji Liu and Juncheng Jia and Yang Zhou and Huaiyu Dai and Dejing Dou},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2022}
}
Despite achieving remarkable performance, Federated Learning (FL) suffers from two critical challenges: limited computational resources and low training efficiency. In this paper, we propose FedDUAP, a novel FL framework with two original contributions, which exploits the insensitive data on the server and the decentralized data on edge devices to further improve training efficiency. First, a dynamic server update algorithm is designed to exploit the insensitive data on the server…
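The abstract only sketches the mechanism, so the following is a minimal toy sketch of the general idea, not the authors' implementation: after standard FedAvg aggregation, the server takes a few extra SGD steps on the data it holds and blends the result into the global model. The linear model, the fixed `server_weight`, and the function names are illustrative assumptions; FedDUAP's actual contribution is the dynamic rule deciding when and how strongly to apply the server update.

```python
# Toy sketch of a FedAvg round followed by a server-side update on shared data.
# Everything here (model, blend weight, names) is an assumption for illustration.
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=5):
    """Plain SGD on squared loss for a toy linear model; stands in for client training."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedduap_style_round(w_global, client_data, server_data, server_weight=0.3):
    # 1) Clients train locally starting from the global model.
    client_models = [local_sgd(w_global.copy(), X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    # 2) Standard FedAvg aggregation, weighted by local data size.
    w_avg = np.average(client_models, axis=0, weights=sizes)
    # 3) Extra server update on the shared server data, blended into the aggregate
    #    (FedDUAP decides dynamically when/how strongly to do this; a fixed blend
    #    weight is used here purely for illustration).
    w_srv = local_sgd(w_avg.copy(), *server_data)
    return (1 - server_weight) * w_avg + server_weight * w_srv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, w_true = 5, np.arange(5.0)

    def make(n):
        X = rng.normal(size=(n, dim))
        return X, X @ w_true + 0.1 * rng.normal(size=n)

    clients = [make(50) for _ in range(4)]
    server = make(100)
    w = np.zeros(dim)
    for _ in range(20):
        w = fedduap_style_round(w, clients, server)
    print("recovered weights:", np.round(w, 2))
```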
4 Citations
Accelerated Federated Learning with Decoupled Adaptive Optimization
- Computer Science, ICML
- 2022
A momentum-decoupling adaptive optimization method is developed to fully utilize the global momentum on each local iteration, accelerate training convergence, and overcome the possible inconsistency caused by adaptive optimization methods.
FedHiSyn: A Hierarchical Synchronous Federated Learning Framework for Resource and Data Heterogeneity
- Computer Science, ICPP
- 2022
Experimental results show that FedHiSyn outperforms six baseline methods, e.g., FedAvg, SCAFFOLD, and FedAT, in terms of training accuracy and efficiency.
Multi-Job Intelligent Scheduling With Cross-Device Federated Learning
- Computer Science, IEEE Transactions on Parallel and Distributed Systems
- 2023
A novel intelligent scheduling approach based on multiple scheduling methods, including an original reinforcement learning-based scheduling method and an original Bayesian optimization-based scheduling method, which incurs a small cost while scheduling devices to multiple jobs.
Large-scale Knowledge Distillation with Elastic Heterogeneous Computing Resources
- Computer Science, Concurrency and Computation: Practice and Experience
- 2022
An Elastic Deep Learning framework for knowledge Distillation, i.e., EDL-Dist, is proposed, in which the inference and the training processes are separated and fault tolerance of both processes is supported.
References
Showing 1-10 of 30 references
Model Pruning Enables Efficient Federated Learning on Edge Devices
- Computer Science, IEEE Transactions on Neural Networks and Learning Systems
- 2022
PruneFL is proposed, a novel FL approach with adaptive and distributed parameter pruning that adapts the model size during FL to reduce both communication and computation overhead and minimize the overall training time, while maintaining accuracy similar to that of the original model.
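As a rough illustration of parameter pruning in FL (not PruneFL's actual size-adaptation criterion, which chooses the model size to minimize estimated training time), the sketch below applies magnitude pruning with a keep-ratio schedule; the schedule, names, and toy parameters are assumptions.

```python
# Generic magnitude-pruning sketch: a binary mask shrinks the set of parameters
# that clients would train and communicate. Illustrative only, not PruneFL itself.
import numpy as np

def prune_mask(w, keep_ratio):
    """Keep the largest-magnitude fraction of parameters; zero out the rest."""
    k = max(1, int(keep_ratio * w.size))
    threshold = np.sort(np.abs(w).ravel())[-k]
    return (np.abs(w) >= threshold).astype(w.dtype)

def apply_pruning(w, round_idx, start_ratio=1.0, end_ratio=0.3, total_rounds=100):
    # Gradually tighten the keep ratio over training rounds (adaptive schemes in the
    # literature instead react to measured cost/accuracy signals).
    ratio = start_ratio + (end_ratio - start_ratio) * min(1.0, round_idx / total_rounds)
    mask = prune_mask(w, ratio)
    return w * mask, mask  # clients would train and send only the masked parameters

if __name__ == "__main__":
    w = np.random.default_rng(1).normal(size=1000)
    w_pruned, mask = apply_pruning(w, round_idx=50)
    print("nonzero fraction:", mask.mean())
```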
Game of Gradients: Mitigating Irrelevant Clients in Federated Learning
- Computer Science, AAAI
- 2021
This paper resolves important and related FRCS problems, proposes a cooperative game involving the gradients shared by the clients, and presents the Shapley-value-based Federated Averaging (S-FedAvg) algorithm that empowers the server to select relevant clients with high probability.
Oort: Efficient Federated Learning via Guided Participant Selection
- Computer Science, OSDI
- 2021
Oort improves time-to-accuracy performance in model training by prioritizing clients that have both data offering the greatest utility in improving model accuracy and the capability to run training quickly, and it enables FL developers to interpret their results in model testing.
Practical One-Shot Federated Learning for Cross-Silo Setting
- Computer Science, IJCAI
- 2021
This paper proposes a practical one-shot federated learning algorithm named FedKT that can be applied to any classification model, can flexibly achieve differential privacy guarantees, and can significantly outperform other state-of-the-art federated learning algorithms with a single communication round.
Ensemble Distillation for Robust Model Fusion in Federated Learning
- Computer Science, NeurIPS
- 2020
This work proposes ensemble distillation for model fusion, i.e., training the central classifier on unlabeled data using the outputs of the models from the clients, which allows flexible aggregation over heterogeneous client models that can differ, e.g., in size, numerical precision, or structure.
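The fusion step can be pictured with a small sketch: the server model is trained on unlabeled data to match the averaged predictions of the client models. For simplicity the sketch assumes same-shaped linear classifiers (the actual method supports heterogeneous client models) and a plain cross-entropy distillation loss; these choices, and all names below, are illustrative rather than the paper's setup.

```python
# Sketch of ensemble distillation as model fusion: fit the server model to the
# averaged soft predictions of the client models on an unlabeled transfer set.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_server_model(client_weights, W_server, X_unlabeled, lr=0.5, epochs=100):
    # Soft targets: average of the clients' predicted class distributions.
    targets = np.mean([softmax(X_unlabeled @ W) for W in client_weights], axis=0)
    for _ in range(epochs):
        probs = softmax(X_unlabeled @ W_server)
        # Gradient of cross-entropy between the soft targets and server predictions.
        grad = X_unlabeled.T @ (probs - targets) / len(X_unlabeled)
        W_server = W_server - lr * grad
    return W_server

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, classes = 8, 3
    clients = [rng.normal(size=(dim, classes)) for _ in range(5)]  # client classifiers
    X_unlabeled = rng.normal(size=(200, dim))                      # unlabeled transfer set
    W_fused = distill_server_model(clients, np.zeros((dim, classes)), X_unlabeled)
    print("fused model norm:", round(float(np.linalg.norm(W_fused)), 3))
```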
Federated Learning with Non-IID Data
- Computer Science, arXiv
- 2018
This work presents a strategy to improve training on non-IID data by creating a small subset of data which is globally shared between all the edge devices, and shows that accuracy can be increased by 30% for the CIFAR-10 dataset with only 5% globally shared data.
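A minimal sketch of this shared-data strategy is shown below, under the assumption of a roughly class-balanced holdout of about 5% that is mixed into every device's otherwise non-IID local data; the toy data, label skew, and names are illustrative assumptions.

```python
# Sketch: build a small, roughly class-balanced shared subset and mix it into a
# device's skewed local data. Shapes, 5% share, and skew are illustrative only.
import numpy as np

def build_shared_subset(X, y, share=0.05, seed=0):
    rng = np.random.default_rng(seed)
    idx = []
    for c in np.unique(y):                       # sample evenly per class
        c_idx = np.flatnonzero(y == c)
        take = max(1, int(share * len(c_idx)))
        idx.extend(rng.choice(c_idx, size=take, replace=False))
    return X[idx], y[idx]

def augment_client(X_local, y_local, X_shared, y_shared):
    # Each device trains on its own (possibly single-class) data plus the shared subset.
    return np.concatenate([X_local, X_shared]), np.concatenate([y_local, y_shared])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 4))
    y = rng.integers(0, 10, size=1000)
    X_s, y_s = build_shared_subset(X, y)
    mask = y == 3                                # a non-IID client holding only class 3
    Xc, yc = augment_client(X[mask], y[mask], X_s, y_s)
    print("client classes after augmentation:", np.unique(yc))
```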
From distributed machine learning to federated learning: a survey
- Computer Science, Knowledge and Information Systems
- 2022
This paper proposes a functional architecture of federated learning systems and a taxonomy of related techniques and presents four widely used federated systems based on the functional architecture.
Efficient Device Scheduling with Multi-Job Federated Learning
- Computer Science, AAAI
- 2022
This paper proposes a novel multi-job FL framework to enable the parallel training of multiple jobs, together with a reinforcement learning-based method and a Bayesian optimization-based method to schedule devices for the jobs while minimizing the cost.
Communication-Efficient On-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data
- Computer Science, arXiv
- 2018
Federated distillation (FD) is proposed, a distributed model training algorithm whose communication payload size is much smaller than that of a benchmark scheme, federated learning (FL), particularly when the model size is large.
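One way to picture the payload reduction, assuming FD-style devices exchange per-label averaged logits rather than model weights, is the sketch below; the shapes and the omission of the local distillation loss are simplifications of the actual method.

```python
# Sketch of a federated-distillation-style payload: per-label mean logits per device,
# averaged by the server. Toy shapes; the local distillation step is omitted.
import numpy as np

def per_label_mean_logits(logits, labels, num_classes):
    """Device-side payload: one averaged logit vector per class (num_classes x C)."""
    payload = np.zeros((num_classes, logits.shape[1]))
    for c in range(num_classes):
        rows = logits[labels == c]
        if len(rows):
            payload[c] = rows.mean(axis=0)
    return payload

def aggregate(payloads):
    """Server-side: average the per-label logits over devices (the global 'teacher')."""
    return np.mean(payloads, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_classes = 10
    payloads = []
    for _ in range(5):                                  # five devices
        logits = rng.normal(size=(128, num_classes))    # local model outputs
        labels = rng.integers(0, num_classes, size=128)
        payloads.append(per_label_mean_logits(logits, labels, num_classes))
    global_logits = aggregate(payloads)
    print("payload per device:", payloads[0].size, "values (vs. millions of weights)")
```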
Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
- Computer Science, NeurIPS
- 2020
This paper provides the first principled understanding of the solution bias and the convergence slowdown due to objective inconsistency and proposes FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
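A small sketch of normalized averaging in the plain-SGD case follows (one common reading of the method; treat the effective step count and other details as assumptions rather than the paper's exact general formulation): each client's cumulative update is divided by its number of local steps before averaging, so clients that ran more steps no longer dominate the update direction.

```python
# Sketch of FedNova-style normalized averaging for plain SGD; details are assumptions.
import numpy as np

def fednova_aggregate(w_global, client_updates, local_steps, data_sizes):
    p = np.asarray(data_sizes, dtype=float)
    p = p / p.sum()                                    # FedAvg-style data-size weights
    # Normalize each client's cumulative update by its own number of local steps.
    normalized = [delta / tau for delta, tau in zip(client_updates, local_steps)]
    d = sum(pi * di for pi, di in zip(p, normalized))  # consistent average direction
    tau_eff = float(np.dot(p, local_steps))            # rescale to a comparable magnitude
    return w_global + tau_eff * d

if __name__ == "__main__":
    w = np.zeros(3)
    # Two clients pulling in different directions; one runs 4x more local steps.
    updates = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 4.0, 0.0])]
    steps = [1, 4]
    sizes = [100, 100]
    print("naive FedAvg:", w + np.average(updates, axis=0, weights=sizes))  # skewed
    print("FedNova:     ", fednova_aggregate(w, updates, steps, sizes))     # balanced
```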