Corpus ID: 221644574

Federated Continual Learning with Weighted Inter-client Transfer

@inproceedings{Yoon2021FederatedCL,
  title={Federated Continual Learning with Weighted Inter-client Transfer},
  author={Jaehong Yoon and Wonyoung Jeong and Giwoong Lee and Eunho Yang and Sung Ju Hwang},
  booktitle={ICML},
  year={2021}
}
There has been a surge of interest in continual learning and federated learning, both of which are important in deep neural networks in real-world scenarios. Yet little research has been done regarding the scenario where each client learns on a sequence of tasks from a private local data stream. This problem of federated continual learning poses new challenges to continual learning, such as utilizing knowledge from other clients, while preventing interference from irrelevant knowledge. To…

Citations

INTER-CLIENT CONSISTENCY & DISJOINT LEARNING
While existing federated learning approaches mostly require that clients have fully-labeled data to train on, in realistic settings, data obtained at the client side often comes without any…
TornadoAggregate: Accurate and Scalable Federated Learning via the Ring-Based Architecture
TLDR: This work proposes a novel algorithm called TornadoAggregate that improves both accuracy and scalability by facilitating the ring architecture, and establishes three principles to reduce variance: Ring-Aware Grouping, Small Ring, and Ring Chaining.
Online Coreset Selection for Rehearsal-based Continual Learning
TLDR: This work proposes Online Coreset Selection (OCS), a simple yet effective method that selects the most representative and informative coreset at each iteration and trains on it in an online manner, demonstrating that it improves task adaptation and prevents catastrophic forgetting in a sample-efficient manner (see the sketch below).
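A rough picture of what per-minibatch coreset selection can look like, assuming examples are scored by how well their gradients align with the mean minibatch gradient; the function name, the last-layer-only gradients, and the scoring rule are illustrative assumptions, not the actual OCS criteria:

```python
import torch

def select_coreset(model, loss_fn, inputs, targets, k):
    """Illustrative per-minibatch coreset selection (not the exact OCS rule).

    Scores each example by the cosine similarity between its gradient with
    respect to the last parameter tensor and the mean minibatch gradient,
    then keeps the top-k highest-scoring examples.
    """
    last_param = list(model.parameters())[-1]
    per_example_grads = []
    for x, y in zip(inputs, targets):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        (g,) = torch.autograd.grad(loss, last_param)
        per_example_grads.append(g.flatten())
    grads = torch.stack(per_example_grads)
    mean_grad = grads.mean(dim=0, keepdim=True)
    scores = torch.nn.functional.cosine_similarity(grads, mean_grad)
    idx = scores.topk(min(k, len(scores))).indices
    return inputs[idx], targets[idx]
```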
A distillation-based approach integrating continual learning and federated learning for pervasive services
Federated Learning, a new machine learning paradigm enhancing the use of edge devices, is receiving a lot of attention in the pervasive community to support the development of smart services.
Adversarial training in communication constrained federated learning
TLDR: FedDynAT, a novel algorithm for performing adversarial training in the federated setting, significantly improves both natural and adversarial accuracy, as well as model convergence time, by reducing model drift.
Concept drift detection and adaptation for federated and continual learning
TLDR: This work proposes a new method, Concept-Drift-Aware Federated Averaging (CDA-FedAvg), an extension of the most popular federated algorithm, FedAvg, enhancing it for continual adaptation under concept drift; it demonstrates the weaknesses of regular FedAvg and shows that CDA-FedAvg outperforms it in this type of scenario.
Federated Reconnaissance: Efficient, Distributed, Class-Incremental Learning
We describe federated reconnaissance, a class of learning problems in which distributed clients learn new concepts independently and communicate that knowledge efficiently. In particular, we propose…
Local-Global Knowledge Distillation in Heterogeneous Federated Learning with Non-IID Data
  • Dezhong Yao, Wanning Pan, +5 authors Lichao Sun
  • Computer Science
  • 2021
Federated learning enables multiple clients to collaboratively learn a global model by periodically aggregating the clients’ models without transferring the local data. However, due to the…

References

Showing 1-10 of 47 references
Federated Continual Learning with Adaptive Parameter Communication
TLDR: This work proposes a novel federated continual learning framework, Federated Continual Learning with Adaptive Parameter Communication, which additively decomposes the network weights into global shared parameters and sparse task-specific parameters, and allows inter-client knowledge transfer by communicating the sparse task-specific parameters (a sketch of this decomposition follows).
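To make the additive decomposition concrete, here is a minimal PyTorch sketch, assuming a per-layer split into a dense shared base gated by a trainable mask plus a sparse task-adaptive matrix; the class name, the sigmoid gating, and the L1 penalty are assumptions of the sketch rather than the authors' implementation:

```python
import torch
import torch.nn as nn


class DecomposedLinear(nn.Module):
    """Illustrative layer with weight = shared_base * sigmoid(mask) + task_adaptive.

    The split into a dense shared (federated) base and a sparse task-specific
    part mirrors the description above; the sigmoid gating and the L1 penalty
    below are assumptions of this sketch, not the paper's exact scheme.
    """

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.shared_base = nn.Parameter(0.01 * torch.randn(out_dim, in_dim))
        self.mask = nn.Parameter(torch.zeros(out_dim))            # per-output gate
        self.task_adaptive = nn.Parameter(torch.zeros(out_dim, in_dim))
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def weight(self):
        gate = torch.sigmoid(self.mask).unsqueeze(1)              # (out_dim, 1)
        return self.shared_base * gate + self.task_adaptive

    def forward(self, x):
        return nn.functional.linear(x, self.weight(), self.bias)

    def sparsity_penalty(self):
        # Keeps the task-specific part sparse, so only it (and the small mask)
        # needs to be communicated between clients, not the dense base.
        return self.task_adaptive.abs().sum()
```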
Communication-Efficient Federated Deep Learning with Asynchronous Model Update and Temporally Weighted Aggregation
TLDR: The results demonstrate that the proposed asynchronous federated deep learning outperforms the baseline algorithm both in terms of communication cost and model accuracy (a sketch of temporally weighted aggregation follows).
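The temporally weighted aggregation in the title can be sketched as follows, assuming each client's contribution is scaled by its data size and decayed exponentially with the staleness of its last update; the decay base and the exact formula are assumptions and may differ from the paper's:

```python
def temporally_weighted_aggregate(client_params, client_sizes,
                                  last_update_round, current_round, decay=2.0):
    """Hypothetical temporally weighted aggregation.

    client_params: list of per-client parameter lists (plain floats here).
    Each client's weight is its data size times decay**(-staleness), so stale
    updates contribute less to the aggregate.
    """
    weights = [
        size * decay ** (-(current_round - t))
        for size, t in zip(client_sizes, last_update_round)
    ]
    total = sum(weights)
    aggregated = [0.0] * len(client_params[0])
    for params, w in zip(client_params, weights):
        for i, p in enumerate(params):
            aggregated[i] += (w / total) * p
    return aggregated
```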
SCAFFOLD: Stochastic Controlled Averaging for On-Device Federated Learning
TLDR: A new Stochastic Controlled Averaging algorithm (SCAFFOLD) which uses control variates to reduce the drift between different clients; it is proved that the algorithm requires significantly fewer rounds of communication and benefits from favorable convergence guarantees (see the sketch below).
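The control-variate correction can be pictured with a single local step; this is a sketch under the assumption that `c_global` and `c_local` are lists of tensors matching the model's parameters, and it omits how the control variates themselves are maintained and refreshed after local training:

```python
import torch

def scaffold_local_step(model, batch, loss_fn, lr, c_global, c_local):
    """One local step with a SCAFFOLD-style control-variate correction.

    The parameter update uses grad + c_global - c_local, which counteracts
    client drift relative to plain local SGD.
    """
    inputs, targets = batch
    model.zero_grad()
    loss_fn(model(inputs), targets).backward()
    with torch.no_grad():
        for p, cg, cl in zip(model.parameters(), c_global, c_local):
            p -= lr * (p.grad + cg - cl)
```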
Asynchronous Online Federated Learning for Edge Devices
TLDR: An Asynchronous Online Federated Learning (ASO-Fed) framework is presented, in which edge devices perform online learning with continuously streaming local data and a central server aggregates model parameters from local clients.
Federated Optimization in Heterogeneous Networks
TLDR: This work introduces a framework, FedProx, to tackle heterogeneity in federated networks; it provides convergence guarantees for this framework when learning over data from non-identical distributions (statistical heterogeneity) and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (the proximal objective is sketched below).
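A minimal sketch of the proximal local objective, assuming the standard form task_loss + (mu/2) * ||w - w_global||^2; `mu` is a tunable hyperparameter and the value below is only an illustrative default:

```python
import torch

def fedprox_local_loss(model, global_params, task_loss, mu=0.01):
    """Task loss plus a FedProx-style proximal term (mu/2) * ||w - w_global||^2.

    Keeps each client's update close to the current global model, which is
    what allows heterogeneous amounts of local work.
    """
    prox = 0.0
    for p, g in zip(model.parameters(), global_params):
        prox = prox + (p - g.detach()).pow(2).sum()
    return task_loss + 0.5 * mu * prox
```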
Overcoming Forgetting in Federated Learning on Non-IID Data
TLDR: This work adds a penalty term to the loss function, compelling all local models to converge to a shared optimum, and shows that this can be done efficiently for communication (adding no further privacy risks), scaling with the number of nodes in the distributed setting (a sketch of such a penalty follows).
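A generic version of such a penalty can be sketched as an importance-weighted quadratic term pulling local parameters toward an anchor; the choice of anchor (e.g. an aggregate of the other clients' models), the per-parameter importance weights, and the single coefficient `lam` are all assumptions of this sketch, not necessarily the paper's formulation:

```python
import torch

def penalized_local_loss(model, anchor_params, importance, task_loss, lam=1.0):
    """Task loss plus an importance-weighted quadratic penalty toward an anchor.

    anchor_params: parameters the local model should stay close to (assumed to
    summarize the other clients); importance: per-parameter weights (e.g. a
    diagonal Fisher estimate). Both are assumptions of this sketch.
    """
    penalty = 0.0
    for p, a, f in zip(model.parameters(), anchor_params, importance):
        penalty = penalty + (f * (p - a).pow(2)).sum()
    return task_loss + lam * penalty
```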
Federated Learning with Matched Averaging
TLDR: This work proposes the Federated Matched Averaging (FedMA) algorithm, designed for federated learning of modern neural network architectures, e.g. convolutional neural networks (CNNs) and LSTMs, and indicates that FedMA outperforms popular state-of-the-art federated learning algorithms on deep CNN and LSTM architectures trained on real-world datasets, while improving communication efficiency.
Bayesian Nonparametric Federated Learning of Neural Networks
TLDR: A Bayesian nonparametric framework for federated learning with neural networks is developed that allows for a more expressive global network without additional supervision, data pooling, and with as few as a single communication round.
Learning to Learn without Forgetting By Maximizing Transfer and Minimizing Interference
TLDR: This work proposes a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples, and introduces a new algorithm, Meta-Experience Replay, that directly exploits this view by combining experience replay with optimization-based meta-learning (a rough sketch follows).
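A very rough sketch of combining experience replay with a Reptile-style meta-update, in the spirit of Meta-Experience Replay; the batch construction, reservoir sampling, and interpolation schedule of the actual algorithm are omitted, and all constants are illustrative:

```python
import random
import torch

def mer_style_update(model, buffer, current_example, loss_fn,
                     inner_lr=0.01, beta=0.1):
    """Rough experience-replay + meta-learning step (not the exact MER algorithm).

    A few inner SGD steps are taken on replayed plus current examples, then the
    original weights are interpolated toward the adapted ones (Reptile-style).
    The buffer here simply grows; the real algorithm uses reservoir sampling.
    """
    before = [p.detach().clone() for p in model.parameters()]
    samples = random.sample(buffer, k=min(4, len(buffer))) + [current_example]
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for x, y in samples:                      # inner SGD steps
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    with torch.no_grad():                     # Reptile-style interpolation
        for p, b in zip(model.parameters(), before):
            p.copy_(b + beta * (p - b))
    buffer.append(current_example)
```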
Lifelong Learning with Dynamically Expandable Networks
TLDR: The obtained network, fine-tuned on all tasks, achieves significantly better performance than the batch models, which shows that the method can be used to estimate the optimal network structure even when all tasks are available from the start.