Continual Prune-and-Select: Class-incremental learning with specialized subnetworks

@article{Dekhovich2022ContinualPC,
  title={Continual Prune-and-Select: Class-incremental learning with specialized subnetworks},
  author={Aleksandr Dekhovich and David M. J. Tax and Marcel H. F. Sluiter and Miguel A. Bessa},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.04952}
}
The human brain is capable of learning tasks sequentially mostly without forgetting. However, deep neural networks (DNNs) suffer from catastrophic forgetting when learning one task after another. We address this challenge considering a class-incremental learning scenario where the DNN sees test data without knowing the task from which this data originates. During training, Continual-Prune-and-Select (CP&S) finds a subnetwork within the DNN that is responsible for solving a given task. Then… 
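The paper's full method is not reproduced here, but a minimal PyTorch-style sketch of the underlying idea may help fix the concept: one binary mask per task over shared weights, with the subnetwork chosen at test time by output confidence. The names (MaskedLinear, select_subnetwork) and the confidence-based selection rule are illustrative assumptions, not the authors' implementation; in the paper the per-task masks come from pruning, while here they are left as all-ones placeholders.

```python
# Minimal sketch of the CP&S idea (not the authors' code): one binary mask per task
# over shared weights, with the subnetwork picked at test time by output confidence.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    def __init__(self, in_features, out_features, num_tasks):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # One binary mask per task; in CP&S these would be produced by pruning,
        # here they are all-ones placeholders.
        self.register_buffer("masks", torch.ones(num_tasks, out_features, in_features))

    def forward(self, x, task_id):
        return F.linear(x, self.weight * self.masks[task_id], self.bias)

def select_subnetwork(layer, x, num_tasks):
    """Pick the task whose masked subnetwork gives the most confident prediction."""
    with torch.no_grad():
        confidences = [F.softmax(layer(x, t), dim=-1).max(dim=-1).values.mean()
                       for t in range(num_tasks)]
    return int(torch.stack(confidences).argmax())
```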

Cooperative data-driven modeling

A continual learning method for recurrent neural networks is developed to predict history-dependent plasticity behavior, and is applied here for the first time to solid mechanics.

References


Class Incremental Learning With Task-Selection

A novel knowledge distillation-based class-incremental learning method with a task-selective autoencoder (TsAE) that reconstructs the feature map of each task, achieving higher classification accuracy and less forgetting than state-of-the-art methods.
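A hedged sketch of the task-selection idea described above, assuming one small autoencoder per task and that the task whose autoencoder reconstructs the incoming feature map with the lowest error is selected; the class and function names are illustrative, not the TsAE implementation.

```python
# Hedged sketch: select a task by autoencoder reconstruction error, assuming one
# small autoencoder per task trained on that task's feature maps.
import torch
import torch.nn as nn

class TaskAutoencoder(nn.Module):
    def __init__(self, feat_dim, bottleneck=32):
        super().__init__()
        self.enc = nn.Linear(feat_dim, bottleneck)
        self.dec = nn.Linear(bottleneck, feat_dim)

    def forward(self, features):
        return self.dec(torch.relu(self.enc(features)))

def pick_task(autoencoders, features):
    """Return the index of the task whose autoencoder reconstructs `features` best."""
    with torch.no_grad():
        errors = [((ae(features) - features) ** 2).mean() for ae in autoencoders]
    return int(torch.stack(errors).argmin())
```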

Overcoming catastrophic forgetting in neural networks

It is shown that the limitation of connectionist models can be overcome and networks can be trained to maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
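As a concrete reading of "selectively slowing down learning on the weights important for previous tasks", here is a hedged sketch of an elastic-weight-consolidation-style quadratic penalty; the diagonal importance dictionary `fisher`, the parameter snapshot `old_params`, and the strength `lam` are assumed inputs, not quantities taken from this summary.

```python
# Hedged sketch of an EWC-style penalty: a quadratic cost on moving parameters
# that were important for previous tasks. `fisher` holds diagonal importance
# estimates and `old_params` a snapshot taken after the previous task.
import torch

def ewc_penalty(model, fisher, old_params, lam=100.0):
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss
```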

iTAML: An Incremental Task-Agnostic Meta-learning Approach

A novel meta-learning approach that seeks to maintain an equilibrium between all the encountered tasks, ensured by a new meta-update rule which avoids catastrophic forgetting and is task-agnostic.

SpaceNet: Make Free Space For Continual Learning

A Continual Learning Survey: Defying Forgetting in Classification Tasks

This work focuses on task incremental classification, where tasks arrive sequentially and are delineated by clear boundaries, and develops a novel framework to continually determine the stability-plasticity trade-off of the continual learner.

Class-incremental Learning via Deep Model Consolidation

A class-incremental learning paradigm called Deep Model Consolidation (DMC), which works well even when the original training data is not available, and demonstrates significantly better performance in image classification and object detection in the single-headed incremental learning setting.

An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks

It is found that training with the dropout algorithm is consistently best: dropout adapts best to the new task, remembers the old task best, and has the best trade-off curve between these two extremes.

Learning a Unified Classifier Incrementally via Rebalancing

This work develops a new framework for incrementally learning a unified classifier, i.e. a classifier that treats old and new classes uniformly, and incorporates three components, cosine normalization, a less-forget constraint, and inter-class separation, to mitigate the adverse effects of class imbalance.
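Of the three components listed, cosine normalization is the easiest to show in isolation; below is a hedged sketch in which logits are scaled cosine similarities between L2-normalized features and class weights, so old and new classes are scored on the same footing. The fixed scale and the class name are assumptions, not the paper's implementation.

```python
# Hedged sketch of a cosine-normalized classifier head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, feat_dim, num_classes, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale  # assumed fixed here; sometimes learned

    def forward(self, features):
        # Cosine similarity between normalized features and normalized class weights.
        return self.scale * F.linear(F.normalize(features, dim=-1),
                                     F.normalize(self.weight, dim=-1))
```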

Continual Learning via Neural Pruning

Continual Learning via Neural Pruning is introduced, a new method aimed at lifelong learning in fixed-capacity models based on neuronal model sparsification; the concept of graceful forgetting is also formalized and incorporated.

Learning without Forgetting

This work proposes the Learning without Forgetting method, which uses only new-task data to train the network while preserving the original capabilities, and performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques.
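A hedged sketch of a Learning-without-Forgetting-style objective: the old model's soft outputs on the new-task data serve as distillation targets for the old output head, while a standard cross-entropy term fits the new labels. The temperature T, the weight alpha, and the argument names are illustrative assumptions.

```python
# Hedged sketch of an LwF-style loss combining cross-entropy on new labels with
# distillation toward the old model's soft outputs on the new data.
import torch.nn.functional as F

def lwf_loss(old_head_logits, old_model_logits, new_head_logits, labels, T=2.0, alpha=1.0):
    distill = F.kl_div(F.log_softmax(old_head_logits / T, dim=-1),
                       F.softmax(old_model_logits / T, dim=-1),
                       reduction="batchmean") * (T * T)
    return F.cross_entropy(new_head_logits, labels) + alpha * distill
```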
...