Mnemonics Training: Multi-Class Incremental Learning Without Forgetting

@article{Liu2020MnemonicsTM,
  title={Mnemonics Training: Multi-Class Incremental Learning Without Forgetting},
  author={Yaoyao Liu and Anan Liu and Yuting Su and B. Schiele and Qianru Sun},
  journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={12242-12251}
}
  • Yaoyao Liu, Anan Liu, Yuting Su, B. Schiele, Qianru Sun
  • Published 2020
  • Computer Science, Mathematics
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Multi-Class Incremental Learning (MCIL) aims to learn new concepts by incrementally updating a model trained on previous concepts. However, there is an inherent trade-off between effectively learning new concepts and avoiding catastrophic forgetting of previous ones. To alleviate this issue, it has been proposed to keep a few examples of the previous concepts, but the effectiveness of this approach heavily depends on the representativeness of these examples. This paper proposes a novel and…
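
The mnemonics idea suggested by the title is to make the stored exemplars themselves trainable rather than fixed samples. Below is a heavily simplified, single-level PyTorch sketch of that idea, not the paper's exact bi-level optimization; refine_exemplars, old_model, init_images, and labels are illustrative placeholders.

import torch
import torch.nn.functional as F

def refine_exemplars(old_model, init_images, labels, steps=50, lr=0.01):
    # Treat the exemplar images as parameters and optimize them so the frozen
    # model from the previous phase still classifies them confidently.
    for p in old_model.parameters():
        p.requires_grad_(False)
    old_model.eval()
    exemplars = init_images.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([exemplars], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(old_model(exemplars), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return exemplars.detach()  # used as the rehearsal memory for old classes
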

Citations

Essentials for Class Incremental Learning
TLDR: This work sheds light on the causes of catastrophic forgetting in class-IL and shows that a combination of simple components and a loss that balances intra-task and inter-task learning can already resolve forgetting to the same extent as more complex measures proposed in the literature.
Few-Shot Incremental Learning with Continually Evolved Classifiers
TLDR: This paper adopts a simple but effective decoupled learning strategy of representations and classifiers in which only the classifiers are updated in each incremental session, avoiding knowledge forgetting in the representations, and proposes a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
ZS-IL: Looking Back on Learned Experiences For Zero-Shot Incremental Learning
TLDR: This paper proposes Zero-Shot Incremental Learning, which not only replays past experiences the model has learned but also does so in a zero-shot manner, and introduces a memory recovery paradigm in which the network is queried to synthesize past exemplars whenever a new task (class) emerges.
GDumb: A Simple Approach that Questions Our Progress in Continual Learning
We discuss a general formulation for the Continual Learning (CL) problem for classification—a learning task where a stream provides samples to a learner and the goal of the learner, depending on the…
Learnable Expansion-and-Compression Network for Few-shot Class-Incremental Learning
TLDR: A learnable expansion-and-compression network (LEC-Net) that aims to simultaneously solve the catastrophic forgetting and model overfitting problems in a unified framework, and demonstrates the potential to be a general incremental learning approach with dynamic model expansion capability.
Continual Learning From Unlabeled Data Via Deep Clustering
  • Jiangpeng He
  • 2021
Continual learning, a promising future learning strategy, aims to learn new tasks incrementally using less computation and memory resources instead of retraining the model from scratch whenever new…
Does Continual Learning = Catastrophic Forgetting?
TLDR: A novel yet simple algorithm, YASS, is introduced that outperforms state-of-the-art methods in the class-incremental categorization learning task, and DyRT, a novel tool for tracking the dynamics of representation learning in continual models, is presented.
A Comprehensive Study of Class Incremental Learning Algorithms for Visual Tasks
TLDR: This work defines six desirable properties of incremental learning algorithms and analyzes existing methods according to these properties, introduces a unified formalization of the class-incremental learning problem, and proposes a common evaluation framework more thorough than existing ones.
Balanced Softmax Cross-Entropy for Incremental Learning
TLDR: This work proposes the use of the Balanced Softmax Cross-Entropy loss and shows that it can be combined with existing methods for incremental learning to improve their performance while, in some cases, also decreasing the computational cost of the training procedure.
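
A minimal sketch of a balanced softmax cross-entropy, assuming per-class sample counts are available; the exact formulation used in the cited paper may differ.

import torch
import torch.nn.functional as F

def balanced_softmax_ce(logits, targets, class_counts):
    # Shift each logit by the log class prior so the imbalance between
    # plentiful new-class data and the small old-class memory is compensated.
    prior = class_counts.float() / class_counts.sum()
    return F.cross_entropy(logits + torch.log(prior + 1e-12), targets)
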
Class-incremental learning: survey and performance evaluation
TLDR: This paper provides a complete survey of existing methods for incremental learning and, in particular, an extensive experimental evaluation of twelve class-incremental methods, including a comparison on multiple large-scale datasets, an investigation of small and large domain shifts, and a comparison across various network architectures.

References

Showing 1-10 of 41 references
End-to-End Incremental Learning
TLDR: This work proposes an approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes, based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes and a cross-entropy loss to learn the new classes.
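
The loss described above, a distillation term on the old-class outputs plus cross-entropy on the labels, can be sketched roughly as follows; alpha and the temperature T are illustrative hyperparameters rather than the paper's settings.

import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, targets, T=2.0, alpha=0.5):
    # Cross-entropy on the ground-truth labels (new and rehearsed old samples).
    ce = F.cross_entropy(new_logits, targets)
    # Distillation: keep the new model's old-class predictions close to the
    # frozen old model's soft outputs, using temperature-scaled softmax.
    n_old = old_logits.size(1)
    distill = F.kl_div(
        F.log_softmax(new_logits[:, :n_old] / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * distill
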
Learning a Unified Classifier Incrementally via Rebalancing
TLDR: This work develops a new framework for incrementally learning a unified classifier, i.e. a classifier that treats both old and new classes uniformly, and incorporates three components (cosine normalization, a less-forget constraint, and inter-class separation) to mitigate the adverse effects of the imbalance.
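
Of the three components, cosine normalization is the easiest to illustrate: features and class weights are L2-normalized so that old and new classes produce logits on a comparable scale. A minimal sketch; the learned scale factor is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    # Logits are scaled cosine similarities between normalized features and
    # normalized class weight vectors, so no class dominates by magnitude.
    def __init__(self, in_features, n_classes):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, in_features) * 0.01)
        self.scale = nn.Parameter(torch.tensor(10.0))

    def forward(self, features):
        return self.scale * F.linear(
            F.normalize(features, dim=1), F.normalize(self.weight, dim=1)
        )
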
A Strategy for an Uncompromising Incremental Learner
TLDR: This article designs a strategy involving generative models and the distillation of dark knowledge as a means of hallucinating data, along with appropriate targets, from past distributions, and shows that this phantom sampling helps avoid catastrophic forgetting during incremental learning.
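
A rough sketch of the generative-replay idea described here: a generator trained on past data hallucinates inputs, and the frozen old model supplies soft ("dark knowledge") targets for them; generator and old_model are assumed placeholders.

import torch
import torch.nn.functional as F

def phantom_batch(generator, old_model, batch_size, z_dim=100):
    # Hallucinate a batch of pseudo-exemplars and label them with the old
    # model's soft predictions; these are mixed into training on the new task.
    z = torch.randn(batch_size, z_dim)
    with torch.no_grad():
        fake_x = generator(z)
        soft_targets = F.softmax(old_model(fake_x), dim=1)
    return fake_x, soft_targets
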
Large Scale Incremental Learning
  • Yue Wu, Yinpeng Chen, +4 authors Y. Fu
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR: This work found that the last fully connected layer has a strong bias towards the new classes, that this bias can be corrected by a linear model, and that with only two bias parameters the method performs remarkably well on two large datasets.
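
The correction can be sketched as a tiny two-parameter module applied only to the new-class logits; in the cited work it is fitted on a small balanced validation set after each incremental step, which is omitted here.

import torch
import torch.nn as nn

class BiasCorrection(nn.Module):
    # Rescale and shift only the new-class logits; old-class logits pass through.
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, logits, n_old):
        corrected = logits.clone()
        corrected[:, n_old:] = self.alpha * logits[:, n_old:] + self.beta
        return corrected
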
Meta-Transfer Learning through Hard Tasks
TLDR: This work proposes a novel approach called meta-transfer learning (MTL), which learns to transfer the weights of a deep NN for few-shot learning tasks, and introduces the hard task (HT) meta-batch scheme as an effective learning curriculum of few-shot classification tasks.
From N to N+1: Multiclass Transfer Incremental Learning
TLDR: This paper presents a discriminative method based on a Least-Squares Support Vector Machine formulation that addresses the issues of transferring knowledge to the new class and preserving what has already been learned on the source models when learning a new class.
Meta-Transfer Learning for Few-Shot Learning
TLDR: A novel few-shot learning method called meta-transfer learning (MTL), which learns to adapt a deep NN for few-shot learning tasks and introduces the hard task (HT) meta-batch scheme as an effective learning curriculum for MTL.
Measuring Catastrophic Forgetting in Neural Networks
TLDR: New metrics and benchmarks are introduced for directly comparing five different mechanisms designed to mitigate catastrophic forgetting in neural networks: regularization, ensembling, rehearsal, dual-memory, and sparse-coding.
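
As a rough illustration of what such metrics can capture (not necessarily the exact ones proposed in the paper), one common choice is the average drop from each task's best earlier accuracy to its accuracy after the final task:

def average_forgetting(acc_history):
    # acc_history[t][k]: accuracy on task k measured after training on task t (k <= t).
    final = acc_history[-1]
    drops = []
    for k in range(len(final) - 1):  # the last task cannot have been forgotten yet
        best = max(acc_history[t][k] for t in range(k, len(acc_history) - 1))
        drops.append(best - final[k])
    return sum(drops) / len(drops) if drops else 0.0
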
Overcoming Catastrophic Forgetting for Continual Learning via Model Adaptation
TLDR: This paper proposes a very different approach, called Parameter Generation and Model Adaptation (PGMA), to dealing with the problem of catastrophic forgetting in standard neural network architectures.
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning…
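
The essence of the algorithm is a differentiable inner gradient step: adapt the parameters on a support set, evaluate the adapted parameters on a query set, and backpropagate through the adaptation to update the initialization. A minimal sketch using torch.func.functional_call; loss_fn, support, and query are placeholders.

import torch
from torch.func import functional_call

def maml_meta_loss(model, loss_fn, support, query, inner_lr=0.01):
    # Inner loop: one differentiable gradient step on the support set.
    params = dict(model.named_parameters())
    sx, sy = support
    qx, qy = query
    inner_loss = loss_fn(functional_call(model, params, (sx,)), sy)
    grads = torch.autograd.grad(inner_loss, tuple(params.values()), create_graph=True)
    adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
    # Outer objective: query loss under the adapted parameters; calling
    # .backward() on it updates the meta-initialization through the inner step.
    return loss_fn(functional_call(model, adapted, (qx,)), qy)
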