Corpus ID: 221377025

Initial Classifier Weights Replay for Memoryless Class Incremental Learning

@article{Belouadah2020InitialCW,
  title={Initial Classifier Weights Replay for Memoryless Class Incremental Learning},
  author={Eden Belouadah and Adrian Daniel Popescu and Ioannis Kanellos},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.13710}
}
Incremental Learning (IL) is useful when artificial systems need to deal with streams of data and do not have access to all data at all times. The most challenging setting requires constant complexity of the deep model and incremental model updates without access to a bounded memory of past data. In this setting, the representations of past classes are strongly affected by catastrophic forgetting. To mitigate its negative effect, an adapted fine-tuning which includes knowledge distillation is usually… 
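The adapted fine-tuning mentioned above is commonly implemented by keeping a frozen copy of the previous model and adding a distillation term that penalizes drift of the old-class outputs. Below is a minimal PyTorch-style sketch of such a combined loss; the function and argument names are illustrative, and this is not necessarily the exact formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, targets, n_old, T=2.0, lam=1.0):
    """Cross-entropy over all current classes plus a distillation term that
    keeps old-class outputs close to those of the frozen previous model.

    new_logits: [B, n_old + n_new] logits of the model being fine-tuned
    old_logits: [B, n_old] logits of the frozen previous model
    targets:    [B] ground-truth labels for the current batch
    """
    ce = F.cross_entropy(new_logits, targets)
    # Distill only over the old classes, with temperature T (LwF/iCaRL style).
    p_old = F.softmax(old_logits / T, dim=1)
    log_p_new = F.log_softmax(new_logits[:, :n_old] / T, dim=1)
    kd = F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)
    return ce + lam * kd
```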

Citations

Dataset Knowledge Transfer for Class-Incremental Learning without Memory

TLDR
This work tackles class-incremental learning without memory by adapting prediction bias correction, a method which makes predictions of past and new classes more comparable, and introduces a two-step learning process which allows the transfer of bias correction parameters between reference and target datasets.

Class-Incremental Learning with Generative Classifiers

TLDR
This paper proposes a new strategy for class-incremental learning: generative classification, which is to learn the joint distribution p(x, y), factorized as p(x|y)p(y), and to perform classification using Bayes’ rule.
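To make the generative-classification idea concrete, the sketch below fits a class-conditional density p(x|y) and a prior p(y) for each class and classifies with Bayes' rule. The diagonal-Gaussian density is only an illustrative stand-in; the cited paper learns richer generative models per class.

```python
import numpy as np

def fit_generative_classifier(X, y):
    """Fit a diagonal Gaussian p(x|y) and a prior p(y) for each class label."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6, len(Xc) / len(X))
    return params

def predict(params, X):
    """Classify with Bayes' rule: argmax_y  log p(x|y) + log p(y)."""
    classes, scores = [], []
    for c, (mu, var, prior) in params.items():
        log_px_y = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(axis=1)
        classes.append(c)
        scores.append(log_px_y + np.log(prior))
    return np.array(classes)[np.argmax(np.stack(scores, axis=1), axis=1)]
```

In the incremental setting, adding a new class only requires fitting that class's own density and prior, which is why this strategy avoids directly interfering with previously learned classes.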

Self-distilled Knowledge Delegator for Exemplar-free Class Incremental Learning

TLDR
This paper introduces a so-called knowledge delegator, which is capable of transferring knowledge from the trained model to a randomly re-initialized new model by generating informative samples, and achieves comparable performance to some exemplar-based methods without accessing any exemplars.

Coarse-To-Fine Incremental Few-Shot Learning

TLDR
Knowe is proposed: it learns, normalizes, and freezes a classifier’s weights from fine labels after first learning an embedding space contrastively from coarse labels, and it outperforms all recent relevant CIL/FSCIL methods adapted to this new problem setting, which is addressed here for the first time.

Continual Contrastive Self-supervised Learning for Image Classification

TLDR
This paper makes the first attempt to implement continual contrastive self-supervised learning by proposing a rehearsal method that keeps a few exemplars from previous data and builds an extra sample queue to help the network distinguish between previous and current data and prevent mutual interference while learning their feature representations.
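A generic rehearsal buffer of the kind described above can be sketched with reservoir sampling over a fixed capacity; this is an illustrative stand-in rather than the paper's specific sample queue.

```python
import random

class ExemplarBuffer:
    """Keep at most `capacity` past samples via reservoir sampling and mix them
    into new batches so the model keeps seeing previous data (rehearsal)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = sample  # replace a stored exemplar at random

    def draw(self, k):
        return random.sample(self.data, min(k, len(self.data)))
```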

PlaStIL: Plastic and Stable Memory-Free Class-Incremental Learning

TLDR
This work proposes a method which has a similar number of parameters but distributes them differently in order to find a better balance between plasticity and stability, and incorporates it into any transfer-based method designed for memory-free incremental learning.

Recent Advances of Continual Learning in Computer Vision: An Overview

TLDR
This paper presents a comprehensive review of the recent progress of continual learning in computer vision, grouped by their representative techniques, including regularization, knowledge distillation, memory, generative replay, parameter isolation, and a combination of the above techniques.

SIL-LAND: Segmentation Incremental Learning in Aerial Imagery via LAbel Number Distribution Consistency

TLDR
This paper proposes an incremental learning method named SIL-LAND, which improves accuracy by making the label number distribution (LAND) of the method close to that of static learning, and proposes a similarity measure module to increase the intraclass similarity between the prototype and the corresponding feature vectors.

Class-Incremental Learning for Wireless Device Identification in IoT

TLDR
A new metric is provided to measure the degree of topological maturity of DNN models from the degree of conflict of class-specific fingerprints in IL-enabled NDI systems, and a new channel separation-enabled IL (CSIL) scheme without using historical data is proposed, which can automatically separate devices’ fingerprints in different learning stages and avoid potential conflict.

References

Showing 1-10 of 37 references

ScaIL: Classifier Weights Scaling for Class Incremental Learning

TLDR
This work proposes simple but efficient scaling of past classifiers’ weights to make them more comparable to those of new classes, and questions the utility of the widely used distillation loss component of incremental learning algorithms by comparing it to vanilla fine-tuning in the presence of a bounded memory.
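One simple way to make past and new classifier weights comparable is to rescale the past-class weight rows so that their average norm matches that of the newly learned classes. The snippet below is an illustrative normalization in that spirit, not necessarily ScaIL's exact statistic.

```python
import numpy as np

def rescale_past_weights(W, n_old):
    """Rescale rows 0..n_old-1 (past classes) of the classifier weight matrix W
    ([n_classes, d]) so their mean L2 norm matches that of the new-class rows."""
    old_norms = np.linalg.norm(W[:n_old], axis=1)
    new_norms = np.linalg.norm(W[n_old:], axis=1)
    scale = new_norms.mean() / (old_norms.mean() + 1e-12)
    W = W.copy()
    W[:n_old] *= scale
    return W
```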

IL2M: Class Incremental Learning With Dual Memory

TLDR
This paper presents a class incremental learning method which exploits fine tuning and a dual memory to reduce the negative effect of catastrophic forgetting in image recognition and shows that the proposed approach is more effective than a range of competitive state-of-the-art methods.

Learning a Unified Classifier Incrementally via Rebalancing

TLDR
This work develops a new framework for incrementally learning a unified classifier, i.e. a classifier that treats both old and new classes uniformly, and incorporates three components (cosine normalization, a less-forget constraint, and inter-class separation) to mitigate the adverse effects of the imbalance.
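Cosine normalization, one of the three components above, replaces dot-product logits with scaled cosine similarities so that class scores no longer depend on weight or feature magnitudes, which tend to differ between old and new classes. A minimal sketch (the scale eta is a single factor, learnable in the original work):

```python
import torch.nn.functional as F

def cosine_logits(features, weights, eta=10.0):
    """Cosine-normalized classifier: features [B, d], weights [C, d] -> logits [B, C]."""
    return eta * F.normalize(features, dim=1) @ F.normalize(weights, dim=1).t()
```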

A Simple Class Decision Balancing for Incremental Learning

TLDR
This scheme, dubbed SS-IL, is shown to give much more balanced class decisions, have much less biased scores, and outperform strong state-of-the-art baselines on several large-scale benchmark datasets, without any sophisticated post-processing of the scores.

End-to-End Incremental Learning

TLDR
This work proposes an approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes, based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes.

Revisiting Distillation and Incremental Classifier Learning

TLDR
This paper thoroughly analyzes the current state-of-the-art incremental learning method (iCaRL), concludes that its success is primarily due to knowledge distillation, and proposes a dynamic threshold moving algorithm that is able to successfully remove this bias.

Exemplar-Supported Generative Reproduction for Class Incremental Learning

TLDR
This paper uses Generative Adversarial Networks (GANs) to model the underlying distributions of old classes and select additional real exemplars as anchors to support the learned distribution, and shows that the method outperforms state-of-the-art approaches.

Continual learning with hypernetworks

TLDR
Insight is provided into the structure of low-dimensional task embedding spaces (the input space of the hypernetwork) and it is shown that task-conditioned hypernetworks demonstrate transfer learning.

Learning without Forgetting

TLDR
This work proposes the Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities, and performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques.

iCaRL: Incremental Classifier and Representation Learning

TLDR
iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail, and distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures.