Corpus ID: 210911896

An Adaptive Random Path Selection Approach for Incremental Learning

Authors: Jathushan Rajasegaran, Munawar Hayat, Salman Hameed Khan, Fahad Shahbaz Khan, Ling Shao, Ming-Hsuan Yang
Journal: arXiv (Computer Vision and Pattern Recognition)
In a conventional supervised learning setting, a machine learning model has access to examples of all object classes that it is expected to recognize at inference time. This results in a fixed model that lacks the flexibility to adapt to new learning tasks. In practical settings, learning tasks often arrive in a sequence, and models must continually learn to increment their previously acquired knowledge. Existing incremental learning approaches fall well below the state-of-the-art…

Citations
Continual-wav2vec2: an Application of Continual Learning for Self-Supervised Automatic Speech Recognition
This work tackles the problem of continually learning new language representations from audio without forgetting a previous language representation, and uses ideas from continual learning to transfer knowledge from a previous task to speed up pretraining on a new language task.
DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion
This paper proposes a transformer architecture based on a dedicated encoder/decoder framework that reaches excellent results on CIFAR100 and state-of-the-art performance on the large-scale ImageNet100 and ImageNet1000 while having fewer parameters than concurrent dynamic frameworks.
Reviewing continual learning from the perspective of human-level intelligence
  • Yifan Chang, Wenbo Li, +7 authors Haifeng Li
  • Computer Science
  • ArXiv
  • 2021
This paper surveys CL from a more macroscopic perspective based on the stability-versus-plasticity mechanism and rechecks CL at the level of artificial general intelligence.
BI-MAML: Balanced Incremental Approach for Meta Learning
The Balanced Incremental Model Agnostic Meta Learning (BI-MAML) system both outperforms other state-of-the-art models in classification accuracy on existing tasks and adapts efficiently to similar new tasks with fewer required shots.
iTAML: An Incremental Task-Agnostic Meta-learning Approach
A novel meta-learning approach that seeks to maintain an equilibrium between all encountered tasks, ensured by a new meta-update rule that avoids catastrophic forgetting and is task-agnostic.

References


Learning without Forgetting
  • Zhizhong Li, Derek Hoiem
  • Computer Science, Mathematics
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2018
This work proposes the Learning without Forgetting method, which uses only new-task data to train the network while preserving its original capabilities, and performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques.
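The core mechanism here is knowledge distillation: the old network's (temperature-softened) outputs on new-task data serve as soft targets for the network being trained. A minimal NumPy sketch of such a distillation loss, with illustrative names and a toy temperature, not the paper's actual implementation:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T gives softer targets."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(new_logits, old_logits, T=2.0):
    """Cross-entropy between the old model's softened outputs (targets)
    and the new model's softened outputs on the same input."""
    p_old = softmax(old_logits, T)   # recorded from the frozen old network
    p_new = softmax(new_logits, T)   # produced by the network being trained
    return float(-np.sum(p_old * np.log(p_new + 1e-12)))

# The loss is smallest when the new model reproduces the old outputs
# and grows as its predictions drift away from them.
same    = distillation_loss([2.0, 0.0], [2.0, 0.0])
drifted = distillation_loss([0.0, 2.0], [2.0, 0.0])
```

In the full method this term is added to the ordinary new-task classification loss, so the network learns the new task while being pulled toward its old behavior.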
Overcoming catastrophic forgetting in neural networks
It is shown that it is possible to overcome this limitation of connectionist models and train networks that maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
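The "selective slowing" in this approach (elastic weight consolidation) is a quadratic penalty that anchors each weight to its post-previous-task value, scaled by an estimate of that weight's importance (its Fisher information). A hedged NumPy sketch with made-up values; the names and numbers are illustrative only:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.
    A large F_i anchors weight i to its task-A value theta*_i."""
    return float(0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2))

theta_star = np.array([1.0, -2.0, 0.5])   # weights learned on task A
fisher     = np.array([10.0, 0.1, 0.0])   # per-weight importance (illustrative)
theta      = np.array([1.1, -1.0, 2.0])   # weights while training task B

# Moving the important first weight is costly; moving the third
# weight (F = 0) is free, so task B can repurpose it.
penalty = ewc_penalty(theta, theta_star, fisher, lam=2.0)
```

Adding this penalty to the new task's loss slows learning on important weights without freezing the network outright.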
iCaRL: Incremental Classifier and Representation Learning
iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail, which distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures.
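At inference time iCaRL classifies with a nearest-mean-of-exemplars rule over a small stored exemplar set per class. A simplified sketch, assuming toy 2-D "features" standing in for learned representations (the real method operates on L2-normalized CNN features):

```python
import numpy as np

def nme_predict(feature, exemplar_sets):
    """Assign the class whose L2-normalized exemplar mean is closest."""
    f = feature / np.linalg.norm(feature)
    best_class, best_dist = None, np.inf
    for cls, exemplars in exemplar_sets.items():
        mean = exemplars.mean(axis=0)          # prototype from stored exemplars
        mean = mean / np.linalg.norm(mean)
        dist = np.linalg.norm(f - mean)
        if dist < best_dist:
            best_class, best_dist = cls, dist
    return best_class

exemplar_sets = {
    0: np.array([[1.0, 0.1], [0.9, 0.0]]),   # stored exemplars for class 0
    1: np.array([[0.1, 1.0], [0.0, 0.9]]),   # stored exemplars for class 1
}
pred = nme_predict(np.array([0.8, 0.2]), exemplar_sets)
```

Because the prototypes are recomputed from exemplars with the current feature extractor, the classifier tracks the representation as it evolves across tasks.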
Error-Driven Incremental Learning in Deep Convolutional Neural Network for Large-Scale Image Classification
A training algorithm is developed that grows a network not only incrementally but also hierarchically, dividing it into component models that predict coarse-grained superclasses and those that return the final prediction within a superclass.
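The coarse-to-fine scheme described here can be sketched as a two-stage lookup: a coarse model picks a superclass, then the component model responsible for that superclass makes the final prediction. The callables below are toy stand-ins, not the paper's architecture:

```python
def hierarchical_predict(x, coarse_model, fine_models):
    """Route the input through a superclass predictor, then the
    component model responsible for that superclass."""
    superclass = coarse_model(x)
    return superclass, fine_models[superclass](x)

# Toy stand-ins: 'animal' vs 'vehicle' superclasses with one
# fine-grained component model per superclass.
coarse = lambda x: "animal" if x["legs"] > 0 else "vehicle"
fine = {
    "animal":  lambda x: "cat" if x["legs"] == 4 else "bird",
    "vehicle": lambda x: "car" if x["wheels"] == 4 else "bike",
}
result = hierarchical_predict({"legs": 4, "wheels": 0}, coarse, fine)
```

Growing hierarchically means new classes can be absorbed by adding or extending a single component model rather than retraining one flat classifier.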
Class-incremental Learning via Deep Model Consolidation
A class-incremental learning paradigm called Deep Model Consolidation (DMC), which works well even when the original training data is not available, and demonstrates significantly better performance in image classification and object detection in the single-headed IL setting.
Continual Lifelong Learning with Neural Networks: A Review
This review critically summarizes the main challenges linked to lifelong learning for artificial learning systems and compares existing neural network approaches that alleviate, to different extents, catastrophic forgetting.
Exploring Randomly Wired Neural Networks for Image Recognition
The results suggest that new efforts focusing on designing better network generators may lead to new breakthroughs by exploring less constrained search spaces with more room for novel design.
Fine-Tuning CNN Image Retrieval with No Human Annotation
It is shown that both hard-positive and hard-negative examples, selected by exploiting the geometry and the camera positions available from the 3D models, enhance the performance of particular-object retrieval.
IL2M: Class Incremental Learning With Dual Memory
This paper presents a class-incremental learning method which exploits fine-tuning and a dual memory to reduce the negative effect of catastrophic forgetting in image recognition, and shows that the proposed approach is more effective than a range of competitive state-of-the-art methods.
Incremental Learning Using Conditional Adversarial Networks
Comparison with the state of the art on the public CIFAR-100 and CUB-200 datasets shows that the proposed incremental learning strategy achieves the best accuracies on both old and new classes while requiring relatively less memory storage.