Corpus ID: 236428455

In Defense of the Learning Without Forgetting for Task Incremental Learning

@article{Oren2021InDO,
  title={In Defense of the Learning Without Forgetting for Task Incremental Learning},
  author={Guy Oren and Lior Wolf},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.12304}
}
Catastrophic forgetting is one of the major challenges on the road to continual learning systems, which are presented with an on-line stream of tasks. The field has attracted considerable interest, and a diverse set of methods has been presented for overcoming this challenge. Learning without Forgetting (LwF) is one of the earliest and most frequently cited methods. It has the advantages of not requiring the storage of samples from the previous tasks, of implementation simplicity, and of being…
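Since the abstract describes LwF only at a high level, a minimal sketch of its training objective may help: the new task is learned with a standard cross-entropy loss, while a knowledge-distillation term keeps the outputs of the old task heads close to those of a frozen copy of the previous model, evaluated only on the current task's inputs, so no samples from earlier tasks need to be stored. The PyTorch code below is an illustrative sketch under these assumptions; the names (SharedBackboneNet, lwf_step, distillation_loss) and hyperparameters (temperature T=2, weight lam) are hypothetical and not taken from the paper.

```python
# Hedged sketch of the Learning without Forgetting (LwF) objective for
# task-incremental learning with a shared backbone and one head per task.
# All class/function names and hyperparameters are illustrative assumptions.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedBackboneNet(nn.Module):
    """Shared feature extractor with one linear classification head per task."""

    def __init__(self, in_dim: int, hidden: int, classes_per_task: list[int]):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, c) for c in classes_per_task])

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        return self.heads[task_id](self.backbone(x))


def distillation_loss(new_logits, old_logits, T: float = 2.0) -> torch.Tensor:
    """Soft-target cross-entropy between the old and new predictions."""
    old_p = F.softmax(old_logits / T, dim=1)
    new_logp = F.log_softmax(new_logits / T, dim=1)
    return -(old_p * new_logp).sum(dim=1).mean() * (T * T)


def lwf_step(model, frozen_old_model, x, y, new_task_id: int, lam: float = 1.0):
    """One LwF loss evaluation: new-task cross-entropy plus distillation on old heads.

    The distillation targets are the frozen old model's responses to the
    *current* task's inputs, so no previous-task samples are required.
    """
    loss = F.cross_entropy(model(x, new_task_id), y)
    with torch.no_grad():
        old_targets = [frozen_old_model(x, t) for t in range(new_task_id)]
    for t, target_logits in enumerate(old_targets):
        loss = loss + lam * distillation_loss(model(x, t), target_logits)
    return loss


# Usage sketch: before training on task 1, snapshot the model trained on task 0.
model = SharedBackboneNet(in_dim=32, hidden=64, classes_per_task=[5, 5])
frozen = copy.deepcopy(model).eval()
x, y = torch.randn(8, 32), torch.randint(0, 5, (8,))
loss = lwf_step(model, frozen, x, y, new_task_id=1)
loss.backward()
```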

