Continual Lifelong Learning with Neural Networks: A Review

@article{Parisi2019ContinualLL,
  title={Continual Lifelong Learning with Neural Networks: A Review},
  author={German Ignacio Parisi and Ronald Kemker and Jose L. Part and Christopher Kanan and Stefan Wermter},
  journal={Neural networks : the official journal of the International Neural Network Society},
  year={2019},
  volume={113},
  pages={54-71}
}

Citations

Lifelong Learning of Spatiotemporal Representations With Dual-Memory Recurrent Self-Organization
TLDR
The proposed dual-memory self-organizing architecture is evaluated on the CORe50 benchmark dataset for continuous object recognition, showing that it significantly outperforms current lifelong learning methods in three different incremental learning scenarios.
A Unified Framework for Lifelong Learning in Deep Neural Networks
TLDR
This paper proposes a simple yet powerful unified framework that demonstrates the desirable properties of lifelong learning, including non-forgetting, concept rehearsal, and forward and backward transfer of knowledge.
Online Continual Learning on Sequences
TLDR
This chapter summarizes and discusses recent deep learning models that address OCL on sequential input through the use (and combination) of synaptic regularization, structural plasticity, and experience replay.
Towards continual task learning in artificial neural networks: current approaches and insights from neuroscience
TLDR
The influence of network architecture on task performance and representational interference will be addressed, providing a broad critical appraisal of current approaches to continual learning, while interrogating the extent to which insight might be provided by the rich literature of learning and memory in neuroscience.
Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System
TLDR
This work proposes CLS-ER, a novel dual-memory experience replay (ER) method that maintains short-term and long-term semantic memories interacting with an episodic memory, and achieves state-of-the-art performance on standard benchmarks as well as more realistic general continual learning settings.
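For context, the two ingredients described above can be sketched compactly: semantic memories kept as exponential-moving-average copies of the working network, updated at different rates, plus a reservoir-sampled episodic buffer for replay. This is a minimal illustration, not the paper's implementation; the decay rates, buffer size, and the ema_update/ReservoirBuffer names are assumptions.

import random
import torch

@torch.no_grad()
def ema_update(semantic_model, working_model, decay=0.999):
    """Move a semantic-memory copy of the network toward the working
    model's weights. A smaller decay gives a fast (short-term) memory,
    a larger decay a slow (long-term) memory."""
    for p_sem, p_work in zip(semantic_model.parameters(),
                             working_model.parameters()):
        p_sem.mul_(decay).add_(p_work, alpha=1.0 - decay)

class ReservoirBuffer:
    """Fixed-size episodic memory filled by reservoir sampling, so every
    example seen so far has the same probability of being stored."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, x, y):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)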
Learning with Long-term Remembering: Following the Lead of Mixed Stochastic Gradient
TLDR
A novel and effective lifelong learning algorithm, called MixEd stochastic GrAdient (MEGA), which allows deep neural networks to retain performance on old tasks while learning new ones.
Enabling Continual Learning with Differentiable Hebbian Plasticity
TLDR
A Differentiable Hebbian Consolidation model is proposed, composed of a DHP Softmax layer that adds a rapid-learning plastic component to the fixed parameters of the softmax output layer, enabling learned representations to be retained over a longer timescale.
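The general recipe behind such plastic layers is to form effective weights from fixed slow weights plus a Hebbian trace scaled by a learned gain. The sketch below follows that generic differentiable-plasticity idea rather than the paper's exact DHP Softmax formulation; the layer name, the trace learning rate eta, and the detached Hebbian update are simplifying assumptions.

import torch
import torch.nn as nn

class PlasticSoftmaxLayer(nn.Module):
    """Output layer whose effective weights combine fixed (slow) weights
    with a quickly adapting Hebbian trace scaled by a learned gain."""

    def __init__(self, in_features, n_classes, eta=0.1):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(n_classes, in_features))
        self.alpha = nn.Parameter(0.01 * torch.randn(n_classes, in_features))
        self.eta = eta
        self.register_buffer("hebb", torch.zeros(n_classes, in_features))

    def forward(self, x):
        # effective weights = slow weights + learned gain * Hebbian trace
        logits = x @ (self.w + self.alpha * self.hebb).t()
        # Hebbian update: decayed trace plus the outer product of post-
        # and pre-synaptic activity (detached here for simplicity)
        with torch.no_grad():
            post = torch.softmax(logits, dim=1)
            self.hebb = ((1.0 - self.eta) * self.hebb
                         + self.eta * (post.t() @ x) / x.size(0))
        return logits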
Learn More, Forget Less: Cues from Human Brain
TLDR
NeoNet is presented, a simple yet effective method motivated by recent findings in computational neuroscience on long-term memory consolidation in humans; it relies on a pseudo-rehearsal strategy to model the brain regions associated with long-term memory consolidation.
Meta-Consolidation for Continual Learning
TLDR
Experiments on the MNIST, CIFAR-10, CIFAR-100, and Mini-ImageNet continual learning benchmarks show consistent improvement over five baselines, including a recent state-of-the-art method, corroborating the promise of MERLIN.

References

Showing 1-10 of 258 references.
Lifelong Learning of Spatiotemporal Representations With Dual-Memory Recurrent Self-Organization
TLDR
The proposed dual-memory self-organizing architecture is evaluated on the CORe50 benchmark dataset for continuous object recognition, showing that it significantly outperforms current lifelong learning methods in three different incremental learning scenarios.
Deep Generative Dual Memory Network for Continual Learning
TLDR
This model uses a dual-memory architecture that emulates the complementary learning systems (the hippocampus and the neocortex) of the human brain and maintains a consolidated long-term memory via generative replay of past experiences.
Lifelong learning of human actions with deep neural network self-organization
On the role of neurogenesis in overcoming catastrophic forgetting
TLDR
This work demonstrates that dynamically grown networks outperform static networks in incremental learning scenarios, even when bounded by the same amount of memory in both cases, reinforcing that structural plasticity constitutes effective prevention against catastrophic forgetting in non-stationary environments.
Overcoming catastrophic forgetting in neural networks
TLDR
It is shown that it is possible to overcome this limitation of connectionist models and train networks that maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
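This selective slowing is usually realized as a quadratic penalty weighted by an estimate of each weight's importance, e.g. the diagonal Fisher information. A minimal PyTorch sketch under that reading (the function names and the lam value are illustrative, not taken from the paper):

import torch
import torch.nn.functional as F

def fisher_diagonal(model, data_loader, device="cpu"):
    """Estimate the diagonal Fisher information as the average squared
    gradient of the log-likelihood over data from the previous task."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_samples = 0
    model.eval()
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.nll_loss(F.log_softmax(model(x), dim=1), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2 * x.size(0)
        n_samples += x.size(0)
    return {n: f / max(n_samples, 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Penalty added to the new-task loss; it slows learning on weights
    with high Fisher values, i.e. those important for the previous task."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

During training on a new task, the total loss would then be the new-task loss plus ewc_penalty(model, fisher, old_params).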
Critical Learning Periods in Deep Neural Networks
TLDR
It is suggested that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process, and that forgetting is critical to achieving invariance and disentanglement in representation learning.
Gradient Episodic Memory for Continual Learning
TLDR
A model for continual learning, called Gradient Episodic Memory (GEM), is proposed that alleviates forgetting while allowing beneficial transfer of knowledge to previous tasks.
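Concretely, GEM constrains each update so that the loss on examples held in the episodic memory does not increase; the original method solves a small quadratic program with one constraint per previous task. The sketch below shows only the simpler single-constraint projection popularized by its A-GEM variant, assuming gradients have already been flattened into vectors.

import torch

def project_gradient(grad_new, grad_mem):
    """If the new-task gradient conflicts with the gradient computed on
    the episodic memory (negative dot product), remove the conflicting
    component; otherwise keep the gradient unchanged."""
    dot = torch.dot(grad_new, grad_mem)
    if dot >= 0:
        return grad_new
    return grad_new - (dot / grad_mem.pow(2).sum()) * grad_mem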
Measuring Catastrophic Forgetting in Neural Networks
TLDR
New metrics and benchmarks are introduced for directly comparing five different mechanisms designed to mitigate catastrophic forgetting in neural networks: regularization, ensembling, rehearsal, dual-memory, and sparse-coding.
Continual Learning with Deep Generative Replay
TLDR
Deep Generative Replay is proposed, a novel framework with a cooperative dual-model architecture consisting of a deep generative model ("generator") and a task-solving model ("solver"); with only these two models, training data for previous tasks can easily be sampled and interleaved with data for a new task.
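The training loop implied by this description interleaves real data for the current task with pseudo-data sampled from the generator and labelled by the previous solver. A schematic sketch under those assumptions (the generator.sample call and the replay_ratio parameter are hypothetical placeholders, not an API from the paper):

import torch
import torch.nn.functional as F

def replay_training_step(solver, old_solver, generator, x_new, y_new,
                         optimizer, replay_ratio=1.0):
    """One solver update mixing current-task data with generated
    pseudo-samples that are labelled by the previous solver."""
    solver.train()
    optimizer.zero_grad()

    # loss on the current task's real data
    loss_new = F.cross_entropy(solver(x_new), y_new)

    # pseudo-rehearsal: sample "old" inputs, label them with the old solver
    n_replay = max(1, int(replay_ratio * x_new.size(0)))
    with torch.no_grad():
        x_replay = generator.sample(n_replay)          # hypothetical API
        y_replay = old_solver(x_replay).argmax(dim=1)  # pseudo-labels
    loss_old = F.cross_entropy(solver(x_replay), y_replay)

    (loss_new + loss_old).backward()
    optimizer.step()
    return loss_new.item(), loss_old.item()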
A Bio-Inspired Incremental Learning Architecture for Applied Perceptual Problems
TLDR
A biologically inspired architecture for incremental learning is presented that remains resource-efficient even for the very high data dimensionalities (>1000) typically associated with perceptual problems, and it is investigated how a new perceptual (object) class can be added to a trained architecture without retraining.