Knowledge Capture and Replay for Continual Learning

@inproceedings{Gopalakrishnan2022KnowledgeCA,
  title={Knowledge Capture and Replay for Continual Learning},
  author={Saisubramaniam Gopalakrishnan and Pranshu Ranjan Singh and Haytham M. Fayek and Savitha Ramasamy and Arulmurugan Ambikapathi},
  booktitle={2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year={2022},
  pages={337-345}
}
Deep neural networks model data for a task or a sequence of tasks, where the knowledge extracted from the data is encoded in the parameters and representations of the network. Extraction and utilization of these representations are vital when data is no longer available in the future, especially in a continual learning scenario. We introduce flashcards, which are visual representations that capture the encoded knowledge of a network as a recursive function of some predefined random image…
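As a rough illustration of the flashcard construction described in the abstract, the Python sketch below recursively feeds a predefined random image through a trained network (assumed here to be an autoencoder-style model whose output matches its input shape). The image size, number of passes, and the helper name make_flashcard are illustrative assumptions, not the paper's exact procedure.

import torch

def make_flashcard(network, image_size=(3, 32, 32), num_passes=10):
    # Start from a predefined random image and apply the trained network to
    # its own output several times; the resulting image is kept as a
    # "flashcard" that reflects the knowledge encoded in the network.
    x = torch.rand(1, *image_size)      # predefined random input
    network.eval()
    with torch.no_grad():
        for _ in range(num_passes):     # recursive application
            x = network(x)
    return x

Flashcards built this way can later be replayed in place of the original data when the network is trained on subsequent tasks.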
Continual SLAM: Beyond Lifelong Simultaneous Localization and Mapping through Continual Learning
TLDR
This work proposes CL-SLAM, which leverages a dual-network architecture to both adapt to new environments and retain knowledge of previously visited environments, extending lifelong SLAM from a single dynamically changing environment to sequential deployments in several drastically differing environments.
Robust Continual Learning through a Comprehensively Progressive Bayesian Neural Network
TLDR
The demonstrations and the performance results show that the proposed strategies for progressive BNN enable robust continual learning.

References

SHOWING 1-10 OF 42 REFERENCES
Memory Efficient Experience Replay for Streaming Learning
TLDR
It is found that full rehearsal can eliminate catastrophic forgetting in a variety of streaming learning settings, with ExStream performing well using far less memory and computation.
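For context, a minimal memory-bounded rehearsal buffer is sketched below using reservoir sampling; this is only an illustration of the streaming-replay setting, not ExStream's actual buffer maintenance, which merges stored examples into per-class prototypes.

import random

class ReplayBuffer:
    # Minimal fixed-capacity rehearsal buffer (reservoir sampling).
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        # Keep each streamed example with probability capacity / seen.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        # Draw a rehearsal mini-batch to interleave with new data.
        return random.sample(self.data, min(k, len(self.data)))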
Uncertainty-guided Continual Learning with Bayesian Neural Networks
TLDR
Uncertainty-guided Continual Bayesian Neural Networks (UCB) is proposed, where the learning rate adapts according to the uncertainty defined by the probability distribution of the weights in the network.
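A simplified reading of this idea is sketched below: per-parameter learning rates are scaled by the posterior standard deviation, so weights the model is confident about change slowly while uncertain weights adapt faster. The normalization and scaling scheme here are illustrative assumptions rather than UCB's exact rule.

import torch

def uncertainty_scaled_lrs(base_lr, weight_sigma):
    # weight_sigma: posterior standard deviation of each weight in a Bayesian
    # layer. Confident (small-sigma) weights get small learning rates;
    # uncertain weights are allowed to change more.
    sigma = weight_sigma / (weight_sigma.max() + 1e-12)  # normalize for illustration
    return base_lr * sigma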
Continual Learning with Deep Generative Replay
TLDR
Deep Generative Replay is proposed, a novel framework with a cooperative dual-model architecture consisting of a deep generative model ("generator") and a task-solving model ("solver"); with only these two models, training data for previous tasks can easily be sampled and interleaved with data for a new task.
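The replay step can be sketched as follows, assuming the previous generator exposes a latent dimension and the previous solver returns class logits; latent_dim, replay_ratio, and the function name are illustrative assumptions, and the training of the generator and solver themselves is omitted.

import torch

def generative_replay_batch(new_x, new_y, old_generator, old_solver,
                            latent_dim=100, replay_ratio=0.5):
    # Sample pseudo-examples from the previous generator, label them with the
    # previous solver, and interleave them with the incoming real batch.
    n_replay = max(1, int(replay_ratio * new_x.size(0)))
    with torch.no_grad():
        z = torch.randn(n_replay, latent_dim)
        replay_x = old_generator(z)
        replay_y = old_solver(replay_x).argmax(dim=1)
    x = torch.cat([new_x, replay_x], dim=0)
    y = torch.cat([new_y, replay_y], dim=0)
    return x, y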
Gradient Episodic Memory for Continual Learning
TLDR
A model for continual learning called Gradient Episodic Memory (GEM) is proposed that alleviates forgetting while allowing beneficial transfer of knowledge to previous tasks.
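The core constraint can be illustrated with a single-memory gradient projection, sketched below; GEM itself solves a small quadratic program with one constraint per previous task, so this one-constraint form is closer to the later A-GEM simplification.

import torch

def project_conflicting_gradient(g, g_mem):
    # g: flattened gradient on the current task.
    # g_mem: flattened gradient on the episodic memory.
    # If the update would increase the memory loss (negative inner product),
    # project g onto the closest direction that does not conflict.
    dot = torch.dot(g, g_mem)
    if dot < 0:
        g = g - (dot / torch.dot(g_mem, g_mem)) * g_mem
    return g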
Generative Continual Concept Learning
TLDR
A computational model is developed that can efficiently expand its previously learned concepts to new domains using a few labeled samples, coupling the new form of a concept to its previously learned forms in an embedding space for effective continual learning.
Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning
TLDR
DGM relies on conditional generative adversarial networks with learnable connection plasticity realized with neural masking, and a dynamic network expansion mechanism is proposed that ensures sufficient model capacity to accommodate continually incoming tasks.
Online Continual Learning with Maximally Interfered Retrieval
TLDR
This work considers a controlled sampling of memories for replay, and shows a formulation for this sampling criterion in both the generative replay and the experience replay settings, producing consistent gains in performance and greatly reduced forgetting.
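The experience-replay variant of this criterion can be sketched as below: take a virtual gradient step on the incoming batch and retrieve the stored examples whose loss increases most under that step. The learning rate, retrieval size, and function name are illustrative assumptions.

import copy
import torch
import torch.nn.functional as F

def retrieve_most_interfered(model, lr, new_x, new_y, mem_x, mem_y, k=10):
    # Virtual SGD step on the incoming batch.
    virtual = copy.deepcopy(model)
    loss = F.cross_entropy(virtual(new_x), new_y)
    grads = torch.autograd.grad(loss, list(virtual.parameters()))
    with torch.no_grad():
        for p, g in zip(virtual.parameters(), grads):
            p.sub_(lr * g)
        # Rank stored examples by how much the virtual step raises their loss.
        before = F.cross_entropy(model(mem_x), mem_y, reduction="none")
        after = F.cross_entropy(virtual(mem_x), mem_y, reduction="none")
        idx = torch.topk(after - before, min(k, mem_x.size(0))).indices
    return mem_x[idx], mem_y[idx]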
Learning without Forgetting
TLDR
This work proposes the Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities, and performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques.
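A distillation-style objective in the spirit of this method is sketched below: the new task's cross-entropy is combined with a term that keeps the old task head close to the responses recorded from the original network on the new-task images. The temperature T, weight lam, and the use of a KL term are illustrative choices rather than the paper's exact formulation.

import torch.nn.functional as F

def lwf_loss(new_task_logits, labels, old_head_logits, recorded_old_logits,
             T=2.0, lam=1.0):
    # Standard classification loss on the new task's head.
    ce = F.cross_entropy(new_task_logits, labels)
    # Distillation term: match the old head's softened outputs to the
    # responses recorded before training on the new task began.
    target = F.softmax(recorded_old_logits / T, dim=1)
    log_pred = F.log_softmax(old_head_logits / T, dim=1)
    distill = F.kl_div(log_pred, target, reduction="batchmean") * (T * T)
    return ce + lam * distill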
Compacting, Picking and Growing for Unforgetting Continual Learning
TLDR
This paper introduces an incremental learning method that is scalable to the number of sequential tasks in a continual learning process, and shows that the knowledge accumulated through learning previous tasks helps build a better model for new tasks than training models for each task independently.
Continual Lifelong Learning with Neural Networks: A Review