Encoder Based Lifelong Learning

@article{Triki2017EncoderBL,
  title={Encoder Based Lifelong Learning},
  author={A. Triki and Rahaf Aljundi and Matthew B. Blaschko and Tinne Tuytelaars},
  journal={2017 IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
  pages={1329-1337}
}

This paper introduces a new lifelong learning solution where a single model is trained for a sequence of tasks. The main challenge that vision systems face in this context is catastrophic forgetting: as they tend to adapt to the most recently seen task, they lose performance on the tasks that were learned previously. Our method aims at preserving the knowledge of the previous tasks while learning a new one by using autoencoders. For each task, an under-complete autoencoder is learned, capturing… 
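As a rough, hedged illustration of the mechanism the abstract describes, the sketch below fits an under-complete autoencoder to the shared network's features for a task and later uses the code it produces as a target while a new task is trained. The class names, dimensions, and the exact form of the constraint are assumptions for illustration (PyTorch is assumed), not the authors' implementation.

# Minimal sketch (illustrative, not the authors' code): an under-complete autoencoder
# is fit to the shared-network features of an old task; later, while a new task is
# trained, the code of the frozen old model's features serves as a target that the
# current features must still reproduce.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnderCompleteAE(nn.Module):
    def __init__(self, feat_dim=4096, code_dim=100):
        super().__init__()
        # code_dim < feat_dim makes the autoencoder under-complete
        self.encoder = nn.Linear(feat_dim, code_dim)
        self.decoder = nn.Linear(code_dim, feat_dim)

    def encode(self, feats):
        return torch.sigmoid(self.encoder(feats))

    def forward(self, feats):
        return self.decoder(self.encode(feats))

def feature_preservation_loss(ae, feats_current, feats_old_model):
    # feats_old_model: features of the same images under the frozen pre-update network.
    with torch.no_grad():
        target_code = ae.encode(feats_old_model)
    return F.mse_loss(ae.encode(feats_current), target_code)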

Citations

Adaptive Compression-based Lifelong Learning
TLDR
This work proposes a method based on Bayesian optimization to perform adaptive compression/pruning of the network, shows its effectiveness in lifelong learning, and demonstrates that the learned network compression can effectively preserve performance along sequences of tasks of varying complexity.
Continual Learning with Lifelong Vision Transformer
Continual learning methods aim at training a neural network from sequential data with streaming labels, relieving catastrophic forgetting. However, existing methods are based on and designed for…
Self-Net: Lifelong Learning via Continual Self-Modeling
TLDR
This work proposes a novel framework, Self-Net, that uses an autoencoder to learn a set of low-dimensional representations of the weights learned for different tasks, and is the first to use autoencoders to sequentially encode sets of network weights to enable continual learning.
Incremental Learning in Online Scenario
TLDR
This paper proposes an incremental learning framework that works in the challenging online learning scenario, handling both data from new classes and new observations of old classes, and demonstrates a real-life application of online food image classification built on the complete framework using the Food-101 dataset.
Autoencoders Covering Space as a Life-Long Classifier
TLDR
This work interprets autoencoders as manifolds that can be trained to contain or exclude given points from the input space, and proposes a novel method for learning an ensemble of specialized autoencoders.
Meta Continual Learning
TLDR
This paper proposes a learning-to-optimize algorithm for mitigating catastrophic forgetting: another neural network is trained to predict parameter update steps that respect the importance of parameters to the previous tasks.
Computer Vision – ACCV 2018
TLDR
This paper thoroughly analyzes the current state-of-the-art (iCaRL) method for incremental learning, concludes that the success of iCaRL is primarily due to knowledge distillation, and proposes a dynamic threshold moving algorithm that successfully removes the bias introduced by distillation.
Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning
TLDR
Inspired by the learning process of students, who usually decompose complex tasks into easier goals, an adversarial feature alignment method is proposed to avoid catastrophic forgetting; it outperforms state-of-the-art methods in both accuracy on new tasks and performance preservation on old tasks.
OvA-INN: Continual Learning with Invertible Neural Networks
TLDR
A new method, OvA-INN, is proposed that can learn one class at a time without storing any of the previous data; it outperforms state-of-the-art approaches that rely on feature learning for continual learning on the MNIST and CIFAR-100 datasets.
Learning a Unified Classifier Incrementally via Rebalancing
TLDR
This work develops a new framework for incrementally learning a unified classifier, i.e., a classifier that treats both old and new classes uniformly, and incorporates three components (cosine normalization, a less-forget constraint, and inter-class separation) to mitigate the adverse effects of the imbalance between old and new classes.
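As a hedged illustration of the cosine-normalization component mentioned above, the sketch below implements a cosine-normalized classifier head in PyTorch; the scale value and names are illustrative choices, and the less-forget constraint and inter-class separation terms are not shown.

# Hedged sketch of a cosine-normalized classifier head (illustrative only): logits are
# scaled cosine similarities, so the weight vectors of old and new classes are compared
# on an equal footing regardless of their magnitudes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, feat_dim, num_classes, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale  # learnable in some implementations; fixed here for simplicity

    def forward(self, feats):
        return self.scale * F.linear(F.normalize(feats, dim=1),
                                     F.normalize(self.weight, dim=1))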

References

Showing 1-10 of 35 references
Learning without Forgetting
TLDR
This work proposes the Learning without Forgetting method, which uses only new-task data to train the network while preserving the original capabilities, and performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques.
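A hedged sketch of the mechanism summarized above: the old network's responses on new-task images are recorded before training, and a distillation term keeps the old-task head close to them while the new-task head is trained with ordinary cross-entropy. The temperature and weighting are illustrative choices, not the paper's exact settings.

# Illustrative Learning-without-Forgetting-style loss (not the paper's code):
# cross-entropy on the new task plus a distillation term that holds the old-task
# head close to responses recorded from the network before the update.
import torch.nn.functional as F

def lwf_style_loss(new_logits, new_labels, old_head_logits, recorded_old_logits,
                   T=2.0, lam=1.0):
    ce = F.cross_entropy(new_logits, new_labels)
    distill = F.kl_div(F.log_softmax(old_head_logits / T, dim=1),
                       F.softmax(recorded_old_logits / T, dim=1),
                       reduction="batchmean") * (T * T)
    return ce + lam * distill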
Expert Gate: Lifelong Learning with a Network of Experts
TLDR
A model of lifelong learning is proposed, based on a Network of Experts with a set of gating autoencoders that learn a representation for the task at hand and, at test time, automatically forward the test sample to the relevant expert.
Overcoming catastrophic forgetting in neural networks
TLDR
It is shown that it is possible to overcome this limitation of connectionist models and train networks that can maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
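The "selectively slowing down learning" amounts to anchoring each parameter to its previous value with a strength given by an importance estimate (for example, the diagonal of the Fisher information); the minimal sketch below is illustrative, with assumed names, not the paper's code.

# Illustrative elastic-weight-consolidation-style penalty (not the paper's code):
# each parameter is pulled toward its old value in proportion to how important it
# was for the previous task.
def importance_penalty(model, old_params, importance, lam=1000.0):
    # model: a torch.nn.Module; old_params / importance: dicts keyed by parameter
    # name, captured after training on the previous task.
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty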
iCaRL: Incremental Classifier and Representation Learning
TLDR
iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail; learning the classifier and the data representation simultaneously distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures.
Multitask Learning
  • R. Caruana, Encyclopedia of Machine Learning and Data Mining, 1998
TLDR
Prior work on MTL is reviewed, new evidence is presented that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and new results for MTL with k-nearest neighbor and kernel regression are presented.
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
TLDR
DeCAF, an open-source implementation of deep convolutional activation features, along with all associated network parameters, is released to enable vision researchers to conduct experiments with deep representations across a range of visual concept learning paradigms.
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks
TLDR
It is found that it is always best to train using the dropout algorithm: dropout is consistently best at adapting to the new task and remembering the old task, and has the best tradeoff curve between these two extremes.
Distilling the Knowledge in a Neural Network
TLDR
This work shows that distilling the knowledge in an ensemble of models into a single model can significantly improve the acoustic model of a heavily used commercial system, and introduces a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse.
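For context, the sketch below shows the temperature-softened targets that distillation trains the student to match; the logits and temperature are made-up values meant only to show how a higher temperature exposes the teacher's relative confidences over the wrong classes (not the paper's code).

# Illustrative soft-target computation for knowledge distillation.
import torch
import torch.nn.functional as F

def soft_targets(teacher_logits, T=3.0):
    # A higher temperature T softens the distribution, revealing which wrong
    # classes the teacher considers plausible ("dark knowledge").
    return F.softmax(teacher_logits / T, dim=1)

logits = torch.tensor([[10.0, 5.0, 1.0]])
print(F.softmax(logits, dim=1))     # ~[0.993, 0.007, 0.000]  (nearly one-hot)
print(soft_targets(logits, T=3.0))  # ~[0.81, 0.15, 0.04]     (relative confidences preserved)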
ImageNet classification with deep convolutional neural networks
TLDR
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into the 1000 different classes; training employed a recently developed regularization method called "dropout" that proved to be very effective.
Lifelong Machine Learning Systems: Beyond Learning Algorithms
TLDR
It is proposed that it is now appropriate for the AI community to move beyond learning algorithms to more seriously consider the nature of systems that are capable of learning over a lifetime.