Corpus ID: 54443381

Efficient Lifelong Learning with A-GEM

@article{Chaudhry2019EfficientLL,
  title={Efficient Lifelong Learning with A-GEM},
  author={Arslan Chaudhry and Marc'Aurelio Ranzato and Marcus Rohrbach and Mohamed Elhoseiny},
  journal={ArXiv},
  year={2019},
  volume={abs/1812.00420}
}
In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task. [...] Key Method: Towards this end, we first introduce a new and more realistic evaluation protocol, whereby learners observe each example only once and hyper-parameter selection is done on a small and disjoint set of tasks, which is not used for the actual learning experience and evaluation.
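The excerpt above stops at the evaluation protocol; the method itself, A-GEM, replaces GEM's per-task gradient constraints with a single constraint against a reference gradient computed on a batch drawn from the episodic memories of all previous tasks, which admits a closed-form projection. Below is a minimal NumPy sketch of that projection step, for illustration only (the function name and toy vectors are not from the paper):

import numpy as np

def agem_project(g: np.ndarray, g_ref: np.ndarray) -> np.ndarray:
    """Return the A-GEM update direction.

    g     -- flattened gradient on the current task's mini-batch
    g_ref -- flattened gradient on a batch sampled from episodic memory
    """
    dot = float(g @ g_ref)
    if dot >= 0.0:
        # No interference with past tasks: keep the gradient unchanged.
        return g
    # Remove the component of g that would increase the memory loss.
    return g - (dot / float(g_ref @ g_ref)) * g_ref

# Toy usage with hypothetical 3-dimensional "gradients".
g = np.array([1.0, -2.0, 0.5])
g_ref = np.array([-1.0, 1.0, 0.0])
g_tilde = agem_project(g, g_ref)
print(g_tilde, g_tilde @ g_ref)  # projected direction; dot product with g_ref is 0

The corrected direction g_tilde then replaces g in the usual optimizer step, so the overhead over plain fine-tuning is a single extra gradient computation per update.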
Towards a robust experimental framework and benchmark for lifelong language learning
In lifelong learning, a model learns different tasks sequentially throughout its lifetime. State-of-the-art deep learning models, however, struggle to generalize in this setting and suffer from [...]
Efficient Meta Lifelong-Learning with Limited Memory
TLDR
This paper identifies three common principles of lifelong learning methods and proposes an efficient meta-lifelong framework that combines them in a synergistic fashion and alleviates both catastrophic forgetting and negative transfer at the same time.
Lifelong Generative Modeling
TLDR
This work focuses on a lifelong learning approach to unsupervised generative modeling, where the student model leverages the information learned by the teacher, which acts as a probabilistic knowledge store, and reduces the effect of catastrophic interference that appears when learning over sequences of distributions.
iTAML: An Incremental Task-Agnostic Meta-learning Approach
TLDR
A novel meta-learning approach that seeks to maintain an equilibrium between all the encountered tasks, ensured by a new meta-update rule which avoids catastrophic forgetting and is task-agnostic.
Online Coreset Selection for Rehearsal-based Continual Learning
TLDR
This work proposes Online Coreset Selection (OCS), a simple yet effective method that selects the most representative and informative coreset at each iteration and trains them in an online manner, demonstrating that it improves task adaptation and prevents catastrophic forgetting in a sample-efficient manner.
Generalising via Meta-Examples for Continual Learning in the Wild
TLDR
This work introduces FUSION (Few-shot UnSupervIsed cONtinual learning), a novel strategy which aims to deal with neural networks that "learn in the wild", simulating a real distribution and flow of unbalanced tasks.
Calibrating CNNs for Lifelong Learning
TLDR
An approach for lifelong/continual learning of convolutional neural networks (CNNs) that does not suffer from catastrophic forgetting when moving from one task to another.
Hyper-LifelongGAN: Scalable Lifelong Learning for Image Conditioned Generation
  • Mengyao Zhai, Lei Chen, Greg Mori
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
TLDR
This paper validates Hyper-LifelongGAN on diverse image-conditioned generation tasks; extensive ablation studies and comparisons with state-of-the-art models are carried out to show that the proposed approach can address catastrophic forgetting effectively.
Scalable and Order-Robust Continual Learning with Additive Parameter Decomposition
While recent continual learning methods largely alleviate the catastrophic forgetting problem on toy-sized datasets, some issues remain to be tackled to apply them to real-world problem domains. First, a [...]
DRILL: Dynamic Representations for Imbalanced Lifelong Learning
TLDR
DRILL is the first of its kind to use a self-organizing neural architecture for open-domain lifelong learning in NLP.

References

Showing 1-10 of 32 references
Expert Gate: Lifelong Learning with a Network of Experts
TLDR
A model of lifelong learning, based on a Network of Experts, with a set of gating autoencoders that learn a representation for the task at hand and, at test time, automatically forward the test sample to the relevant expert.
Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence
TLDR
This work introduces two metrics to quantify forgetting and intransigence that allow for better insights into the behaviour of incremental learning (IL) algorithms, and presents RWalk, a generalization of EWC++ and Path Integral with a theoretically grounded KL-divergence based perspective.
Using Task Features for Zero-Shot Knowledge Transfer in Lifelong Learning
TLDR
It is shown that using task descriptors improves the performance of the learned task policies, providing both theoretical justification for the benefit and empirical demonstration of the improvement across a variety of dynamical control problems.
Gradient Episodic Memory for Continual Learning
TLDR
A model for continual learning called Gradient Episodic Memory (GEM) is proposed that alleviates forgetting while allowing beneficial transfer of knowledge to previous tasks.
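For context on how this reference relates to A-GEM (the paper indexed here): at every step GEM projects the proposed gradient so that the loss on no previous task's episodic memory increases, which amounts to a quadratic program with one constraint per past task; A-GEM later relaxes this to a single constraint on an averaged memory gradient. A sketch of GEM's projection objective, following the GEM paper (notation here is illustrative):

\min_{\tilde{g}} \; \tfrac{1}{2}\,\lVert g - \tilde{g} \rVert_2^2
\quad \text{s.t.} \quad \langle \tilde{g},\, g_k \rangle \ge 0 \;\; \text{for all } k < t

where g is the gradient on the current task t and g_k is the gradient evaluated on the episodic memory stored for past task k.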
Zero-Shot Learning—A Comprehensive Evaluation of the Good, the Bad and the Ugly
TLDR
A new zero-shot learning dataset is proposed, Animals with Attributes 2 (AWA2), which is made publicly available both in terms of image features and the images themselves; a significant number of state-of-the-art methods are compared and analyzed in depth.
Progress & Compress: A scalable framework for continual learning
TLDR
The progress & compress approach is demonstrated on sequential classification of handwritten alphabets as well as two reinforcement learning domains: Atari games and 3D maze navigation.
Continual Learning with Deep Generative Replay
TLDR
Deep Generative Replay is proposed, a novel framework with a cooperative dual-model architecture consisting of a deep generative model ("generator") and a task-solving model ("solver"); with only these two models, training data for previous tasks can easily be sampled and interleaved with data for a new task.
Memory Aware Synapses: Learning what (not) to forget
TLDR
This paper argues that, given the limited model capacity and the unlimited new information to be learned, knowledge has to be preserved or erased selectively, and proposes a novel approach for lifelong learning, coined Memory Aware Synapses (MAS), which computes the importance of the parameters of a neural network in an unsupervised and online manner.
Reinforced Continual Learning
TLDR
A novel approach for continual learning is proposed, which searches for the best neural architecture for each coming task via sophisticatedly designed reinforcement learning strategies, and which outperforms existing continual learning alternatives for deep networks.
Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction
TLDR
Results using Horde on a multi-sensored mobile robot to successfully learn goal-oriented behaviors and long-term predictions from off-policy experience are presented.