Using Hindsight to Anchor Past Knowledge in Continual Learning

@article{Chaudhry2021UsingHT,
  title={Using Hindsight to Anchor Past Knowledge in Continual Learning},
  author={Arslan Chaudhry and Albert Gordo and Puneet Kumar Dokania and Philip H. S. Torr and David Lopez-Paz},
  journal={ArXiv},
  year={2021},
  volume={abs/2002.08165}
}
In continual learning, the learner faces a stream of data whose distribution changes over time. Modern neural networks are known to suffer under this setting, as they quickly forget previously acquired knowledge. To address such catastrophic forgetting, many continual learning methods implement different types of experience replay, re-learning on past data stored in a small buffer known as episodic memory. In this work, we complement experience replay with a new objective that we call anchoring, where the learner uses bilevel optimization to update its knowledge on the current task, while keeping intact the predictions on some anchor points of past tasks. These anchor points are learned using gradient-based optimization to maximize forgetting, which is approximated by fine-tuning the currently learned task on the episodic memory of past tasks.
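To make the replay-plus-anchoring objective concrete, here is a minimal PyTorch-style sketch of a single training step: the usual cross-entropy on the current batch and on a batch replayed from episodic memory, plus a penalty that keeps the model's outputs on a set of anchor points close to outputs recorded earlier. The function name, the lam weight, and the use of pre-recorded anchor targets are illustrative assumptions; the paper's actual method (HAL) uses a two-step bilevel update and learns the anchors by maximizing an estimate of forgetting, neither of which is shown here.

```python
import torch
import torch.nn.functional as F

def replay_with_anchoring_step(model, opt, cur_batch, mem_batch,
                               anchors, anchor_targets, lam=0.1):
    """One replay + anchoring update (simplified sketch, not the paper's
    exact HAL procedure).  `anchors` are inputs from past tasks;
    `anchor_targets` are the model's outputs on them, recorded when
    those tasks finished training."""
    x_cur, y_cur = cur_batch
    x_mem, y_mem = mem_batch

    opt.zero_grad()
    loss = F.cross_entropy(model(x_cur), y_cur)         # current task
    loss = loss + F.cross_entropy(model(x_mem), y_mem)  # experience replay
    # Anchoring: penalize drift of the predictions on the anchor points.
    loss = loss + lam * F.mse_loss(model(anchors), anchor_targets)
    loss.backward()
    opt.step()
    return float(loss.detach())
```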

Citations

Revisiting Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship

A meta-learning algorithm based on bi-level optimization is proposed to adaptively tune the relationship between knowledge extracted from past and new tasks; it can find an appropriate gradient direction during continual learning and avoid severe overfitting on the memory buffer.

Effects of Auxiliary Knowledge on Continual Learning

This paper proposes a new, simple CL algorithm that focuses on solving the current task in a way that might facilitate the learning of the next ones, and shows that it can outperform existing state-of-the-art models on the most common CL image-classification benchmarks.

Rethinking Experience Replay: a Bag of Tricks for Continual Learning

This work points out shortcomings that restrain Experience Replay (ER), proposes five tricks to mitigate them, and shows that naive rehearsal, so patched, achieves performance similar to current state-of-the-art rehearsal-based methods.

Learning to Prompt for Continual Learning

This work presents a new paradigm for continual learning that aims to train a more succinct memory system without accessing task identity at test time, and achieves competitive results against rehearsal-based methods even without a rehearsal buffer.

Understanding the Role of Training Regimes in Continual Learning

This work hypothesizes that the geometric properties of the local minima found for each task play an important role in the overall degree of forgetting, and studies the effect of dropout, learning-rate decay, and batch size on forming training regimes that widen the tasks' local minima and consequently help the model avoid catastrophic forgetting.

Saliency Guided Experience Packing for Replay in Continual Learning

This paper proposes a new approach to experience replay in which past experiences are selected by looking at saliency maps, which provide visual explanations for the model's decisions; the approach captures richer summaries of past experiences without any memory increase and hence performs well with a small episodic memory.

Continual Learning through Retrieval and Imagination

DRI performs significantly better than existing state-of-the-art continual learning methods and effectively alleviates catastrophic forgetting; through retrieval and imagination it reduces the loss-approximation error and improves robustness, bringing better generalizability to the network.

Mitigating Forgetting in Online Continual Learning with Neuron Calibration

A novel method is presented that attempts to mitigate catastrophic forgetting in online continual learning from a new perspective, neuron calibration; it is lightweight and applicable to general feed-forward neural network based models.

Dark Experience for General Continual Learning: a Strong, Simple Baseline

This work addresses General Continual Learning (GCL), where task boundaries blur and the domain and class distributions shift either gradually or suddenly, through Dark Experience Replay: matching the network's logits sampled throughout the optimization trajectory, thus promoting consistency with its past.
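The logit-matching idea is easy to sketch: alongside the cross-entropy on the current batch, replay buffered inputs together with the logits the network produced when they were stored, and regress the current logits toward them. A minimal sketch assuming a PyTorch model; the buffer format and the alpha weight are illustrative, not the paper's exact setup.

```python
import torch.nn.functional as F

def der_style_step(model, opt, cur_batch, buf_batch, alpha=0.5):
    """Dark-Experience-Replay-style update (sketch): cross-entropy on the
    current batch plus an MSE term that pulls today's logits on buffered
    inputs toward the logits recorded at storage time."""
    x_cur, y_cur = cur_batch
    x_buf, stored_logits = buf_batch   # (inputs, logits) sampled from the buffer

    opt.zero_grad()
    loss = F.cross_entropy(model(x_cur), y_cur)
    loss = loss + alpha * F.mse_loss(model(x_buf), stored_logits)
    loss.backward()
    opt.step()
```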

Navigating Memory Construction by Global Pseudo-Task Simulation for Continual Learning

The dynamic memory construction in ER is formulated as a combinatorial optimization problem that aims to directly minimize the global loss across all experienced tasks, and Global Pseudo-task Simulation (GPS) is proposed, which mimics future catastrophic forgetting of the current task by permutation.
...

References

Showing 1–10 of 45 references

Continual Learning with Tiny Episodic Memories

It is observed that a very simple baseline, which jointly trains on examples from the current task as well as examples stored in the memory, outperforms state-of-the-art CL approaches with and without episodic memory.
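The baseline referred to here is usually called ER: interleave each current-task batch with a batch drawn from the episodic memory and take an ordinary gradient step on both. The buffer itself is often filled with reservoir sampling, so that it approximates a uniform sample of the stream; a minimal sketch follows, with class and method names as illustrative assumptions.

```python
import random

class ReservoirMemory:
    """Fixed-size episodic memory filled by reservoir sampling, so the
    buffer holds an approximately uniform subsample of all (x, y)
    examples seen so far in the stream."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []     # stored (x, y) pairs
        self.n_seen = 0    # total stream examples observed

    def add(self, x, y):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Keep the new example with probability capacity / n_seen.
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))
```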

Experience Replay for Continual Learning

This work shows that an agent using experience replay buffers for all past events, with a mixture of on- and off-policy learning, can still learn new tasks quickly while substantially reducing catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities.

On Tiny Episodic Memories in Continual Learning

This work empirically analyzes the effectiveness of a very small episodic memory in a CL setup where each training example is seen only once, and finds that repetitive training on even tiny memories of past tasks does not harm generalization; on the contrary, it improves it.

Online Continual Learning with Maximally Interfered Retrieval

This work considers a controlled sampling of memories for replay, and shows a formulation for this sampling criterion in both the generative replay and the experience replay setting, producing consistent gains in performance and greatly reduced forgetting.

Gradient based sample selection for online continual learning

This work formulates sample selection as a constraint-reduction problem, based on the constrained-optimization view of continual learning, and shows that it is equivalent to maximizing the diversity of samples in the replay buffer, with the parameter gradients as the feature.

Gradient Episodic Memory for Continual Learning

A model for continual learning called Gradient Episodic Memory (GEM) is proposed that alleviates forgetting while allowing beneficial transfer of knowledge to previous tasks.
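GEM treats the loss on each past task's episodic memory as an inequality constraint and projects the proposed gradient so that none of those losses increase, which in general means solving a small quadratic program per step. Its averaged follow-up, A-GEM, keeps a single constraint from one memory gradient and admits the closed form sketched below; flattening the per-parameter gradients into one vector is assumed to happen elsewhere.

```python
import torch

def agem_project(grad, grad_ref):
    """A-GEM-style projection (sketch).  If the proposed update direction
    `grad` conflicts with the reference gradient `grad_ref` computed on
    the episodic memory (negative inner product), remove the conflicting
    component.  Both arguments are flattened 1-D tensors."""
    dot = torch.dot(grad, grad_ref)
    if dot >= 0:
        return grad  # no interference with the memory: keep as-is
    return grad - (dot / torch.dot(grad_ref, grad_ref)) * grad_ref
```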

Learning to Learn without Forgetting By Maximizing Transfer and Minimizing Interference

This work proposes a new conceptualization of the continual learning problem, in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples, and introduces a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization-based meta-learning.
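MER pursues gradient alignment with a Reptile-style meta-update rather than explicit inner products: run several plain SGD steps on interleaved current and memory batches, then move the weights only part of the way from their starting point toward the result. A sketch of that across-batch interpolation, with the gamma rate and the batch mixing as assumptions:

```python
import torch
import torch.nn.functional as F

def mer_style_outer_step(model, opt, batches, gamma=0.3):
    """Reptile-style across-batch update in the spirit of Meta-Experience
    Replay (sketch): inner SGD over mixed current/memory batches, then an
    interpolation toward the starting weights, which implicitly rewards
    updates whose gradients align across batches."""
    start = [p.detach().clone() for p in model.parameters()]
    for x, y in batches:               # interleaved current + memory batches
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    with torch.no_grad():
        for p, p0 in zip(model.parameters(), start):
            p.copy_(p0 + gamma * (p - p0))  # theta0 + gamma * (theta - theta0)
```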

Online continual learning with no task boundaries

This paper develops a solution that selects a fixed number of constraints used to approximate the feasible region defined by the original constraints, compares this approach against methods that rely on task boundaries to select a fixed set of examples, and shows comparable or even better results.

Selective Experience Replay for Lifelong Learning

Overall, the results show that selective experience replay, when suitable selection algorithms are employed, can prevent catastrophic forgetting and is consistently the best approach on all domains tested.

Memory Efficient Experience Replay for Streaming Learning

It is found that full rehearsal can eliminate catastrophic forgetting in a variety of streaming learning settings, with ExStream performing well while using far less memory and computation.