Corpus ID: 236772288

Sequoia: A Software Framework to Unify Continual Learning Research

@article{Normandin2021SequoiaAS,
  title={Sequoia: A Software Framework to Unify Continual Learning Research},
  author={Fabrice Normandin and Florian Golemo and Oleksiy Ostapenko and Pau Rodr{\'i}guez and Matthew Riemer and Julio Hurtado and Khimya Khetarpal and Dominic Zhao and Ryan Lindeborg and Timoth{\'e}e Lesort and Laurent Charlin and Irina Rish and Massimo Caccia},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.01005}
}
The field of Continual Learning (CL) seeks to develop algorithms that accumulate knowledge and skills over time through interaction with nonstationary environments and data distributions. Measuring progress in CL can be difficult because a plethora of evaluation procedures (settings) and algorithmic solutions (methods) have emerged, each with their own potentially disjoint set of assumptions about the CL problem. In this work, we view each setting as a set of assumptions. We then create a tree…
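The abstract's core idea, treating each CL setting as a set of assumptions arranged in a tree, can be illustrated with a small sketch. This is a hypothetical illustration, not Sequoia's actual API; the class and field names are invented here.

```python
# Hypothetical sketch (not Sequoia's actual API): CL settings as nodes in
# a tree, where each child setting adds assumptions to its parent's set.
from dataclasses import dataclass, field


@dataclass
class Setting:
    name: str
    assumptions: set = field(default_factory=set)
    parent: "Setting | None" = None

    def all_assumptions(self) -> set:
        """A setting inherits every assumption of its ancestors."""
        inherited = self.parent.all_assumptions() if self.parent else set()
        return inherited | self.assumptions


# A method developed for a general setting also applies to its descendants,
# since descendants only *add* assumptions.
root = Setting("continual_learning", {"nonstationary_data"})
task_incremental = Setting("task_incremental", {"task_boundaries_known"}, parent=root)
print(task_incremental.all_assumptions())
# → {'nonstationary_data', 'task_boundaries_known'}
```

Under this view, the more assumptions a setting makes, the easier it is, and any method evaluated at an ancestor node transfers downward for free.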

Citations

CORA: Benchmarks, Baselines, and Metrics as a Platform for Continual Reinforcement Learning Agents
This work presents CORA, a platform for Continual Reinforcement Learning Agents that provides benchmarks, baselines, and metrics in a single code package, in the hope that community contributions will accelerate the development of new continual RL algorithms.
Continual Learning via Local Module Composition
Modularity is a compelling solution to continual learning (CL), the problem of modeling sequences of related tasks. Learning and then composing modules to solve different tasks provides an…

References

Showing 1-10 of 62 references
GDumb: A Simple Approach that Questions Our Progress in Continual Learning
We discuss a general formulation for the Continual Learning (CL) problem for classification, a learning task where a stream provides samples to a learner and the goal of the learner, depending on the…
Experience Replay for Continual Learning
This work shows that using experience replay buffers for all past events, with a mixture of on- and off-policy learning, can learn new tasks quickly while substantially reducing catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities.
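The replay mechanism this reference describes can be sketched in a few lines. This is a minimal generic sketch of an experience replay buffer, not the paper's exact implementation; the class name and capacity are illustrative.

```python
# Minimal generic sketch of an experience replay buffer: store past
# transitions and mix samples of them into each update alongside new
# data, which reduces catastrophic forgetting of earlier tasks.
import random
from collections import deque


class ReplayBuffer:
    def __init__(self, capacity: int):
        # A bounded deque evicts the oldest transitions once full.
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size: int):
        # Uniform sampling over all stored past experience.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))


buf = ReplayBuffer(capacity=3)
for t in range(5):
    buf.add(("state", t))
# With capacity 3, only the three most recent transitions remain.
```

In continual RL, each gradient update would typically combine a batch of fresh transitions with a batch drawn from such a buffer.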
Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges
This paper reviews the existing state of the art of continual learning, summarizes existing benchmarks and metrics, and proposes a framework for presenting and evaluating both robotics and non-robotics approaches in a way that eases transfer between the two fields.
Three scenarios for continual learning
Three continual learning scenarios are described, based on whether task identity is provided at test time and, if it is not, whether it must be inferred. The authors find that regularization-based approaches fail in the hardest scenario and that replaying representations of previous experiences seems required to solve it.
Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning
This work defines a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultaneously and then generalize its knowledge to new domains, using Atari games as a testing environment.
Continuum: Simple Management of Complex Continual Learning Scenarios
This work proposes a simple and efficient framework with numerous data loaders that spares researchers from designing their own data loaders and eliminates time-consuming errors; it is easily extendable with novel settings for specific needs.
Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning
An empirical study shows that Continual-MAML, an online extension of the popular MAML algorithm, is better suited to the new scenario than existing methodologies, including standard continual learning and meta-learning approaches.
DisCoRL: Continual Reinforcement Learning via Policy Distillation
This paper proposes DisCoRL, an approach combining state representation learning and policy distillation that can solve all tasks and automatically infer which one to run, and tests its robustness by transferring the final policy into a real-life setting.
Gradient Episodic Memory for Continual Learning
A model for continual learning called Gradient Episodic Memory (GEM) is proposed that alleviates forgetting while allowing beneficial transfer of knowledge to previous tasks.
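The core idea behind GEM-style methods can be sketched in its simplest single-constraint form (closer to the later A-GEM variant than to GEM's full quadratic program): project the current gradient so it does not conflict with a gradient computed on stored past-task examples. The vectors and function below are illustrative, not the paper's code.

```python
# Hedged sketch of gradient projection against episodic memory,
# in the single-constraint form. Pure-Python vectors for illustration.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def project_gradient(g, g_ref):
    """If g would increase loss on past tasks (g . g_ref < 0),
    remove the conflicting component; otherwise keep g unchanged."""
    conflict = dot(g, g_ref)
    if conflict >= 0:
        return list(g)
    scale = conflict / dot(g_ref, g_ref)
    return [gi - scale * ri for gi, ri in zip(g, g_ref)]


g = [1.0, -1.0]      # gradient on the current task
g_ref = [0.0, 1.0]   # gradient on stored past-task examples
print(project_gradient(g, g_ref))  # → [1.0, 0.0]
```

After projection the update is orthogonal to the memory gradient, so it no longer moves against the past tasks while preserving as much of the current-task direction as possible.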
Transfer in Deep Reinforcement Learning Using Successor Features and Generalised Policy Improvement
This paper shows that the transfer promoted by SFs & GPI leads to very good policies on unseen tasks almost instantaneously, and describes how to learn policies specialised to the new tasks so that they can be added to the agent's set of skills and reused in the future.