Avalanche: an End-to-End Library for Continual Learning

@inproceedings{lomonaco2021avalanche,
  title={Avalanche: an End-to-End Library for Continual Learning},
  author={Vincenzo Lomonaco and Lorenzo Pellegrini and Andrea Cossu and Antonio Carta and Gabriele Graffieti and Tyler L. Hayes and Matthias De Lange and Marc Masana and Jary Pomponi and Gido M. van de Ven and Martin Mundt and Qi She and Keiland W. Cooper and Jeremy Forest and Eden Belouadah and Simone Calderara and German Ignacio Parisi and Fabio Cuzzolin and Andreas Savas Tolias and Simone Scardapane and Luca Antiga and Subutai Ahmad and Adrian Daniel Popescu and Christopher Kanan and Joost van de Weijer and Tinne Tuytelaars and Davide Bacciu and Davide Maltoni},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2021}
}
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning. Recently, we have witnessed a renewed and fast-growing interest in continual learning, especially within the deep learning community. However, algorithmic solutions are often difficult to re-implement, evaluate and port across different settings, where even results on standard benchmarks are hard to reproduce. In this work, we propose Avalanche, an open-source end-to-end… 


Sample Condensation in Online Continual Learning

This paper proposes OLCGM, a novel replay-based continual learning strategy that uses knowledge condensation techniques to continuously compress the memory and achieve a better use of its limited size.

Ex-Model: Continual Learning from a Stream of Trained Models

This paper introduces and formalizes a new paradigm named "Ex-Model Continual Learning" (ExML), where an agent learns from a sequence of previously trained models instead of raw data, and contributes three ex-model continual learning algorithms together with an empirical setting comprising three datasets.

CLIP model is an Efficient Continual Learner

This work shows that a frozen CLIP (Contrastive Language-Image Pretraining) model offers astounding continual learning performance through zero-shot evaluation, without any fine-tuning, and advocates the use of this strong yet embarrassingly simple baseline for future comparisons in continual learning tasks.

CL-Gym: Full-Featured PyTorch Library for Continual Learning

CL-Gym is introduced, a full-featured continual learning library that accelerates the research and development of state-of-the-art CL algorithms.

A Procedural World Generation Framework for Systematic Evaluation of Continual Learning

A modular parametric generative model with adaptable generative factors can be used to flexibly compose data streams, which significantly facilitates a detailed analysis and allows for effortless investigation of various continual learning schemes.

ModelCI-e: Enabling Continual Learning in Deep Learning Serving Systems

Preliminary results demonstrate the usability of ModelCI-e, and indicate that eliminating the interference between model updating and inference workloads is crucial for higher system efficiency.

Avalanche RL: a Continual Reinforcement Learning Library

This paper describes Avalanche RL, a library for Continual Reinforcement Learning that lets users easily train agents on a continuous stream of tasks, and proposes Continual Habitat-Lab, a novel benchmark and high-level library that enables the use of the photorealistic simulator Habitat-Sim for CRL research.

Sparsity and Heterogeneous Dropout for Continual Learning in the Null Space of Neural Activations

This paper proposes two biologically-inspired mechanisms based on sparsity and heterogeneous dropout that significantly increase a continual learner’s performance over a long sequence of tasks.

Continual evaluation for lifelong learning: Identifying the stability gap

A framework for continual evaluation is proposed that establishes per-iteration evaluation and enables identifying the worst-case performance of the learner over its lifetime; empirically, replay is shown to suffer from a stability gap.
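The per-iteration, worst-case evaluation idea can be sketched as a running minimum over an accuracy curve (the numbers below are hypothetical, and this is only an illustration of the metric, not the authors' framework):

```python
def worst_case_so_far(per_iteration_acc):
    """Continual evaluation sketch: track the running minimum of a
    per-iteration accuracy curve. Transient drops (the 'stability gap')
    show up here but are invisible to end-of-task evaluation."""
    worst, m = [], float("inf")
    for acc in per_iteration_acc:
        m = min(m, acc)
        worst.append(m)
    return worst

# hypothetical accuracy on task 1 while task 2 is being learned:
# a transient dip followed by recovery
curve = [0.95, 0.94, 0.60, 0.75, 0.90, 0.92]
print(worst_case_so_far(curve))  # [0.95, 0.94, 0.6, 0.6, 0.6, 0.6]
print(min(curve), curve[-1])     # worst-case 0.6 vs. final 0.92
```

End-of-task evaluation would only see the final 0.92, whereas the running minimum exposes the dip to 0.60 mid-stream.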



Continual Learning for Recurrent Neural Networks: a Review and Empirical Evaluation

This paper organizes the literature on CL for sequential data processing by providing a categorization of the contributions and a review of the benchmarks, and proposes two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications.

Task-Free Continual Learning

This work investigates how to transform continual learning to an online setup, and develops a system that keeps on learning over time in a streaming fashion, with data distributions gradually changing and without the notion of separate tasks.

Progress & Compress: A scalable framework for continual learning

The progress & compress approach is demonstrated on sequential classification of handwritten alphabets as well as two reinforcement learning domains: Atari games and 3D maze navigation.

Latent Replay for Real-Time Continual Learning

This paper introduces an original technique named Latent Replay: instead of storing a portion of past data in the input space, activation volumes at some intermediate layer are stored, which can significantly reduce the computation and storage required by native rehearsal.
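The buffer discipline behind latent replay can be sketched minimally: freeze the lower layers, store their (smaller) activations rather than raw inputs, and mix stored latents into each training batch for the trainable head. The toy encoder below is a hypothetical stand-in, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen lower layers: a fixed random projection + ReLU.
W_frozen = rng.standard_normal((8, 4))

def encode(x):
    """Forward pass through the frozen lower layers."""
    return np.maximum(x @ W_frozen, 0.0)

latent_buffer = []  # stores activation volumes, not raw inputs

def observe_batch(x_batch):
    """Encode new data once; keep only the latents for future rehearsal."""
    z = encode(x_batch)
    latent_buffer.extend(z)
    return z

def head_training_batch(z_current, replay_size=2):
    """Mix current latents with replayed latents for the trainable head."""
    idx = rng.choice(len(latent_buffer),
                     size=min(replay_size, len(latent_buffer)),
                     replace=False)
    z_replay = np.array([latent_buffer[i] for i in idx])
    return np.concatenate([z_current, z_replay], axis=0)

z = observe_batch(rng.standard_normal((3, 8)))
batch = head_training_batch(z)
print(batch.shape)  # (5, 4): 3 current + 2 replayed latent vectors
```

Because the frozen layers are never updated, the stored latents stay valid, and only the 4-dimensional activations (not the 8-dimensional inputs) need to be kept.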

Continuum: Simple Management of Complex Continual Learning Scenarios

This work proposes a simple and efficient framework with numerous data loaders that spares researchers from spending time designing data loaders, eliminates time-consuming errors, and is easily extendable to add novel settings for specific needs.

GDumb: A Simple Approach that Questions Our Progress in Continual Learning

We discuss a general formulation for the Continual Learning (CL) problem for classification—a learning task where a stream provides samples to a learner and the goal of the learner, depending on the…
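GDumb's central mechanism, a greedy class-balanced memory, is simple enough to sketch: accept an incoming sample only if its class is underrepresented, evicting a random sample from the currently largest class; at test time the model is trained from scratch on the memory alone. The class below is a sketch in the spirit of that sampler, not the authors' code:

```python
import random
from collections import defaultdict

class GreedyBalancedMemory:
    """Greedy class-balanced memory (a sketch, not GDumb's code):
    keep at most `budget` samples, accepting a new sample only when
    its class is underrepresented."""

    def __init__(self, budget, seed=0):
        self.budget = budget
        self.per_class = defaultdict(list)
        self.rng = random.Random(seed)

    def __len__(self):
        return sum(len(v) for v in self.per_class.values())

    def observe(self, x, y):
        if len(self) < self.budget:
            self.per_class[y].append(x)
            return True
        largest = max(self.per_class, key=lambda c: len(self.per_class[c]))
        if len(self.per_class[y]) >= len(self.per_class[largest]):
            return False  # class already at parity: reject the sample
        # evict a random sample from the largest class to make room
        victims = self.per_class[largest]
        victims.pop(self.rng.randrange(len(victims)))
        self.per_class[y].append(x)
        return True

mem = GreedyBalancedMemory(budget=6)
for i in range(100):
    mem.observe(i, i % 2)      # alternating classes 0 and 1
for i in range(100, 150):
    mem.observe(i, 2)          # class 2 arrives later in the stream
sizes = {c: len(v) for c, v in mem.per_class.items()}
print(len(mem), sizes)  # 6 {0: 2, 1: 2, 2: 2}
```

The memory never exceeds its budget, and a class that appears late in the stream still ends up with its fair share of slots.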

Online Continual Learning on Sequences

This chapter summarizes and discusses recent deep learning models that address OCL on sequential input through the use (and combination) of synaptic regularization, structural plasticity, and experience replay.

Efficient Lifelong Learning with A-GEM

An improved version of GEM is proposed, dubbed Averaged GEM (A-GEM), which matches or even exceeds the performance of GEM while being almost as computationally and memory efficient as EWC and other regularization-based methods.
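A-GEM's core update is a single gradient projection: if the proposed gradient g conflicts with the average gradient g_ref computed on a batch from episodic memory (i.e., g · g_ref < 0), the conflicting component is removed so the reference loss does not increase. A minimal NumPy sketch of that projection (not the authors' code):

```python
import numpy as np

def agem_project(g, g_ref):
    """A-GEM projection on flattened gradients:
        g' = g - (g . g_ref / g_ref . g_ref) * g_ref   when g . g_ref < 0
    otherwise g is used unchanged."""
    dot = g @ g_ref
    if dot >= 0.0:
        return g  # no interference with past tasks
    return g - (dot / (g_ref @ g_ref)) * g_ref

g = np.array([1.0, -1.0])
g_ref = np.array([0.0, 1.0])  # memory gradient says "increase w[1]"
g_proj = agem_project(g, g_ref)
print(g_proj)           # [1. 0.]: conflicting component removed
print(g_proj @ g_ref)   # 0.0: projected gradient no longer conflicts
```

This single inner-product test against one averaged reference gradient is what makes A-GEM much cheaper than GEM, which must satisfy a constraint per past task.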