Avalanche: an End-to-End Library for Continual Learning

@inproceedings{Lomonaco2021Avalanche,
  title={Avalanche: an End-to-End Library for Continual Learning},
  author={Vincenzo Lomonaco and Lorenzo Pellegrini and Andrea Cossu and Antonio Carta and Gabriele Graffieti and Tyler L. Hayes and Matthias De Lange and Marc Masana and Jary Pomponi and Gido M. van de Ven and Martin Mundt and Qi She and Keiland W. Cooper and Jeremy Forest and Eden Belouadah and Simone Calderara and German Ignacio Parisi and Fabio Cuzzolin and Andreas Savas Tolias and Simone Scardapane and Luca Antiga and Subutai Ahmad and Adrian Daniel Popescu and Christopher Kanan and Joost van de Weijer and Tinne Tuytelaars and Davide Bacciu and Davide Maltoni},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2021}
}
Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning. Recently, we have witnessed a renewed and fast-growing interest in continual learning, especially within the deep learning community. However, algorithmic solutions are often difficult to re-implement, evaluate and port across different settings, where even results on standard benchmarks are hard to reproduce. In this work, we propose Avalanche, an open-source end-to-end… 
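As a concrete picture of the setting the abstract describes, the snippet below is a framework-free Python sketch of the standard continual-learning loop: train on a stream of experiences one at a time, then evaluate on everything seen so far. All names (`Experience`, `Strategy`, `train_on`, `accuracy_on`) are illustrative stand-ins, not Avalanche's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Experience:
    """One step of a non-stationary stream: a task id plus its training data."""
    task_id: int
    data: list  # (x, y) pairs

@dataclass
class Strategy:
    """A trivial 'learner' that just memorizes which labels it has seen."""
    seen_labels: set = field(default_factory=set)

    def train_on(self, exp: Experience) -> None:
        self.seen_labels.update(y for _, y in exp.data)

    def accuracy_on(self, exp: Experience) -> float:
        # Counts a sample as correct iff its label was seen during training.
        hits = sum(1 for _, y in exp.data if y in self.seen_labels)
        return hits / len(exp.data)

stream = [
    Experience(0, [(0.1, "cat"), (0.2, "dog")]),
    Experience(1, [(0.3, "bird"), (0.4, "fish")]),
]
strategy = Strategy()
results = []
for i, exp in enumerate(stream):
    strategy.train_on(exp)
    # After each experience, evaluate on every experience seen so far.
    results.append([strategy.accuracy_on(e) for e in stream[: i + 1]])
```

Real strategies replace the toy `Strategy` with a neural network plus an anti-forgetting mechanism; the train-then-evaluate-on-all-seen-data loop is the part libraries like Avalanche standardize.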

Sample Condensation in Online Continual Learning

OLCGM, a novel replay-based continual learning strategy that uses knowledge condensation techniques to continuously compress the memory and achieve a better use of its limited size, is proposed.

Ex-Model: Continual Learning from a Stream of Trained Models

This paper introduces and formalizes a new paradigm named "Ex-Model Continual Learning" (ExML), where an agent learns from a sequence of previously trained models instead of raw data, and contributes three ex-model continual learning algorithms and an empirical setting comprising three datasets.

CL-Gym: Full-Featured PyTorch Library for Continual Learning

CL-Gym is introduced, a full-featured continual learning library that overcomes this challenge and accelerates the research and development of state-of-the-art CL algorithms.

A Procedural World Generation Framework for Systematic Evaluation of Continual Learning

A modular parametric generative model with adaptable generative factors can be used to flexibly compose data streams, which significantly facilitates a detailed analysis and allows for effortless investigation of various continual learning schemes.

ModelCI-e: Enabling Continual Learning in Deep Learning Serving Systems

Preliminary results demonstrate the usability of ModelCI-e, and indicate that eliminating the interference between model updating and inference workloads is crucial for higher system efficiency.

Avalanche RL: a Continual Reinforcement Learning Library

Avalanche RL is described, a library for Continual Reinforcement Learning which allows users to easily train agents on a continuous stream of tasks, and Continual Habitat-Lab is proposed, a novel benchmark and high-level library which enables the use of the photorealistic simulator Habitat-Sim for CRL research.

Sparsity and Heterogeneous Dropout for Continual Learning in the Null Space of Neural Activations

This paper proposes two biologically-inspired mechanisms based on sparsity and heterogeneous dropout that significantly increase a continual learner’s performance over a long sequence of tasks.

Continual evaluation for lifelong learning: Identifying the stability gap

A framework for continual evaluation is proposed that establishes per-iteration evaluation and enables identifying the worst case performance of the learner over its lifetime, and empirically identifies that replay suffers from a stability gap.

Beyond Supervised Continual Learning: a Review

Works that study CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning, are reviewed, together with a simple schema for classifying CL approaches w.r.t. their level of autonomy and supervision.

Practical Recommendations for Replay-based Continual Learning Methods

The aim of this work is to compare and analyze existing replay-based strategies and to provide practical recommendations on developing efficient, effective, and generally applicable replay-based strategies, as well as on the impact of data augmentation, which allows reaching better performance with lower memory sizes.
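One widely used, generally applicable replay mechanism of the kind this work analyzes is reservoir sampling, which keeps a fixed-size memory that is an unbiased sample of the whole stream. A minimal sketch (illustrative, not the paper's code):

```python
import random

class ReservoirBuffer:
    """Fixed-size replay memory filled by reservoir sampling, so every sample
    seen so far has equal probability of residing in the buffer."""

    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.items = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, item) -> None:
        self.n_seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Keep the new item with probability capacity / n_seen,
            # overwriting a uniformly chosen slot.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k: int) -> list:
        """Draw a rehearsal mini-batch from the memory."""
        return self.rng.sample(self.items, min(k, len(self.items)))

buf = ReservoirBuffer(capacity=10)
for x in range(1000):  # stream of 1000 samples, only 10 slots of memory
    buf.add(x)
```

During training, a rehearsal mini-batch from `buf.sample(...)` is mixed into each batch of new data; the buffer's contents stay representative of the full stream without ever knowing its length in advance.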

Continual Learning for Recurrent Neural Networks: a Review and Empirical Evaluation

This paper organizes the literature on CL for sequential data processing by providing a categorization of the contributions and a review of the benchmarks, and proposes two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications.

Task-Free Continual Learning

This work investigates how to transform continual learning to an online setup, and develops a system that keeps on learning over time in a streaming fashion, with data distributions gradually changing and without the notion of separate tasks.

Progress & Compress: A scalable framework for continual learning

The progress & compress approach is demonstrated on sequential classification of handwritten alphabets as well as two reinforcement learning domains: Atari games and 3D maze navigation.

Latent Replay for Real-Time Continual Learning

This paper introduces an original technique named Latent Replay where, instead of storing a portion of past data in the input space, it is proposed to store activation volumes at some intermediate layer, which can significantly reduce the computation and storage required by native rehearsal.
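A toy illustration of the latent-replay idea (illustrative stand-ins, not the paper's code): a frozen feature extractor maps raw inputs to small latent activations, the rehearsal buffer stores those latents, and replay feeds them straight into the trainable upper layers, skipping the extractor's cost.

```python
def frozen_extractor(x: float) -> list:
    """Stand-in for the frozen lower layers of a network: maps a raw input
    to a (smaller) latent activation that is stored instead of x itself."""
    return [x * 0.5, x * 0.25]

def upper_layers(latent: list) -> float:
    """Stand-in for the trainable upper layers; rehearsal feeds stored
    latents here directly, never re-running the frozen extractor."""
    return sum(latent)

raw_stream = [1.0, 2.0, 3.0, 4.0]
latent_buffer = [frozen_extractor(x) for x in raw_stream]  # store activations
replayed = [upper_layers(z) for z in latent_buffer]        # cheap rehearsal
```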

GDumb: A Simple Approach that Questions Our Progress in Continual Learning

We discuss a general formulation for the Continual Learning (CL) problem for classification—a learning task where a stream provides samples to a learner and the goal of the learner, depending on the
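GDumb's baseline pairs a greedy class-balanced memory with a learner trained from scratch on that memory at test time. A sketch of the balanced sampler (illustrative names, plain Python): admit every sample until the buffer is full, then accept a new sample only if its class is under-represented, evicting from the largest class.

```python
import random
from collections import defaultdict

def gdumb_add(memory, capacity, item, label, rng):
    """Greedy class-balanced sampling in the spirit of GDumb's memory policy:
    keep the buffer balanced across classes; once full, admit a sample only
    if its class is under-represented, evicting from the largest class."""
    size = sum(len(v) for v in memory.values())
    if size < capacity:
        memory[label].append(item)
        return
    largest = max(memory, key=lambda c: len(memory[c]))
    if len(memory[label]) < len(memory[largest]):
        memory[largest].pop(rng.randrange(len(memory[largest])))
        memory[label].append(item)

rng = random.Random(0)
memory = defaultdict(list)
for i in range(10):
    gdumb_add(memory, 4, f"a{i}", "a", rng)  # class "a" arrives first
for i in range(10):
    gdumb_add(memory, 4, f"b{i}", "b", rng)  # then class "b"
# The buffer ends up balanced: two samples per class.
```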

Online Continual Learning on Sequences

This chapter summarizes and discusses recent deep learning models that address OCL on sequential input through the use (and combination) of synaptic regularization, structural plasticity, and experience replay.

Efficient Lifelong Learning with A-GEM

An improved version of GEM is proposed, dubbed Averaged GEM (A-GEM), which enjoys the same or even better performance as GEM, while being almost as computationally and memory efficient as EWC and other regularization-based methods.
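A-GEM's core step is a single gradient correction: if the proposed gradient g conflicts with a reference gradient g_ref computed on a replay batch (negative inner product), project g onto the half-space where the update no longer increases the loss on past data. A sketch using plain lists for gradients (illustrative, not the authors' code):

```python
def agem_project(g, g_ref):
    """A-GEM gradient correction: return g unchanged when it agrees with
    g_ref; otherwise subtract its component along g_ref so that the
    corrected gradient satisfies <g', g_ref> >= 0."""
    dot = sum(a * b for a, b in zip(g, g_ref))
    if dot >= 0:
        return list(g)  # no interference with past data: keep g as-is
    ref_sq = sum(b * b for b in g_ref)
    return [a - (dot / ref_sq) * b for a, b in zip(g, g_ref)]

kept = agem_project([1.0, -1.0], [1.0, 0.0])   # no conflict: unchanged
fixed = agem_project([-1.0, 1.0], [1.0, 0.0])  # conflict: projected
```

Because only this one inner product and projection are needed per step, A-GEM avoids the per-task quadratic program that makes the original GEM expensive.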

Three scenarios for continual learning

Three continual learning scenarios are described based on whether at test time task identity is provided and--in case it is not--whether it must be inferred, and it is found that regularization-based approaches fail and that replaying representations of previous experiences seems required for solving this scenario.
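The practical difference between the scenarios comes down to what the model may condition on at test time. A toy sketch (scenario names from the paper, logic illustrative): with task identity provided (Task-IL) the prediction is restricted to that task's classes; without it (Class-IL) the model must choose over all classes ever seen. Domain-IL, not shown, keeps one shared output space across tasks.

```python
# Per-task class scores, as a multi-head model might produce them.
logits = {0: {"cat": 2.0, "dog": 1.0},   # task 0's classes
          1: {"car": 3.0, "bus": 0.5}}   # task 1's classes

def predict(logits, task_id=None):
    if task_id is not None:
        scores = logits[task_id]  # Task-IL: restrict to the given task's head
    else:
        # Class-IL: task unknown, compete over every class ever seen.
        scores = {c: s for t in logits for c, s in logits[t].items()}
    return max(scores, key=scores.get)

task_il = predict(logits, task_id=0)  # decision space: {cat, dog}
class_il = predict(logits)            # decision space: all four classes
```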

OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning

A new lifelong robotic vision dataset ("OpenLORIS-Object") collected via RGB-D cameras is provided and the results demonstrate that the object recognition task in the ever-changing difficulty environments is far from being solved and the bottlenecks are at the forward/backward transfer designs.

The Computational Limits of Deep Learning

It is shown that progress in all five prominent application areas is strongly reliant on increases in computing power, and that progress along current lines is rapidly becoming economically, technically, and environmentally unsustainable.