Continual Learning with Echo State Networks

@article{Cossu2021ContinualLW,
  title={Continual Learning with Echo State Networks},
  author={Andrea Cossu and Davide Bacciu and Antonio Carta and Claudio Gallicchio and Vincenzo Lomonaco},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.07674}
}
Continual Learning (CL) refers to a learning setup where data is non-stationary and the model has to learn without forgetting existing knowledge. The study of CL for sequential patterns revolves around trained recurrent networks. In this work, instead, we introduce CL in the context of Echo State Networks (ESNs), where the recurrent component is kept fixed. We provide the first evaluation of catastrophic forgetting in ESNs and we highlight the benefits of using CL strategies which are not…
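To make the setting concrete, here is a minimal illustrative sketch (not the authors' code; the toy dimensions, spectral radius, and SGD readout training are assumptions) of an ESN whose reservoir stays fixed while only the linear readout is updated as tasks arrive, which is where CL strategies and catastrophic forgetting come into play.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir; the recurrent weights are rescaled to a
# spectral radius below 1 and are never trained.
n_in, n_res, n_out = 3, 100, 2                      # toy dimensions (assumed)
W_in = rng.uniform(-0.1, 0.1, size=(n_res, n_in))
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / np.abs(np.linalg.eigvals(W_res)).max()

def last_reservoir_state(seq):
    """Run the untrained reservoir over one sequence, return the final state."""
    h = np.zeros(n_res)
    for x in seq:
        h = np.tanh(W_in @ x + W_res @ h)
    return h

# Only this readout is trained; CL strategies act on these weights alone.
W_out = np.zeros((n_out, n_res))

def train_readout(sequences, labels, lr=0.01, epochs=5):
    """Plain SGD on the linear readout for one task (no CL strategy)."""
    global W_out
    for _ in range(epochs):
        for seq, y in zip(sequences, labels):
            h = last_reservoir_state(seq)
            err = W_out @ h - np.eye(n_out)[y]       # squared-error gradient
            W_out -= lr * np.outer(err, h)

# Two toy tasks seen one after the other: training on task 2 can overwrite
# what the readout learned on task 1 (catastrophic forgetting).
task1 = ([rng.normal(size=(20, n_in)) + 1.0 for _ in range(50)], [0] * 50)
task2 = ([rng.normal(size=(20, n_in)) - 1.0 for _ in range(50)], [1] * 50)
for sequences, labels in (task1, task2):
    train_readout(sequences, labels)
```

Because the reservoir is never updated, evaluating or regularizing only the readout is what makes CL strategies that are impractical for fully trained recurrent models applicable here.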


Continual Learning of Dynamical Systems with Competitive Federated Reservoir Computing

This work proposes an approach to continual learning based on reservoir computing, a state-of-the-art method for training recurrent neural networks on complex spatiotemporal dynamical systems, and trains multiple competitive prediction heads concurrently, inspired by predictive coding in neuroscience.

Efficient Fake News Detection using Bagging Ensembles of Bidirectional Echo State Networks

Experiments reveal that the proposed approach obtains detection statistics competitive with shallow learning and avant-garde Deep Learning models, but at dramatically lower computational complexity in the training phase.

Continual Learning for Human State Monitoring

Results show that, possibly due to the domain-incremental properties of the benchmarks, forgetting can be easily tackled even with simple fine-tuning, and that existing strategies struggle to accumulate knowledge over a held-out test subject.

Continual Sequence Modeling With Predictive Coding

This study focuses on learning algorithms with low memory requirements that do not need to store past information to update their parameters, and suggests combining these two mechanisms into a new proposed model, labeled PC-Conceptors, which outperforms the other methods presented in the study.

TEACHING - Trustworthy autonomous cyber-physical applications through human-centred intelligence

The paper discusses the main concepts of the TEACHING approach, singles out the main AI-related research challenges associated with it, and discusses the design choices for the TEACHING system to tackle these challenges.

References


Lifelong Machine Learning with Deep Streaming Linear Discriminant Analysis

By combining streaming linear discriminant analysis with deep learning, this work is able to outperform both incremental batch learning and streaming learning algorithms on both ImageNet ILSVRC-2012 and CORe50, a dataset that involves learning to classify from temporally ordered samples.

Continual Learning for Recurrent Neural Networks: an Empirical Evaluation

Overcoming catastrophic forgetting (2017)

Avalanche: an End-to-End Library for Continual Learning

Avalanche, the proposed open-source end-to-end library for continual learning research based on PyTorch, is designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
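As a rough usage sketch only (class and module names follow recent Avalanche releases and may differ across versions; the benchmark, model, and hyperparameters are placeholders, not the experiments of any paper listed here), a strategy wraps a PyTorch model and is trained over a stream of experiences:

```python
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training import Naive   # e.g. EWC or Replay could be swapped in

# MNIST split into 5 experiences presented sequentially.
benchmark = SplitMNIST(n_experiences=5)

model = SimpleMLP(num_classes=10)      # MNIST has 10 classes
strategy = Naive(
    model,
    torch.optim.SGD(model.parameters(), lr=0.01),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=32,
    train_epochs=1,
    eval_mb_size=128,
)

# Train on each experience in order, evaluating on the full test stream.
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```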

Organizing recurrent network dynamics by task-computation to enable continual learning

A novel learning rule is developed designed to minimize interference between sequentially learned tasks in recurrent networks and it is shown that networks trained using this approach can reuse similar dynamical structures across similar tasks.

GDumb: A Simple Approach that Questions Our Progress in Continual Learning

We discuss a general formulation for the Continual Learning (CL) problem for classification: a learning task where a stream provides samples to a learner and the goal of the learner, depending on the…

Continual Learning with Gated Incremental Memories for sequential data processing

This work proposes a Recurrent Neural Network (RNN) model for CL that is able to deal with concept drift in the input distribution without forgetting previously acquired knowledge, and implements and tests a popular CL approach, Elastic Weight Consolidation (EWC), on top of two different types of RNNs.
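For reference, EWC adds a quadratic penalty that anchors the weights important for earlier tasks; a minimal generic PyTorch sketch of that penalty (not this paper's implementation; the Fisher estimates and weight snapshots are assumed to be stored after each task) looks like this:

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1.0):
    """(lam / 2) * sum_i F_i * (theta_i - theta_i*)^2 over protected parameters.

    `fisher` and `old_params` map parameter names to the diagonal Fisher
    estimate and the weight snapshot stored after the previous task.
    """
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# During training on a new task (assumed variables):
#   loss = task_loss + ewc_penalty(model, fisher, old_params, lam=0.4)
#   loss.backward()
```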

Continual Learning Exploiting Structure of Fractal Reservoir Computing

This work proposes a way to design reservoir computing such that the firing neurons are clearly distinguished from one another according to the task to be performed, and employs a fractal network, which has modularity and scalability, as the reservoir layer.

Spiking Neural Predictive Coding for Continual Learning from Data Streams

The proposed Spiking Neural Coding Network is competitive in terms of classification performance, can conduct online semi-supervised learning, naturally experiences less forgetting when learning from a sequence of tasks, and is more computationally economical and biologically plausible than popular artificial neural networks.