Continual Learning with Echo State Networks
@article{Cossu2021ContinualLW, title={Continual Learning with Echo State Networks}, author={Andrea Cossu and Davide Bacciu and Antonio Carta and Claudio Gallicchio and Vincenzo Lomonaco}, journal={ArXiv}, year={2021}, volume={abs/2105.07674} }
Continual Learning (CL) refers to a learning setup where data is non-stationary and the model has to learn without forgetting existing knowledge. The study of CL for sequential patterns revolves around trained recurrent networks. In this work, instead, we introduce CL in the context of Echo State Networks (ESNs), where the recurrent component is kept fixed. We provide the first evaluation of catastrophic forgetting in ESNs and we highlight the benefits of using CL strategies which are not…
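To make the setting concrete, below is a minimal sketch (not the paper's exact configuration) of an ESN in a continual-learning loop: the recurrent reservoir is fixed at initialization and only the linear readout is retrained as new tasks arrive, so any forgetting is confined to the readout. The dimensions, ridge-regression readout, and toy task stream are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ESN:
    """Echo State Network: fixed random reservoir, trainable linear readout."""

    def __init__(self, n_in, n_res=100, spectral_radius=0.9, leak=0.3):
        self.W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
        W = rng.uniform(-1.0, 1.0, (n_res, n_res))
        # Rescale the recurrent weights to the target spectral radius,
        # a common proxy for the echo state property.
        self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))
        self.leak = leak
        self.W_out = None  # the only trained component

    def states(self, seq):
        h = np.zeros(self.W.shape[0])
        out = []
        for x in seq:
            pre = np.tanh(self.W_in @ x + self.W @ h)
            h = (1 - self.leak) * h + self.leak * pre
            out.append(h.copy())
        return np.array(out)

    def fit_readout(self, seqs, targets, ridge=1e-3):
        # Ridge regression on collected reservoir states. Naive strategy:
        # each new task simply overwrites the readout, so forgetting can occur.
        H = np.vstack([self.states(s) for s in seqs])
        Y = np.vstack(targets)
        A = H.T @ H + ridge * np.eye(H.shape[1])
        self.W_out = np.linalg.solve(A, H.T @ Y).T

    def predict(self, seq):
        return self.states(seq) @ self.W_out.T

# Hypothetical stream of two tasks: one-step-ahead prediction of noisy sinusoids.
def make_task(freq, n=20, T=50):
    xs = [np.sin(freq * np.arange(T))[:, None] + 0.05 * rng.standard_normal((T, 1))
          for _ in range(n)]
    ys = [np.roll(x, -1, axis=0) for x in xs]
    return xs, ys

esn = ESN(n_in=1)
for task_id, freq in enumerate([0.2, 0.7]):
    xs, ys = make_task(freq)
    esn.fit_readout(xs, ys)  # naive sequential training, no CL strategy
    print("trained on task", task_id)
```

Retraining the readout naively on each task overwrites earlier knowledge; CL strategies would act on the fit_readout step, for example by replaying stored states or regularizing the readout weights.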
5 Citations
Continual Learning of Dynamical Systems with Competitive Federated Reservoir Computing
- Computer Science, ArXiv
- 2022
This work proposes an approach to continual learning based on reservoir computing, a state-of-the-art method for training recurrent neural networks on complex spatiotemporal dynamical systems, and trains multiple competitive prediction heads concurrently, inspired by neuroscience's predictive coding.
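The "competitive prediction heads" idea can be pictured with a small sketch: several readout heads share one reservoir, and at inference the head with the lowest recent prediction error is selected. This is an illustration of the general mechanism, not the cited paper's implementation; all names are hypothetical.

```python
import numpy as np

def select_head(heads, states, target):
    """Pick the readout head whose prediction error on a short context is lowest.

    `heads` is a list of readout matrices (one per previously learned system),
    `states` are reservoir states for the context window, and `target` is the
    known continuation used to score each head.
    """
    errors = [np.mean((states @ W.T - target) ** 2) for W in heads]
    return int(np.argmin(errors))
```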
Efficient Fake News Detection using Bagging Ensembles of Bidirectional Echo State Networks
- Computer Science, 2022 International Joint Conference on Neural Networks (IJCNN)
- 2022
Experiments reveal that competitive detection statistics are obtained by the proposed approach when compared to shallow learning and avant-garde Deep Learning models, but at dramatically lower computational complexity in the training phase.
Continual Learning for Human State Monitoring
- Computer Science, ESANN 2022 proceedings
- 2022
Results show that, possibly due to the domain-incremental properties of the benchmarks, forgetting can be easily tackled even with simple fine-tuning, and that existing strategies struggle to accumulate knowledge over a held-out test subject.
Continual Sequence Modeling With Predictive Coding
- Computer Science, Frontiers in Neurorobotics
- 2022
This study focuses on learning algorithms with low memory requirements that do not need to store past information to update their parameters, and suggests combining these two mechanisms into a new model, labeled PC-Conceptors, which outperforms the other methods presented in the study.
TEACHING - Trustworthy autonomous cyber-physical applications through human-centred intelligence
- Computer Science, 2021 IEEE International Conference on Omni-Layer Intelligent Systems (COINS)
- 2021
The paper discusses the main concepts of the TEACHING approach, singles out the main AI-related research challenges associated with it, and presents the design choices made for the TEACHING system to tackle these challenges.
References
Showing 1-10 of 17 references
Lifelong Machine Learning with Deep Streaming Linear Discriminant Analysis
- Computer Science, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
- 2020
By combining streaming linear discriminant analysis with deep learning, this work is able to outperform both incremental batch learning and streaming learning algorithms on both ImageNet ILSVRC-2012 and CORe50, a dataset that involves learning to classify from temporally ordered samples.
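As a rough illustration of the streaming LDA component summarized above (feature extractor, shrinkage schedule, and exact update equations simplified; names are illustrative, not the paper's code):

```python
import numpy as np

class StreamingLDA:
    """Simplified streaming LDA over pre-extracted features (illustrative only)."""

    def __init__(self, dim, n_classes, shrinkage=1e-2):
        self.means = np.zeros((n_classes, dim))   # per-class running means
        self.counts = np.zeros(n_classes)
        self.cov = np.zeros((dim, dim))           # shared covariance
        self.n = 0
        self.shrinkage = shrinkage

    def fit_one(self, x, y):
        # Online update of the class mean and the shared covariance.
        delta = x - self.means[y]
        self.counts[y] += 1
        self.means[y] += delta / self.counts[y]
        self.n += 1
        self.cov += (np.outer(delta, x - self.means[y]) - self.cov) / self.n

    def predict(self, x):
        prec = np.linalg.inv((1 - self.shrinkage) * self.cov
                             + self.shrinkage * np.eye(self.cov.shape[0]))
        w = self.means @ prec                     # per-class linear weights
        b = -0.5 * np.sum(w * self.means, axis=1)
        return int(np.argmax(w @ x + b))
```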
Continual Learning for Recurrent Neural Networks: an Empirical Evaluation
- Computer Science, Neural Networks
- 2021
Overcoming catastrophic forgetting
- 2017
Avalanche: an End-to-End Library for Continual Learning
- Computer Science, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
- 2021
Avalanche, an open-source end-to-end library for continual learning research based on PyTorch, is designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
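A typical Avalanche training loop looks like the sketch below. Import paths are assumed to match recent Avalanche releases (older versions exposed strategies under avalanche.training.strategies), so treat this as a hedged usage example rather than a definitive reference.

```python
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training import Naive  # assumed import path; varies by version

# Benchmark: MNIST split into 5 experiences (tasks) presented sequentially.
benchmark = SplitMNIST(n_experiences=5)
model = SimpleMLP(num_classes=10)

# "Naive" = plain fine-tuning on each experience, the usual forgetting baseline.
strategy = Naive(
    model,
    torch.optim.SGD(model.parameters(), lr=0.01),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=64,
    train_epochs=1,
    eval_mb_size=64,
)

for experience in benchmark.train_stream:
    strategy.train(experience)            # train on the current experience
    strategy.eval(benchmark.test_stream)  # evaluate on the full test stream
```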
Organizing recurrent network dynamics by task-computation to enable continual learning
- Computer Science, NeurIPS
- 2020
A novel learning rule is developed to minimize interference between sequentially learned tasks in recurrent networks, and networks trained with this approach are shown to reuse similar dynamical structures across similar tasks.
GDumb: A Simple Approach that Questions Our Progress in Continual Learning
- Computer Science, ECCV
- 2020
We discuss a general formulation for the Continual Learning (CL) problem for classification—a learning task where a stream provides samples to a learner and the goal of the learner, depending on the…
Continual Learning with Gated Incremental Memories for sequential data processing
- Computer Science, 2020 International Joint Conference on Neural Networks (IJCNN)
- 2020
This work proposes a Recurrent Neural Network (RNN) model for CL that is able to deal with concept drift in the input distribution without forgetting previously acquired knowledge, and implements and tests a popular CL approach, Elastic Weight Consolidation (EWC), on top of two different types of RNNs.
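For reference, the EWC penalty mentioned above anchors parameters that mattered for earlier tasks with a quadratic term, lambda/2 * sum_i F_i * (theta_i - theta_i*)^2. A minimal PyTorch-style sketch of that term (Fisher estimation and the surrounding training loop omitted; names are illustrative):

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1.0):
    """EWC regularizer: lam/2 * sum_i F_i * (theta_i - theta_i*)^2.

    `fisher` and `old_params` map parameter names to tensors saved after the
    previous task (diagonal Fisher estimate and parameter snapshot).
    """
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss
```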
Continual Learning Exploiting Structure of Fractal Reservoir Computing
- Computer Science, ICANN
- 2019
This work proposes a way to design reservoir computing such that the firing neurons are clearly distinguished from others according to the task to be performed, and employs a fractal network, which has modularity and scalability, as the reservoir layer.
Spiking Neural Predictive Coding for Continual Learning from Data Streams
- Computer Science, ArXiv
- 2019
The proposed Spiking Neural Coding Network is competitive in terms of classification performance, can conduct online semi-supervised learning, naturally experiences less forgetting when learning from a sequence of tasks, and is more computationally economical and biologically-plausible than popular artificial neural networks.
Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges
- Computer Science, Inf. Fusion
- 2020