Learn to Synchronize, Synchronize to Learn

@article{Verzelli2021LearnTS,
  title   = {Learn to Synchronize, Synchronize to Learn},
  author  = {Pietro Verzelli and Cesare Alippi and Lorenzo Francesco Livi},
  journal = {Chaos},
  year    = {2021},
  volume  = {31},
  number  = {8},
  pages   = {083119}
}
In recent years, the artificial intelligence community has shown sustained interest in research investigating the dynamical aspects of both training procedures and machine learning models. Among recurrent neural networks, the Reservoir Computing (RC) paradigm is of particular interest, characterized by conceptual simplicity and a fast training scheme. Yet, the guiding principles under which RC operates are only partially understood. In this work, we analyze the role played by…
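For readers unfamiliar with the paradigm, the sketch below illustrates the standard RC training scheme the abstract alludes to: a randomly generated recurrent reservoir is kept fixed, the input drives its state, and only a linear readout is fit, here with ridge regression. The reservoir size, scalings, regularization strength, and the toy sine-prediction task are illustrative assumptions, not the setup used in the paper.

# Minimal echo state network (ESN) sketch: fixed random reservoir,
# trained linear readout (ridge regression). All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in, washout, ridge = 300, 1, 100, 1e-6

# Fixed random reservoir, rescaled to a target spectral radius below 1.
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))

def run_reservoir(u):
    """Drive the reservoir with inputs u (shape [T, n_in]); return states [T, n_res]."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(2000)
u = np.sin(0.1 * t)[:, None]
y = np.sin(0.1 * (t + 1))[:, None]

X = run_reservoir(u)[washout:]   # discard the initial transient
Y = y[washout:]

# Ridge-regression readout: only these weights are learned.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y).T
pred = X @ W_out.T
print("training MSE:", np.mean((pred - Y) ** 2))

The only trained parameters are the readout weights W_out; the reservoir itself never changes, which is what makes RC training fast.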
6 Citations


Learning strange attractors with reservoir systems
This paper shows that the celebrated Embedding Theorem of Takens is a particular case of a much more general statement, according to which randomly generated linear state-space representations of…
Criticality in reservoir computer of coupled phase oscillators
An artificial neural network of coupled phase oscillators is designed and trained, using the reservoir computing technique from machine learning, to predict chaos; it is found that when the machine is properly trained, oscillators in the reservoir synchronize into clusters whose sizes follow a power-law distribution (a schematic sketch of such a phase-oscillator reservoir follows this list of citing papers).
Euler State Networks
Experiments on synthetic tasks indicate the marked superiority of the proposed approach, compared to standard RC models, in tasks requiring long-term memorization skills, and results on real-world time series classification benchmarks show that EuSN is capable of matching (or even surpassing) the accuracy of trainable Recurrent Neural Networks.
Reservoir time series analysis: Using the response of complex dynamical systems as a universal indicator of change.
This work presents the idea of reservoir time series analysis (RTSA), a method by which the state-space representation generated by a reservoir computing model can be used for time series analysis, and shows significant, generalized accuracy across the proposed RTSA features that surpasses the benchmark methods.
Chaos on compact manifolds: Differentiable synchronizations beyond the Takens theorem.
This paper shows that a large class of fading memory state-space systems driven by discrete-time observations of dynamical systems defined on compact manifolds always yields continuously…
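As a rough illustration of the kind of model discussed in "Criticality in reservoir computer of coupled phase oscillators", the sketch below wires a Kuramoto-style reservoir of phase oscillators driven by a scalar input. The coupling form, input injection, and all parameter values are assumptions made for illustration and are not taken from that paper; its readout training and criticality analysis are omitted.

# Schematic phase-oscillator reservoir (Kuramoto-style); hypothetical wiring.
import numpy as np

rng = np.random.default_rng(1)
n, dt, K, input_gain = 100, 0.05, 0.8, 1.0
omega = rng.normal(0.0, 1.0, n)      # natural frequencies
w_in = rng.uniform(-1.0, 1.0, n)     # assumed input weights

def step(theta, u_t):
    # Mean-field sine coupling plus input drive (Euler step of the phase dynamics).
    coupling = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
    dtheta = omega + K * coupling + input_gain * w_in * u_t
    return (theta + dt * dtheta) % (2 * np.pi)

theta = rng.uniform(0, 2 * np.pi, n)
u = np.sin(0.2 * np.arange(500))
states = []
for u_t in u:
    theta = step(theta, u_t)
    states.append(np.concatenate([np.sin(theta), np.cos(theta)]))
states = np.array(states)            # features a linear readout would be trained on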

References

Showing 1-10 of 63 references
A Geometrical Analysis of Global Stability in Trained Feedback Networks
This work derives an approximate analytical description of global dynamics in trained networks, assuming uncorrelated connectivity weights in the feedback and in the random bulk; the resulting simplified description captures the local and global stability properties of the target solution and thus predicts training performance.
Input representation in recurrent neural networks dynamics
A novel analysis of the dynamics of recurrent neural networks is proposed, which allows one to express the state evolution using the controllability matrix; this makes it possible to compare different architectures and to explain why a cyclic topology achieves favourable results.
Invertible generalized synchronization: A putative mechanism for implicit learning in neural systems.
A general and biologically feasible learning framework that utilizes invertible generalized synchronization (IGS) is presented, supporting the notion that biological neural networks can learn the dynamic nature of their environment through the mechanism of IGS.
Re-visiting the echo state property
An experimental unification of reservoir computing methods
Memory versus non-linearity in reservoirs
A novel metric is introduced which measures the deviation of the reservoir from a linear regime and is used to define different regions of dynamical behaviour, as well as to study the effect of two important reservoir parameters, input scaling and spectral radius, on two properties of an artificial task, namely memory and non-linearity.
Training Echo State Networks with Regularization Through Dimensionality Reduction
A new framework is proposed to train a class of recurrent neural network, called Echo State Network, to predict real-valued time series and to provide a visualization of the modeled system dynamics, with evidence that the lower-dimensional embedding retains the dynamical properties of the underlying system better than the full-dimensional internal states of the network.
Embedding and approximation theorems for echo state networks
Echo State Property Linked to an Input: Exploring a Fundamental Characteristic of Recurrent Neural Networks
A new definition of the echo state property is presented that directly connects it to temporal or statistical properties of the driving input, and a fundamental 0-1 law is derived: if the input comes from an ergodic source, the network response has the echo state property with probability one or zero, independent of the given network (a minimal numerical illustration of this input-driven state convergence appears after this reference list).
Local Dynamics in Trained Recurrent Neural Networks.
A mean field theory for reservoir computing networks trained to have multiple fixed-point attractors is developed, showing that the dynamics of the network's output in the vicinity of an attractor is governed by a low-order linear ordinary differential equation.