Dimension reduction in recurrent networks by canonicalization

@article{Grigoryeva2021DimensionRI,
  title   = {Dimension reduction in recurrent networks by canonicalization},
  author  = {Lyudmila Grigoryeva and Juan-Pablo Ortega},
  journal = {arXiv preprint arXiv:2007.12141},
  year    = {2021}
}
Many recurrent neural network machine learning paradigms can be formulated using state-space representations. The classical notion of canonical state-space realization is adapted in this paper to accommodate semi-infinite inputs so that it can be used as a dimension reduction tool in the recurrent networks setup. The so-called input forgetting property is identified as the key hypothesis that guarantees the existence and uniqueness (up to system isomorphisms) of canonical realizations for… 
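As a point of reference for the state-space terminology in the abstract: in the authors' reservoir computing literature, a state-space system driven by a (semi-infinite) input sequence is typically written as

\[
  x_t = F(x_{t-1}, z_t), \qquad y_t = h(x_t), \qquad t \in \mathbb{Z}_{-},
\]

where $z_t$ are the inputs, $x_t \in \mathbb{R}^N$ the states, $F$ the state map, and $h$ the readout; the symbols here are illustrative notation and are not taken from this paper. Dimension reduction by canonicalization then amounts, informally, to replacing such a system with one of the smallest state dimension $N$ that realizes the same input/output filter.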
Interpretable Design of Reservoir Computing Networks using Realization Theory
An algorithm to design RCNs using the realization theory of linear dynamical systems is developed; the notion of α-stable realization is introduced, and an efficient approach to prune the size of a linear RCN without deteriorating the training accuracy is provided.
Learning strange attractors with reservoir systems
This paper shows that the celebrated Embedding Theorem of Takens is a particular case of a much more general statement according to which randomly generated linear state-space representations of…
