# Dimension reduction in recurrent networks by canonicalization

    @article{Grigoryeva2021DimensionRI,
      title   = {Dimension reduction in recurrent networks by canonicalization},
      author  = {Lyudmila Grigoryeva and Juan-Pablo Ortega},
      journal = {ArXiv},
      year    = {2021},
      volume  = {abs/2007.12141}
    }

Many recurrent neural network machine learning paradigms can be formulated using state-space representations. The classical notion of canonical state-space realization is adapted in this paper to accommodate semi-infinite inputs so that it can be used as a dimension reduction tool in the recurrent networks setup. The so-called input forgetting property is identified as the key hypothesis that guarantees the existence and uniqueness (up to system isomorphisms) of canonical realizations for…
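The input forgetting property highlighted in the abstract can be illustrated with a minimal sketch (a generic contractive state map, not the paper's construction; all names and dimensions are illustrative): two different initial states driven by the same input sequence converge, so the system asymptotically forgets its initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

# State map of a contractive recurrent (echo-state style) system:
#   x_t = tanh(A x_{t-1} + C u_t)
n = 50
A = rng.normal(size=(n, n))
A *= 0.5 / np.linalg.norm(A, 2)   # spectral norm 0.5 => a 0.5-contraction
C = rng.normal(size=(n, 1))

def evolve(x, inputs):
    """Drive the state from initial condition x with the input sequence."""
    for u in inputs:
        x = np.tanh(A @ x + (C * u).ravel())
    return x

u = rng.uniform(-1, 1, size=200)
xa = evolve(rng.normal(size=n), u)    # two different initial states,
xb = evolve(rng.normal(size=n), u)    # same input sequence
gap = float(np.linalg.norm(xa - xb))  # shrinks geometrically to ~0
```

Since `tanh` is 1-Lipschitz and the spectral norm of `A` is 0.5, each step halves the distance between the two state trajectories, which is the mechanism behind input forgetting in this toy setting.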

## 2 Citations

Interpretable Design of Reservoir Computing Networks using Realization Theory

- Computer Science · IEEE Transactions on Neural Networks and Learning Systems
- 2022

An algorithm for designing RCNs using the realization theory of linear dynamical systems is developed, the notion of α-stable realization is introduced, and an efficient approach to pruning the size of a linear RCN without deteriorating the training accuracy is provided.
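The pruning result summarized above rests on a classical fact from realization theory: state directions that the input can never excite may be discarded without changing the input/output behavior. A toy sketch of this (not the paper's algorithm; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# A 6-state linear system whose last 3 states are never excited by the
# input: A is block-diagonal and C is supported on the first 3 coordinates,
# so the reachable subspace is 3-dimensional and a 3-state realization
# reproduces the output exactly.
A1 = rng.normal(size=(3, 3))
A1 *= 0.8 / np.max(np.abs(np.linalg.eigvals(A1)))   # stable reachable block
A2 = 0.5 * np.eye(3)                                # unreachable block
A = np.block([[A1, np.zeros((3, 3))], [np.zeros((3, 3)), A2]])
C = np.vstack([rng.normal(size=(3, 1)), np.zeros((3, 1))])
W = rng.normal(size=(1, 6))                         # linear readout

def run(A, C, W, inputs):
    """Simulate y_t = W x_t with x_t = A x_{t-1} + C u_t, x_0 = 0."""
    x = np.zeros(A.shape[0])
    ys = []
    for u in inputs:
        x = A @ x + (C * u).ravel()
        ys.append(float(W @ x))
    return np.array(ys)

u = rng.normal(size=100)
y_full = run(A, C, W, u)
# Reduced realization: restrict to the reachable 3-dimensional block.
y_red = run(A1, C[:3], W[:, :3], u)
assert np.allclose(y_full, y_red)   # identical input/output behavior
```

Canonical realizations, in both the classical theory and the paper's semi-infinite-input extension, are exactly the realizations in which no such removable directions remain.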

Learning strange attractors with reservoir systems

- Mathematics · ArXiv
- 2021

This paper shows that the celebrated Embedding Theorem of Takens is a particular case of a much more general statement according to which, randomly generated linear state-space representations of…

## References

Showing 1-10 of 119 references.

Functional Analysis

- Mathematics
- 2017

A vector space over a field K (R or C) is a set X with vector addition and scalar multiplication operations satisfying the properties in Section 3.1. [1] An inner product space is a vector space X with…

Memory and forecasting capacities of nonlinear recurrent networks

- Computer Science · Physica D: Nonlinear Phenomena
- 2020

Chaos on compact manifolds: Differentiable synchronizations beyond the Takens theorem.

- Mathematics · Physical Review E
- 2021

This paper shows that a large class of fading memory state-space systems driven by discrete-time observations of dynamical systems defined on compact manifolds always yields continuously…

Discrete-time signatures and randomness in reservoir computing

- Computer Science · IEEE Transactions on Neural Networks and Learning Systems
- 2021

A new explanation of the geometric nature of the reservoir computing (RC) phenomenon is presented, together with a reservoir system able to approximate any element in the fading memory filter class simply by training a different linear readout for each filter.

Echo State Networks trained by Tikhonov least squares are L2(μ) approximators of ergodic dynamical systems

- Mathematics · Physica D: Nonlinear Phenomena
- 2021

Learning strange attractors with reservoir systems

- Mathematics · ArXiv
- 2021

This paper shows that the celebrated Embedding Theorem of Takens is a particular case of a much more general statement according to which, randomly generated linear state-space representations of…

Approximation error estimates for random neural networks and reservoir systems

- arXiv preprint 2002.05933
- 2020

Reservoir Computing Universality With Stochastic Inputs

- Computer Science · IEEE Transactions on Neural Networks and Learning Systems
- 2020

It is proven that linear reservoir systems with either polynomial or neural network readout maps are universal and that the same property holds for two families with linear readouts, namely, trigonometric state-affine systems and echo state networks.
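The linear-readout training that several of these references rely on (e.g., the Tikhonov least squares of the echo state network entry above) can be sketched as ridge regression on collected reservoir states. A minimal toy, not any paper's exact construction; the memory task and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Echo state network: fixed random recurrent weights, trained linear readout.
n, T = 100, 500
A = rng.normal(size=(n, n))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius 0.9
C = rng.normal(size=(n, 1))

u = rng.uniform(-1, 1, size=T)
x = np.zeros(n)
X = []
for t in range(T):
    x = np.tanh(A @ x + (C * u[t]).ravel())
    X.append(x.copy())
X = np.array(X)                    # (T, n) collected reservoir states

# Target: a simple fading-memory functional of the input (here, u_{t-1}).
y = np.roll(u, 1)
y[0] = 0.0

# Tikhonov (ridge) least squares for the linear readout W.
lam = 1e-6
W = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
mse = float(np.mean((X @ W - y) ** 2))
```

Only `W` is trained; the recurrent map stays fixed, which is the design choice that makes reservoir training a linear regression problem.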