# Reservoir computing approaches to recurrent neural network training

```bibtex
@article{Lukoeviius2009ReservoirCA,
  title   = {Reservoir computing approaches to recurrent neural network training},
  author  = {Mantas Luko{\vs}evi{\vc}ius and Herbert Jaeger},
  journal = {Comput. Sci. Rev.},
  year    = {2009},
  volume  = {3},
  pages   = {127-149}
}
```

## 1,729 Citations

Reservoir Computing and Self-Organized Neural Hierarchies

- Computer Science
- 2012

This thesis overviews existing alternatives to the classical supervised training of RNNs and their hierarchies and investigates new ones; it proposes and studies two different neural network models for the reservoirs together with several unsupervised adaptation techniques, as well as deep hierarchies of such models trained layer-wise without supervision.

Recent Advances in Physical Reservoir Computing: A Review

- Computer Science, Neural Networks
- 2019

Regular echo state networks: simple and accurate reservoir models to real-world applications

- Computer Science, SAC
- 2021

The results revealed that some problems can benefit considerably from some level of organization in the reservoir, such as that provided by regular or small-world network models, and that the non-linear support vector machine classifier achieved the best predictive performance, although it was statistically comparable with the k-nearest-neighbors classifier, which has much smaller time complexity.

Evolving reservoir weights in the frequency domain

- Computer Science, GECCO Companion
- 2021

This work introduces an evolutionary method for adjusting the reservoir non-null weights, called EvoESN (Evolutionary ESN), which combines an evolutionary search in the Fourier space with supervised learning for the readout weights.

Simple Deterministically Constructed Recurrent Neural Networks

- Computer Science, IDEAL
- 2010

It is shown that a very simple deterministically constructed reservoir with simple cycle topology gives performances comparable to those of the Echo State Network on a number of time series benchmarks, and argued that the memory capacity of such a model can be made arbitrarily close to the proved theoretical limit.
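The cycle-topology idea summarized above is easy to sketch: the recurrent matrix is a single directed cycle with one shared weight, the input weights share one magnitude, and only the linear readout is trained. Below is a minimal NumPy illustration of such a simple cycle reservoir with a ridge-regression readout on a toy next-step prediction task; the sizes, weight values, and the pseudo-random choice of input signs are illustrative assumptions, not taken from the paper (which derives the signs deterministically).

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, r, v = 50, 0.9, 0.5  # reservoir size, cycle weight, input weight (illustrative)

# Simple cycle reservoir: unit i feeds unit i+1 with a single shared weight r,
# and the last unit feeds back to the first, closing the cycle.
W = np.zeros((n_res, n_res))
W[np.arange(1, n_res), np.arange(n_res - 1)] = r
W[0, n_res - 1] = r

# Input weights: one shared magnitude v; signs chosen pseudo-randomly here
# (the original construction fixes them deterministically).
w_in = v * np.where(rng.random(n_res) < 0.5, -1.0, 1.0)

def run_reservoir(u):
    """Collect tanh reservoir states driven by a 1-D input sequence u."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Train only the linear readout with ridge regression on a toy task:
# predict u[t+1] from the reservoir state at time t.
u = np.sin(0.2 * np.arange(300))
X, y = run_reservoir(u)[:-1], u[1:]
beta = 1e-6
W_out = np.linalg.solve(X.T @ X + beta * np.eye(n_res), X.T @ y)
mse = np.mean((X @ W_out - y) ** 2)
```

The point of the construction is that the reservoir is fully deterministic up to the input signs, so only two scalars (r and v) need tuning besides the readout.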

Reservoir Computing Trends

- Computer Science, KI - Künstliche Intelligenz
- 2012

A brief introduction to the basic concepts, methods, insights, current developments, and some applications of RC is given.

Multilayered Echo State Machine: A Novel Architecture and Algorithm

- Computer Science, IEEE Transactions on Cybernetics
- 2017

The addition of multiple layers of reservoirs is shown to provide a more robust alternative to conventional RC networks, and the comparative merits of this approach are demonstrated in a number of applications.

Evolutionary strategy for simultaneous optimization of parameters, topology and reservoir weights in Echo State Networks

- Computer Science, The 2010 International Joint Conference on Neural Networks (IJCNN)
- 2010

This paper presents an original investigation of an evolutionary method for simultaneous optimization of parameters, topology and reservoir weights in Echo State Networks.

An approach to reservoir computing design and training

- Computer Science, Expert Syst. Appl.
- 2013

Recurrent Kernel Machines: Computing with Infinite Echo State Networks

- Computer Science, Neural Computation
- 2012

The concept of ESNs is extended to infinite-sized recurrent neural networks, which can be considered recursive kernels that subsequently can be used to create recursive support vector machines.

## References

Showing 1-10 of 218 references

Overview of Reservoir Recipes

- Computer Science, Geology
- 2007

This report motivates the new definition of the paradigm and surveys the reservoir generation/adaptation techniques, offering a natural conceptual classification which transcends boundaries of the current "brand-names" of reservoir methods.

Echo State Networks with Trained Feedbacks

- Computer Science
- 2007

This report explores possible directions in which the theoretical findings could be applied to increase the computational power of Echo State Networks and proposes a modification of ESNs called Layered ESNs.

An overview of reservoir computing: theory, applications and implementations

- Computer Science, ESANN
- 2007

This tutorial will give an overview of current research on theory, application and implementations of Reservoir Computing, which makes it possible to solve complex tasks using just linear post-processing techniques.

Echo state networks with filter neurons and a delay&sum readout

- Computer Science, Neural Networks
- 2010

Pruning and Regularisation in Reservoir Computing: a First Insight

- Computer Science, ESANN
- 2008

This work proposes to study how pruning some connections from the reservoir to the readout can help to increase the generalisation ability, in much the same way as regularisation techniques do, and to improve the implementability of reservoirs in hardware.

Feed-forward echo state networks

- Computer Science, Proceedings of the 2005 IEEE International Joint Conference on Neural Networks
- 2005

This work proposes a modified ESN architecture in which the only "true" recurrent connections are backward connections from the output to the recurrent units, and the reservoir is built only from "forwardly" connected recurrent units.

Training Recurrent Networks by Evolino

- Computer Science, Neural Computation
- 2007

It is shown that Evolino-based LSTM can solve tasks that Echo State networks cannot, and achieves higher accuracy in certain continuous function generation tasks than conventional gradient-descent RNNs, including gradient-based LSTM.

Echo State Networks and Self-Prediction

- Computer Science, BioADIT
- 2004

Preliminary results indicate that self-prediction may improve the performance of an ESN when performing signal mappings in the presence of additive noise.

Improving the Prediction Accuracy of Echo State Neural Networks by Anti-Oja's Learning

- Computer Science, ICANN
- 2007

The regular adaptation of Echo State neural networks was improved by updating the weights of the dynamic reservoir with Anti-Oja's learning, which yielded a prediction error substantially smaller than that achieved by the standard algorithm.
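The adaptation described above can be sketched from the standard form of Oja's rule, dW[i, j] = eta * y[i] * (x[j] - y[i] * W[i, j]), with the sign of the learning rate flipped so that correlations between connected reservoir units decrease. The NumPy sketch below shows one such update applied inside a small driven reservoir; the reservoir size, weight scale, and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

def anti_oja_step(W, x_pre, y_post, eta=1e-3):
    """One Anti-Oja update of reservoir weights W.

    Oja's rule is dW[i, j] = eta * y[i] * (x[j] - y[i] * W[i, j]);
    Anti-Oja flips the sign of eta, decorrelating connected units.
    The learning rate eta is an illustrative value.
    """
    outer = np.outer(y_post, x_pre)      # Hebbian term y_i * x_j
    decay = (y_post ** 2)[:, None] * W   # forgetting term y_i^2 * W_ij
    return W - eta * (outer - decay)

# Toy usage: drive a small random reservoir with a sinusoid and
# adapt its internal weights online with the Anti-Oja step.
rng = np.random.default_rng(1)
n = 20
W = rng.normal(0.0, 0.3, (n, n))
x = np.zeros(n)
for u_t in np.sin(0.3 * np.arange(100)):
    x_new = np.tanh(W @ x + u_t)
    W = anti_oja_step(W, x, x_new)
    x = x_new
```

In an ESN pipeline this adaptation would run before (or alongside) the usual linear readout training, which is left unchanged.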