Corpus ID: 149642842

Overview of Reservoir Recipes

@inproceedings{Lukosevicius2007OverviewOR,
  title={Overview of Reservoir Recipes},
  author={Mantas Luko{\v{s}}evi{\v{c}}ius and Herbert Jaeger},
  year={2007}
}
Echo State Networks (ESNs) and Liquid State Machines (LSMs) introduced a simple new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, made RNNs accessible for practical applications as never before and outperformed classical fully trained RNNs in many tasks. The latter, however, does not imply that random reservoirs are optimal, but rather that… 
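The paradigm the abstract describes reduces, in its simplest form, to a few lines of code. The following is a minimal sketch, assuming a toy delayed-recall task, a 100-unit tanh reservoir, the common spectral-radius-below-one heuristic, and a ridge-regression readout; all sizes and constants here are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and data: a 1-D input series and a delayed target.
n_in, n_res, T = 1, 100, 1000
u = rng.uniform(-1, 1, (T, n_in))        # input series
y = np.roll(u[:, 0], 5)                  # toy target: the input delayed by 5 steps

# Random reservoir, rescaled to spectral radius 0.9 (a common echo-state heuristic).
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

# Drive the reservoir; the recurrent weights are never trained, only the states
# are collected.
x = np.zeros(n_res)
X = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    X[t] = x

# Train only the readout, here by ridge regression; the washout discards
# the initial transient.
washout, ridge = 100, 1e-6
Xw, yw = X[washout:], y[washout:]
W_out = np.linalg.solve(Xw.T @ Xw + ridge * np.eye(n_res), Xw.T @ yw)

y_hat = X @ W_out
print("train NMSE:", np.mean((y_hat[washout:] - yw) ** 2) / np.var(yw))
```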
Reservoir computing approaches to recurrent neural network training
Reservoir Computing and Self-Organized Neural Hierarchies
TLDR
This thesis overviews existing and investigates new alternatives to the classical supervised training of RNNs and their hierarchies, and proposes and investigates the use of two different neural network models for the reservoirs together with several unsupervised adaptation techniques, as well as unsupervisedly layer-wise trained deep hierarchies of such models.
Minimum Complexity Echo State Network
TLDR
It is shown that a simple deterministically constructed cycle reservoir is comparable to the standard echo state network methodology and the (short-term) memory capacity of linear cyclic reservoirs can be made arbitrarily close to the proved optimal value.
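The deterministically constructed cycle reservoir named in the entry above can be sketched as below; the single shared ring weight r = 0.5 is an illustrative choice, not the paper's value.

```python
import numpy as np

def cycle_reservoir(n, r=0.5):
    """Simple cycle reservoir: every unit feeds the next with one shared
    weight r, closing into a single ring."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = r
    return W

W = cycle_reservoir(100, r=0.5)
# The eigenvalues of a scaled cyclic shift all have magnitude |r|, so the
# spectral radius is controlled by the one parameter r alone.
print(np.max(np.abs(np.linalg.eigvals(W))))  # -> 0.5
```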
Tailoring Artificial Neural Networks for Optimal Learning
TLDR
Through spectral analysis of the reservoir network, this work reveals a key factor that largely determines the ESN memory capacity and hence affects its performance, and finds that adding short loops to the reservoir network can tailor the ESN for specific tasks and optimal learning.
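A rough sketch of what adding short loops can look like in practice, assuming the shortest non-trivial loops (2-cycles between paired units) and an illustrative loop weight of 0.3; the paper's own construction and spectral analysis are more involved.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
W = rng.standard_normal((n, n)) / np.sqrt(n)

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

# Overlay short loops (here: symmetric 2-cycles between paired units).
# This perturbs the eigenvalue spectrum, which the paper links to memory
# capacity and task performance.
W_loops = W.copy()
for i in range(0, n, 2):
    W_loops[i, i + 1] += 0.3
    W_loops[i + 1, i] += 0.3

print("radius before:", spectral_radius(W))
print("radius after adding short loops:", spectral_radius(W_loops))
```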
Echo state networks with filter neurons and a delay&sum readout
Architectural designs of Echo State Network
TLDR
This thesis proposes two very simple deterministic ESN organisations (Simple Cycle Reservoir (SCR) and Cycle Reservoir with Jumps (CRJ)), and designs and utilises an ensemble of ESNs with diverse reservoirs whose collective readout is obtained through Negative Correlation Learning (NCL) of an ensemble of Multi-Layer Perceptrons (MLPs), where each individual MLP realises the readout from a single ESN.
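The Cycle Reservoir with Jumps named above extends the SCR ring from the earlier entry with deterministic jump connections; a hedged sketch follows, where the cycle weight r_c, jump weight r_j, and jump length are illustrative parameters, not the thesis's values.

```python
import numpy as np

def crj(n, r_c=0.5, r_j=0.2, jump=7):
    """Cycle Reservoir with Jumps (sketch): the SCR ring plus bidirectional
    jump connections between every jump-th pair of units, all weights shared."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = r_c          # the ring
    for i in range(0, n - jump, jump):   # jumps overlaying the ring
        W[i, (i + jump) % n] = r_j
        W[(i + jump) % n, i] = r_j
    return W

W = crj(100)
```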
Reservoir Computing on the Hypersphere
TLDR
This work removes the nonlinear neural activation function, and considers an orthogonal reservoir acting on normalized states on the unit hypersphere, which shows that the system’s memory capacity exceeds the dimensionality of the reservoir.
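A minimal sketch of the described system: the neural nonlinearity is removed, the reservoir matrix is orthogonal (here obtained by QR decomposition of a random matrix, an illustrative construction), and the state is renormalized onto the unit hypersphere after every linear update. The input scaling is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100

# Orthogonal reservoir matrix via QR decomposition of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
w_in = rng.uniform(-0.1, 0.1, n)

x = np.zeros(n)
x[0] = 1.0                        # start on the unit hypersphere
for t in range(1000):
    u = rng.uniform(-1, 1)
    x = Q @ x + w_in * u          # linear update, no tanh
    x /= np.linalg.norm(x)        # project back onto the unit hypersphere
# A linear readout would then be trained on the collected states as usual.
```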
Pruning and Regularisation in Reservoir Computing: a First Insight
TLDR
This work proposes to study how pruning some connections from the reservoir to the readout can help to increase the generalisation ability, in much the same way as regularisation techniques do, and to improve the implementability of reservoirs in hardware.
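One simple instance of pruning reservoir-to-readout connections, assuming a magnitude-based criterion (an illustrative choice; the paper investigates which pruning strategies actually aid generalisation): fit an unregularized readout, zero out the smallest-magnitude weights, and refit on the surviving state dimensions.

```python
import numpy as np

def prune_readout(X, y, keep=0.5):
    """Keep only the fraction `keep` of readout connections with the
    largest weight magnitudes, then refit the readout on those states."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    k = max(1, int(keep * len(w)))
    idx = np.argsort(np.abs(w))[-k:]                  # connections kept
    w_kept, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
    w_full = np.zeros_like(w)
    w_full[idx] = w_kept
    return w_full
```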
Performance optimization of echo state networks through principal neuron reinforcement
TLDR
A neuroplasticity-inspired algorithm was proposed in this study to alter the strength of internal synapses within the reservoir towards the goal of optimizing the neuronal dynamics of the ESN pertaining to the specific problem to be solved.
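The summary does not spell out the update rule, so the following is only one plausible, explicitly hypothetical reading: reinforce the incoming synapses of reservoir neurons whose states correlate most with the target, then rescale the matrix to restore a stable spectral radius.

```python
import numpy as np

def reinforce(W, X, y, top=0.1, gain=1.2, rho=0.9):
    """Hypothetical sketch, not the paper's algorithm: scale up the incoming
    weights of the `top` fraction of neurons most correlated with the target,
    then renormalize W back to spectral radius rho."""
    corr = np.abs([np.corrcoef(X[:, i], y)[0, 1] for i in range(X.shape[1])])
    principal = np.argsort(corr)[-max(1, int(top * len(corr))):]
    W2 = W.copy()
    W2[principal, :] *= gain                             # reinforce their synapses
    W2 *= rho / np.max(np.abs(np.linalg.eigvals(W2)))    # restore stability
    return W2
```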
...