Corpus ID: 198959423

Echo State Networks with Trained Feedbacks

@inproceedings{Lukoeviius2007EchoSN,
  title={Echo State Networks with Trained Feedbacks},
  author={Mantas Luko{\vs}evi{\vc}ius},
  year={2007}
}
Echo State Networks (ESNs) are an approach to recurrent neural network (RNN) training based on generating a large random network (reservoir) of sparsely interconnected neurons and learning only a single layer of output weights from the reservoir to approximate the target function. Despite many advantages of ESNs over gradient-based RNN training techniques, they lack the power to learn some complex functions. New findings in dynamical systems theory state that fixed neural circuits can obtain… 
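The ESN recipe the abstract describes, a large fixed random reservoir with only a linear readout trained, can be sketched in a few lines. The following is a minimal illustration rather than the paper's implementation; the reservoir size, spectral radius, sparsity, ridge-regression readout, and names such as `run_reservoir` are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative sizes and hyperparameters (not taken from the paper).
in_size, res_size = 1, 300
spectral_radius, sparsity, ridge = 0.9, 0.1, 1e-6

# Large random, sparsely connected reservoir, rescaled to the desired spectral radius.
W = rng.standard_normal((res_size, res_size))
W[rng.random((res_size, res_size)) > sparsity] = 0.0
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (res_size, in_size))

def run_reservoir(u_seq):
    """Drive the fixed reservoir with an input sequence and collect its states."""
    x = np.zeros(res_size)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets):
    """Only the linear output layer is trained, here by ridge regression."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(res_size),
                           states.T @ targets)

# Toy task: one-step-ahead prediction of a sine wave, discarding an initial transient.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
S = run_reservoir(u[:-1])
W_out = train_readout(S[200:], u[1:][200:])
prediction = S[200:] @ W_out
```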
Reservoir computing approaches to recurrent neural network training
Reservoir Computing and Self-Organized Neural Hierarchies
TLDR
This thesis overviews existing and investigates new alternatives to the classical supervised training of RNNs and their hierarchies, and proposes and investigates the use of two different neural network models for the reservoirs together with several unsupervised adaptation techniques, as well as unsupervisedly layer-wise trained deep hierarchies of such models.
Architectural designs of Echo State Network
TLDR
This thesis proposes two very simple deterministic ESN organisations (Simple Cycle Reservoir (SCR) and Cycle Reservoir with Jumps) and designs and utilises an ensemble of ESNs with diverse reservoirs whose collective readout is obtained through Negative Correlation Learning (NCL) of an ensemble of Multi-Layer Perceptrons (MLPs), where each individual MLP realises the readout from a single ESN.
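As a rough illustration of the deterministic reservoirs mentioned in that summary, the sketch below builds a Simple Cycle Reservoir (a single ring of identical weights) and a Cycle Reservoir with Jumps (the ring plus regular shortcut connections). The weight values and jump length are illustrative assumptions, not those used in the cited thesis.

```python
import numpy as np

def simple_cycle_reservoir(n, r=0.5):
    """Simple Cycle Reservoir: neurons connected in one ring with identical weight r."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = r
    return W

def cycle_reservoir_with_jumps(n, r=0.5, jump=7, r_jump=0.3):
    """The same ring plus bidirectional 'jump' connections every `jump` units."""
    W = simple_cycle_reservoir(n, r)
    for i in range(0, n - jump, jump):
        W[i, i + jump] = r_jump
        W[i + jump, i] = r_jump
    return W
```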
Overview of Reservoir Recipes: A survey of new RNN training methods that follow the Reservoir paradigm
TLDR
The new definition of the paradigm is motivated and the reservoir generation/adaptation techniques are surveyed, offering a natural conceptual classification which transcends boundaries of the current "brand-names" of reservoir methods.
Adaptive Recursive Least Squares Algorithm based on Echo State Neural Network
TLDR
The simulation experiment results show that the proposed ESN-based filters can model nonlinear time-varying dynamical systems very well; the modeling performances are significantly better than those of autoregressive moving average (ARMA) model-based filters.
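A common way to realise such an adaptive ESN readout is the standard recursive least squares (RLS) update with a forgetting factor. The sketch below is a generic RLS readout under that assumption, not the cited paper's exact algorithm; the class name and default parameters are ours.

```python
import numpy as np

class RLSReadout:
    """Online linear readout trained by recursive least squares (forgetting factor lam)."""
    def __init__(self, n_features, lam=0.999, delta=1.0):
        self.w = np.zeros(n_features)
        self.P = np.eye(n_features) / delta    # estimate of the inverse correlation matrix
        self.lam = lam

    def update(self, x, target):
        # x: reservoir state (n_features,); target: scalar teacher signal.
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)           # gain vector
        err = target - self.w @ x              # a-priori error
        self.w += k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.w @ x                      # a-posteriori prediction
```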
Dynamical Networks (miniproject): Effect of Topology of the Reservoir on Performance of Echo State Networks
TLDR
For memoryless and simple tasks, scale-free and small-world graphs with low average degree seemed to perform better, while for complex tasks that need memory, networks with slightly higher average degree performed better.
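To experiment with reservoir topology along these lines, one can generate the connectivity pattern from a graph model and only then assign random weights. The sketch below assumes the `networkx` generators for small-world (Watts-Strogatz) and scale-free (Barabási-Albert) graphs; node counts, degrees, and the rewiring probability are illustrative.

```python
import numpy as np
import networkx as nx

def reservoir_from_graph(graph, spectral_radius=0.9, rng=None):
    """Turn an undirected graph's adjacency pattern into a scaled reservoir matrix."""
    rng = rng or np.random.default_rng(0)
    A = nx.to_numpy_array(graph)
    W = A * rng.uniform(-1.0, 1.0, A.shape)               # random weights on existing edges
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

# Small-world reservoir: 200 nodes, each linked to 4 neighbours, 10% rewiring.
W_sw = reservoir_from_graph(nx.watts_strogatz_graph(200, 4, 0.1))
# Scale-free reservoir via the Barabasi-Albert model.
W_sf = reservoir_from_graph(nx.barabasi_albert_graph(200, 2))
```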
A Practical Guide to Applying Echo State Networks
TLDR
Practical techniques and recommendations for successfully applying Echo State Networks, as well as some more advanced application-specific modifications, are presented.
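Typical recommendations of this kind include scaling the reservoir to a chosen spectral radius, using leaky-integrator units, and discarding an initial washout period. The sketch below illustrates a leaky reservoir update with washout; the leak rate and washout length are illustrative defaults, not prescriptions from the cited guide.

```python
import numpy as np

def leaky_esn_states(W, W_in, u_seq, leak=0.3, washout=100):
    """Leaky-integrator reservoir update, discarding an initial washout period."""
    n = W.shape[0]
    x = np.zeros(n)
    states = []
    for u in u_seq:
        x_new = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        x = (1.0 - leak) * x + leak * x_new    # leaky integration smooths the dynamics
        states.append(x.copy())
    return np.array(states[washout:])
```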
Non-Markovian Processes Modeling with Echo State Networks
TLDR
Reservoir Computing (RC) is used in a logistic regression (LogR) framework, and it is shown that RC can be used to estimate the transition probabilities at each time step and also to estimate the hidden variable.
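One plausible reading of this setup is a reservoir whose states feed a logistic-regression readout that outputs per-time-step class probabilities. The sketch below uses scikit-learn's `LogisticRegression` on placeholder reservoir states; it is an interpretation of the described pipeline, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder reservoir states (T, n) and a placeholder binary hidden variable per step;
# in practice the states would come from driving a reservoir with the observed sequence.
rng = np.random.default_rng(0)
states = rng.standard_normal((500, 100))
labels = rng.integers(0, 2, 500)

readout = LogisticRegression(max_iter=1000).fit(states, labels)
transition_probs = readout.predict_proba(states)   # per-time-step probability estimates
```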
...

References

Showing 1-10 of 21 references
Time Warping Invariant Echo State Networks
TLDR
This report presents a modification of ESNs, time warping invariant echo state networks (TWIESNs), that can effectively deal with time warping in dynamic pattern recognition.
Principles of real-time computing with feedback applied to cortical microcircuit models
TLDR
A computational theory is presented that characterizes the gain in computational power achieved through feedback in dynamical systems with fading memory and implies that many such systems acquire through feedback universal computational capabilities for analog computing with a non-fading memory.
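Output feedback, the mechanism this reference analyses and the one named in the indexed paper's title, enters an ESN through weights that project the (teacher-forced) output back into the reservoir. The sketch below shows that loop with random feedback weights purely for illustration; whether and how those feedbacks are trained is precisely the paper's topic and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, out_size = 200, 1
W = rng.standard_normal((n, n)) * 0.05                 # fixed random reservoir
W_fb = rng.uniform(-0.5, 0.5, (n, out_size))           # feedback weights: output -> reservoir

def run_with_feedback(targets, W_out=None):
    """Teacher-forced run for collecting states; free-running once W_out is given."""
    x, y = np.zeros(n), np.zeros(out_size)
    states = []
    for d in targets:
        fb = np.atleast_1d(d) if W_out is None else y   # teacher forcing vs. own output
        x = np.tanh(W @ x + W_fb @ fb)
        if W_out is not None:
            y = W_out.T @ x                             # readout closes the feedback loop
        states.append(x.copy())
    return np.array(states)
```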
Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations
TLDR
A new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks, based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry.
The Cascade-Correlation Learning Architecture
TLDR
The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
Tutorial: Perspectives on Learning with RNNs
TLDR
An overview of current lines of research on learning with recurrent neural networks (RNNs) is presented, including understanding of algorithms, theoretical foundations, new efforts to circumvent vanishing gradients, new architectures, and fusion with other learning methods and dynamical systems theory.
Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication
We present a method for learning nonlinear systems, echo state networks (ESNs). ESNs employ artificial recurrent neural networks in a way that has recently been proposed independently as a learning…
Accelerating the convergence of the back-propagation method
TLDR
Considering the selection of weights in neural nets as a problem in classical nonlinear optimization theory, the rationale for algorithms seeking only those weights that produce the globally minimum error is reviewed and rejected.
Context discerning multifunction networks: reformulating fixed weight neural networks
  • R. Santiago
  • Computer Science
    2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)
  • 2004
TLDR
Using new insight, FWNNs are reformulated into a simpler structure, context discerning multifunction networks (CDMN), which poses an interesting model for contextual memory in neural systems.
Fixed-weight networks can learn
TLDR
It is concluded from the theorem that a system which exhibits learning behavior may exhibit no synaptic weight modifications, and it is demonstrated by transforming a backward error propagation network into a fixed-weight system.
Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights
  • D. Nguyen, B. Widrow
  • Computer Science
    1990 IJCNN International Joint Conference on Neural Networks
  • 1990
The authors describe how a two-layer neural network can approximate any nonlinear function by forming a union of piecewise linear segments. A method is given for picking initial weights for the…
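The widely cited Nguyen-Widrow scheme spreads the hidden units' active regions over the input range by rescaling randomly drawn weights to a target norm. The sketch below follows the commonly quoted form of that rescaling; the constants and function name are illustrative rather than taken from the original paper.

```python
import numpy as np

def nguyen_widrow_init(n_in, n_hidden, rng=None):
    """Spread hidden-unit linear regions over the input space (Nguyen-Widrow style)."""
    rng = rng or np.random.default_rng(0)
    w = rng.uniform(-0.5, 0.5, (n_hidden, n_in))
    beta = 0.7 * n_hidden ** (1.0 / n_in)                  # target norm for each weight row
    w *= beta / np.linalg.norm(w, axis=1, keepdims=True)   # rescale rows to norm beta
    b = rng.uniform(-beta, beta, n_hidden)                 # biases spread across the range
    return w, b
```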
...