Information processing in echo state networks at the edge of chaos

  • Joschka Boedecker, Oliver Obst, Joseph T. Lizier, Norbert Michael Mayer, Minoru Asada
  • Theory in Biosciences
We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between stable and unstable dynamical regimes, the so-called edge of chaos. The reasons for this maximized performance, however, are not completely understood. We adopt an information-theoretic framework and are for the first time able to quantify the…

Memory Capacity of Input-Driven Echo State Networks at the Edge of Chaos

This paper takes a closer look at short-term memory capacity, introduced by Jaeger in case of echo state networks, and investigates the effect of reservoir sparsity in this context.
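Jaeger's short-term memory capacity sums, over delays k, the squared correlation between a delayed input u(t-k) and a linear readout trained to reconstruct it. A minimal numpy sketch of this definition (illustrative only; the network size, spectral radius, and delay range below are arbitrary choices, not the paper's setup):

```python
import numpy as np

# Sketch of Jaeger's memory capacity: MC = sum_k r^2(u(t-k), y_k(t)),
# where y_k is a linear readout trained to reconstruct the k-step-delayed input.
rng = np.random.default_rng(0)
N, T, washout, max_delay = 50, 2000, 100, 20

W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9 (ordered side)
w_in = rng.uniform(-0.1, 0.1, size=N)

u = rng.uniform(-1, 1, size=T)                   # i.i.d. input, as in the MC task
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

X = states[washout:]                             # discard initial transient
mc = 0.0
for k in range(1, max_delay + 1):
    target = u[washout - k:T - k]                # u(t - k) aligned with X
    w_out, *_ = np.linalg.lstsq(X, target, rcond=None)
    y = X @ w_out
    mc += np.corrcoef(y, target)[0, 1] ** 2      # MC_k = squared correlation
print(f"memory capacity over {max_delay} delays: {mc:.2f}")
```

Each MC_k lies in [0, 1], so the total over 20 delays is bounded by 20; sparsity effects of the kind studied in the paper would be probed by varying the density of W.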

Neuroevolution on the edge of chaos

It is demonstrated that echo state networks with local connections combine the best of both worlds, the simplicity of random echo state networks and the performance of evolved networks, and it is shown that evolution tends to stay close to the ordered side of the edge of chaos.

Optimal Input Representation in Neural Systems at the Edge of Chaos

It is concluded that operating near criticality can also have, besides its usually alleged virtues, the advantage of allowing flexible, robust, and efficient input representations.

Evaluation of Information-Theoretic Measures in Echo State Networks on the Edge of Stability

  • M. Torda, I. Farkaš
  • Computer Science
    2018 International Joint Conference on Neural Networks (IJCNN)
  • 2018
This paper takes a closer look at information-theoretic measures of echo state networks using the Kraskov-Stögbauer-Grassberger estimator with optimized parameters, and investigates the effect of reservoir orthogonalization, which has been shown earlier to maximize memory capacity, on prediction accuracy and the above-mentioned measures.

Dynamics and Information Import in Recurrent Neural Networks

A completely new type of resonance phenomenon is found, which is called “Import Resonance” (IR), where the information import shows a maximum, i.e., a peak-like dependence on the coupling strength between the RNN and its external input.

Optimal short-term memory before the edge of chaos in driven random recurrent networks

The ability of discrete-time nonlinear recurrent neural networks to store time-varying small input signals is investigated with mean-field theory, and it is shown that the network's contribution to these short-term memory measures peaks before the edge of chaos.

Optimal Sequence Memory in Driven Random Networks

This work investigates the effect of a time-varying input on the onset of chaos and the resulting consequences for information processing, finding an exact condition that determines the transition from stable to chaotic dynamics and the sequential memory capacity in closed form.

Recurrence-based information processing in gene regulatory networks.

The computational capabilities of the transcriptional regulatory networks of five evolutionarily distant organisms are studied, suggesting that recurrent nonlinear dynamics is a key element for the processing of complex time-dependent information by cells.

Investigating echo state networks dynamics by means of recurrence analysis

This paper analyzes time series of neuron activations with recurrence plots (RPs) and recurrence quantification analysis (RQA), which permit visualization and characterization of high-dimensional dynamical systems.
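A recurrence plot thresholds pairwise distances between state-space points: R[i, j] = 1 when the states at times i and j are closer than a tolerance eps. A self-contained numpy sketch (illustrative only; the toy series, embedding parameters, and threshold choice are assumptions, not the paper's pipeline):

```python
import numpy as np

# Toy 1-D "activation" series standing in for a neuron's trajectory
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=200))

# Time-delay embedding (dimension m=3, delay tau=2), a common RP preprocessing step
m, tau = 3, 2
n = len(x) - (m - 1) * tau
emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

# Recurrence matrix: 1 where pairwise distance falls below eps
dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
eps = np.quantile(dist, 0.1)        # fix eps so ~10% of pairs are recurrent
R = (dist < eps).astype(int)

# Simplest RQA measure: recurrence rate (fraction of recurrent pairs)
rr = R.mean()
print(f"recurrence rate: {rr:.3f}")
```

Richer RQA measures (determinism, laminarity, trapping time) are computed from the diagonal and vertical line structures of R rather than from its raw density.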

Determination of the Edge of Criticality in Echo State Networks Through Fisher Information Maximization

This paper takes advantage of a recently developed nonparametric estimator of the Fisher information matrix and provides a method to determine the critical region of echo state networks (ESNs), a particular class of recurrent networks.

Real-Time Computation at the Edge of Chaos in Recurrent Neural Networks

It is shown that only near the critical boundary can recurrent networks of threshold gates perform complex computations on time series, which strongly supports conjectures that dynamical systems that are capable of doing complex computational tasks should operate near the edge of chaos.

Dynamical synapses causing self-organized criticality in neural networks

It is demonstrated analytically and numerically that, by assuming (biologically more realistic) dynamical synapses in a spiking neural network, neuronal avalanches turn from an exceptional phenomenon into typical and robust self-organized critical behaviour, provided the total resources of neurotransmitter are sufficiently large.

SORN: A Self-Organizing Recurrent Neural Network

This work introduces SORN, a self-organizing recurrent network that combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning.

Edge of chaos and prediction of computational performance for neural circuit models

Connectivity, Dynamics, and Memory in Reservoir Computing with Binary and Analog Neurons

Investigating the influence of network connectivity (parameterized by the neuron in-degree) on a family of network models that interpolates between analog and binary networks reveals that the phase transition between ordered and chaotic behavior in binary circuits differs qualitatively from that in analog circuits, which explains the decreased computational performance observed in densely connected binary circuits.

Initialization and self-organized optimization of recurrent neural network connectivity

A general network initialization method using permutation matrices is studied, and a new unsupervised learning rule based on intrinsic plasticity (IP) is derived to improve network performance in a self-organized way.

Computation at the edge of chaos: Phase transitions and emergent computation

Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations

A new computational model for real-time computing on time-varying input is proposed as an alternative to paradigms based on Turing machines or attractor neural networks; it is based on principles of high-dimensional dynamical systems combined with statistical learning theory, and can be implemented on generic evolved or found recurrent circuitry.

The Information Dynamics of Phase Transitions in Random Boolean Networks

This work uses a recently published framework to characterize distributed computation in terms of its underlying information dynamics (information storage, information transfer, and information modification), and finds that information storage and coherent information transfer are maximized on either side of the critical point.

Information flow in local cortical networks is not democratic

Surprisingly, the analysis revealed wide differences in the amount of information flowing into and out of different neurons in the network, indicating that information flow is not "democratically" distributed.