# Learning Continuous Chaotic Attractors with a Reservoir Computer

```bibtex
@article{Smith2022LearningCC,
  title   = {Learning Continuous Chaotic Attractors with a Reservoir Computer},
  author  = {Lindsay M. Smith and Jason Z. Kim and Zhixin Lu and Danielle S. Bassett},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2110.08631}
}
```

Lindsay M. Smith,¹ Jason Z. Kim,² Zhixin Lu,² and Dani S. Bassett¹,²,³,⁴,⁵,⁶

1) Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, PA 19104 USA
2) Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104 USA
3) Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104 USA
4) Department of Psychiatry, University of Pennsylvania, Philadelphia, PA 19104 USA
5) Department of Neurology, University of…

## References

Showing 1–10 of 61 references.

Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data.

- Computer Science, Physics
- Chaos, 2017

This work uses recent advances in the machine learning area known as "reservoir computing" to formulate a method for model-free estimation, from data, of the Lyapunov exponents of a chaotic process, by forming a modified autonomous reservoir.
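As a minimal, assumption-level sketch of the reservoir-computing approach this entry describes (not the paper's exact construction): a random recurrent network is driven by a signal while a linear readout is trained by ridge regression, and the loop is then closed so the reservoir runs autonomously on its own predictions. All parameter values below are illustrative, and a sine wave stands in for a chaotic signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative reservoir: random recurrent weights scaled to spectral
# radius 0.9, with small random input weights.
N = 200
A = rng.normal(0.0, 1.0, (N, N))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))
W_in = rng.uniform(-0.1, 0.1, N)

# Teacher signal: a sine wave stands in for a chaotic trajectory.
T = 2000
u = np.sin(0.1 * np.arange(T + 1))

# Open-loop (driven) phase: record the reservoir states.
r = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    r = np.tanh(A @ r + W_in * u[t])
    states[t] = r

# Ridge-regression readout mapping the state at time t to u[t + 1].
beta = 1e-6
W_out = np.linalg.solve(states.T @ states + beta * np.eye(N),
                        states.T @ u[1:])

# Closed-loop (autonomous) phase: feed predictions back as input.
preds = []
u_hat = states[-1] @ W_out          # prediction of u[T]
for _ in range(100):
    preds.append(u_hat)
    r = np.tanh(A @ r + W_in * u_hat)
    u_hat = r @ W_out
```

In the closed-loop phase the trained readout replaces the external drive, which is what makes the reservoir an autonomous dynamical system whose properties (e.g. Lyapunov exponents) can then be examined in place of the original process.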

Learning Continuous Attractors in Recurrent Networks

- Mathematics, Computer Science
- NIPS, 1997

If an object has a continuous family of instantiations, it should be represented by a continuous attractor, and this idea is illustrated with a network that learns to complete patterns.

Delay learning and polychronization for reservoir computing

- Computer Science
- Neurocomputing, 2008

This work proposes a multi-timescale learning rule for spiking neuron networks, in line with the recently emerging field of reservoir computing, and emphasizes that polychronization can be used as a tool for exploiting the computational power of synaptic delays and for monitoring the topology and activity of a spiking neuron network.

Teaching recurrent neural networks to infer global temporal structure from local examples

- Computer Science
- Nat. Mach. Intell., 2021

It is demonstrated that a recurrent neural network (RNN) can learn to modify its representation of complex information using only examples, and the associated learning mechanism is explained with new theory.

Synchronization of chaotic systems.

- Computer Science, Medicine
- Chaos, 2015

This review establishes the historical timeline of the topic back to the earliest known paper and shows that building synchronizing systems leads naturally to engineering more complex systems whose constituents are chaotic, but which can be tuned to output various chaotic signals.

Reservoir computing approaches to recurrent neural network training

- Computer Science
- Comput. Sci. Rev., 2009

This review systematically surveys both current ways of generating and adapting reservoirs and of training different types of readouts, and offers a natural conceptual classification of the techniques that transcends the boundaries of the current "brand names" of reservoir methods.

Learning a Continuous Attractor Neural Network from Real Images

- Computer Science
- ICONIP, 2017

This study proposes a biologically plausible scheme by which the neural system can learn a CANN (continuous attractor neural network) from real images, adopting a modified Hebb rule that encodes the correlation between neural representations into the connectivity of the network.

Neural networks and physical systems with emergent collective computational abilities.

- Computer Science, Medicine
- Proceedings of the National Academy of Sciences of the United States of America, 1982

A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
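A hedged, minimal sketch of the content-addressable memory idea in this entry (the Hebbian outer-product rule, network size, and corruption level below are illustrative choices, not the paper's exact construction): store binary patterns in a Hopfield-style network, then recover a stored pattern from a corrupted cue.

```python
import numpy as np

rng = np.random.default_rng(1)

# Store two random binary (+/-1) patterns in a 64-unit network.
N = 64
patterns = rng.choice([-1, 1], size=(2, N))

# Hebbian weights: sum of outer products, zero self-connections.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

# Corrupt the first pattern by flipping 10 of its bits.
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1

# Synchronous sign updates until the state stops changing.
state = cue
for _ in range(20):
    new = np.sign(W @ state)
    new[new == 0] = 1          # break ties toward +1
    if np.array_equal(new, state):
        break
    state = new
```

The "sufficient size" condition in the entry shows up here as the corruption level: with too many flipped bits, the cue falls outside the stored pattern's basin of attraction and retrieval fails.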

An approach to reservoir computing design and training

- Computer Science
- Expert Syst. Appl., 2013

RCDESIGN combines an evolutionary algorithm with reservoir computing and simultaneously looks for the best values of parameters, topology and weight matrices without rescaling the reservoir matrix by the spectral radius.

Invertible generalized synchronization: A putative mechanism for implicit learning in neural systems.

- Medicine, Computer Science
- Chaos, 2020

This work proposes a general and biologically feasible learning framework that utilizes invertible generalized synchronization (IGS), supporting the notion that biological neural networks can learn the dynamic nature of their environment through the mechanism of IGS.
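As a hedged illustration of the driven-synchronization idea underlying generalized synchronization (a toy echo-state-style example with illustrative parameters, not the paper's IGS construction): two copies of the same contracting driven network, started from different initial states, converge to the same drive-dependent trajectory, so the response state becomes a function of the drive.

```python
import numpy as np

rng = np.random.default_rng(2)

# Shared response system: recurrent weights scaled so the spectral
# norm is below one, which guarantees the driven map is a contraction.
N = 100
A = rng.normal(0.0, 1.0, (N, N))
A *= 0.8 / np.linalg.norm(A, 2)
W_in = rng.uniform(-0.5, 0.5, N)

# Arbitrary drive signal (in the paper's setting, a chaotic time series).
u = rng.normal(0.0, 1.0, 500)

# Two responses with different initial conditions, identical drive.
r1 = rng.normal(0.0, 1.0, N)
r2 = rng.normal(0.0, 1.0, N)
gap = [np.linalg.norm(r1 - r2)]
for t in range(len(u)):
    r1 = np.tanh(A @ r1 + W_in * u[t])
    r2 = np.tanh(A @ r2 + W_in * u[t])
    gap.append(np.linalg.norm(r1 - r2))
```

Because tanh is 1-Lipschitz and the spectral norm of `A` is 0.8, the distance between the two trajectories shrinks by at least a factor of 0.8 per step, so the initial-condition dependence washes out, which is the precondition for a well-defined synchronization map.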