Learning Continuous Chaotic Attractors with a Reservoir Computer

@article{Smith2022LearningCC,
  title={Learning Continuous Chaotic Attractors with a Reservoir Computer},
  author={Lindsay M. Smith and Jason Z. Kim and Zhixin Lu and Danielle S. Bassett},
  journal={ArXiv},
  year={2022},
  volume={abs/2110.08631}
}
Lindsay M. Smith,1 Jason Z. Kim,2 Zhixin Lu,2 and Dani S. Bassett1,2,3,4,5,6

1) Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, PA 19104, USA
2) Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA
3) Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA
4) Department of Psychiatry, University of Pennsylvania, Philadelphia, PA 19104, USA
5) Department of Neurology, University of…


References

Showing 1–10 of 61 references
Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data
This work uses recent advances in the machine-learning area known as "reservoir computing" to formulate a method for model-free estimation of the Lyapunov exponents of a chaotic process from data, by forming a modified autonomous reservoir.
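As an illustration of the reservoir-computing recipe this entry refers to, the sketch below trains a minimal echo state network on a Lorenz time series and then closes the feedback loop so the trained reservoir runs autonomously. The reservoir size, spectral radius, ridge penalty, and integration step are illustrative assumptions, not values taken from the cited paper.

```python
# Minimal echo-state-network (reservoir computer) sketch: drive a fixed random
# reservoir with a chaotic time series, fit a linear readout by ridge regression,
# then feed the readout back in so the reservoir runs as an autonomous system.
import numpy as np

rng = np.random.default_rng(0)

# Generate Lorenz training data by simple Euler integration.
def lorenz_series(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty((n_steps, 3))
    for t in range(n_steps):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        out[t] = x
    return out

data = lorenz_series(6000)

# Fixed random reservoir (size and spectral radius are assumed values).
N = 500
W_in = rng.uniform(-0.5, 0.5, size=(N, 3))
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9

def step(r, u):
    return np.tanh(W @ r + W_in @ u)

# Drive the reservoir with the data and record its states.
r = np.zeros(N)
states = np.empty((len(data) - 1, N))
for t in range(len(data) - 1):
    r = step(r, data[t])
    states[t] = r

# Ridge-regression readout that predicts the next input from the reservoir state,
# after discarding an initial washout transient.
targets = data[1:]
washout, ridge = 100, 1e-6
X, Y = states[washout:], targets[washout:]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y).T

# Close the loop: feed predictions back in and run autonomously.
u = data[-1]
autonomous = []
for _ in range(2000):
    r = step(r, u)
    u = W_out @ r
    autonomous.append(u)
autonomous = np.array(autonomous)   # should trace a Lorenz-like attractor
print(autonomous[:5])
```

Once the loop is closed, the autonomous reservoir can be analyzed like any other dynamical system, which is what makes model-free estimation of quantities such as Lyapunov exponents possible.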
Learning Continuous Attractors in Recurrent Networks
  • H. Seung
  • Mathematics, Computer Science
  • NIPS, 1997
If an object has a continuous family of instantiations, it should be represented by a continuous attractor, and this idea is illustrated with a network that learns to complete patterns.
Delay learning and polychronization for reservoir computing
A multi-timescale learning rule for spiking neuron networks, in line with the recently emerging field of reservoir computing, emphasizes that polychronization can be used as a tool for exploiting the computational power of synaptic delays and for monitoring the topology and activity of a spiking neuron network.
Teaching recurrent neural networks to infer global temporal structure from local examples
It is demonstrated that a recurrent neural network (RNN) can learn to modify its representation of complex information using only examples, and the associated learning mechanism is explained with new theory.
Synchronization of chaotic systems
The historical timeline of this topic is traced back to the earliest known paper, and it is shown that building synchronizing systems leads naturally to engineering more complex systems whose constituents are chaotic, but which can be tuned to output various chaotic signals.
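For concreteness, the sketch below shows the textbook drive-response construction (in the spirit of Pecora and Carroll), in which a partial copy of the Lorenz system, driven by the x signal of a separate chaotic Lorenz system, converges onto the drive's trajectory. It is a generic illustration of chaotic synchronization under standard parameter choices, not code from the cited review.

```python
# Drive-response synchronization sketch: the (y, z) subsystem of a response
# Lorenz system is driven by the x signal of a separate chaotic Lorenz system
# and converges onto its trajectory despite a different initial condition.
import numpy as np

sigma, rho, beta, dt = 10.0, 28.0, 8.0 / 3.0, 0.001

drive = np.array([1.0, 1.0, 1.0])     # full Lorenz system (x, y, z)
response = np.array([-5.0, 20.0])     # response subsystem (y', z')

errors = []
for t in range(50000):
    x, y, z = drive
    # Euler step for the drive system.
    drive = drive + dt * np.array([sigma * (y - x),
                                   x * (rho - z) - y,
                                   x * y - beta * z])
    # The response subsystem uses the drive's x in place of its own.
    yr, zr = response
    response = response + dt * np.array([x * (rho - zr) - yr,
                                         x * yr - beta * zr])
    errors.append(np.hypot(y - yr, z - zr))

print(errors[0], errors[-1])   # the error shrinks toward zero as the systems synchronize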
Reservoir computing approaches to recurrent neural network training
This review systematically surveys both current ways of generating/adapting the reservoirs and training different types of readouts, and offers a natural conceptual classification of the techniques, which transcends boundaries of the current "brand names" of reservoir methods.
Learning a Continuous Attractor Neural Network from Real Images
This study proposes a biologically plausible scheme for the neural system to learn a CANN from real images, adopting a modified Hebb rule that encodes the correlation between neural representations into the connection structure of the network.
Neural networks and physical systems with emergent collective computational abilities
  • J. Hopfield
  • Computer Science, Medicine
  • Proceedings of the National Academy of Sciences of the United States of America, 1982
A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
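A minimal sketch of the content-addressable memory described here, assuming the classic Hebbian outer-product storage rule and asynchronous sign updates (pattern count, network size, and corruption level are illustrative choices): memories are stored in a symmetric weight matrix, and iterating the update rule recovers a full stored pattern from a corrupted subpart.

```python
# Hopfield-network sketch of content-addressable memory with Hebbian storage.
import numpy as np

rng = np.random.default_rng(1)
n = 100                                        # number of binary (+/-1) units
patterns = rng.choice([-1, 1], size=(5, n))    # five random memories (well under capacity)

# Hebbian storage: sum of outer products of the stored patterns, zero diagonal.
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

def recall(state, n_sweeps=10):
    """Asynchronously update units in random order until the network settles."""
    state = state.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt 20% of one stored pattern and let the network complete it.
cue = patterns[0].copy()
flip = rng.choice(n, size=20, replace=False)
cue[flip] *= -1

recovered = recall(cue)
print("overlap with stored memory:", (recovered @ patterns[0]) / n)   # close to 1.0
```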
An approach to reservoir computing design and training
RCDESIGN combines an evolutionary algorithm with reservoir computing and simultaneously looks for the best values of parameters, topology, and weight matrices without rescaling the reservoir matrix by the spectral radius.
Invertible generalized synchronization: A putative mechanism for implicit learning in neural systems
A general and biologically feasible learning framework is presented that utilizes invertible generalized synchronization (IGS), supporting the notion that biological neural networks can learn the dynamic nature of their environment through the mechanism of IGS.