A dual-memory architecture for reinforcement learning on neuromorphic platforms

@article{OlinAmmentorp2021ADA,
  title={A dual-memory architecture for reinforcement learning on neuromorphic platforms},
  author={Wilkie Olin-Ammentorp and Yury Sokolov and Maxim Bazhenov},
  journal={Neuromorphic Computing and Engineering},
  year={2021},
  volume={1}
}
Reinforcement learning (RL) is a foundation of learning in biological systems and provides a framework for addressing numerous challenges in real-world artificial intelligence applications. Efficient implementations of RL techniques could allow agents deployed in edge use cases to gain novel abilities, such as improved navigation, understanding of complex situations, and critical decision making. Toward this goal, we describe a flexible architecture to carry out RL on neuromorphic platforms. This…
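As a minimal, generic illustration of the kind of RL update such an architecture must support (a sketch only, not the paper's dual-memory method), the following Python snippet runs tabular one-step Q-learning against a stand-in environment; the state/action counts, reward rule, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes and hyperparameters (not taken from the paper).
n_states, n_actions = 16, 4
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate

rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))      # tabular action-value estimates

def step(state, action):
    """Stand-in environment: random transitions, reward for reaching the last state."""
    next_state = int(rng.integers(n_states))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    done = reward > 0
    return next_state, reward, done

state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    action = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[state]))
    next_state, reward, done = step(state, action)
    # one-step temporal-difference (Q-learning) update
    target = reward + (0.0 if done else gamma * np.max(Q[next_state]))
    Q[state, action] += alpha * (target - Q[state, action])
    state = 0 if done else next_state
```

The same update rule can be realized with different memory substrates; how such value estimates are represented and updated on neuromorphic hardware is the concern of the paper, which this toy table does not capture.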

References

Showing 1–10 of 68 references
Sutton R S and Barto A G 2018 Reinforcement Learning: An Introduction 2nd edn (Cambridge, MA: MIT Press) p 552. Available from: http://incompleteideas.net/book/the-book.html
A toolbox for neuromorphic sensing in robotics
This initiative is meant to stimulate and facilitate the robotic integration of neuromorphic AI, offering the opportunity to adapt traditional off-the-shelf sensors to spiking neural networks within one of the most widely used robotics frameworks, ROS.
Advancing Neuromorphic Computing With Loihi: A Survey of Results and Outlook
This survey reviews results obtained to date with Loihi across the major algorithmic domains under study, including deep learning approaches and novel approaches that aim to more directly harness the key features of spike-based neuromorphic hardware.
Comparison of Artificial and Spiking Neural Networks on Digital Hardware
It is shown that most rate-coded spiking network implementations will not be more energy- or resource-efficient than the original ANN, concluding that more imaginative uses of spikes are required to displace conventional ANNs as the dominant framework for neural computation.
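To make the rate-coding overhead concrete, the sketch below (a toy under assumed encoding parameters, not code from the cited comparison) encodes a single normalized ANN activation as a Bernoulli spike train and recovers it from the firing rate; the many timesteps needed per value illustrate, informally, why rate-coded implementations struggle to be more efficient than the original ANN.

```python
import numpy as np

rng = np.random.default_rng(1)

def rate_code(activation, n_steps=200):
    """Encode a normalized activation in [0, 1] as a Bernoulli spike train."""
    p = float(np.clip(activation, 0.0, 1.0))
    return rng.random(n_steps) < p        # boolean spike train of length n_steps

activation = 0.3
spikes = rate_code(activation)
decoded = spikes.mean()                   # mean firing rate approximates the activation
print(f"original={activation:.2f}  decoded={decoded:.2f}  spikes used={spikes.sum()}")
```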
A comparison of Vector Symbolic Architectures
A taxonomy of available binding/unbinding operations is created, and an important ramification of non-self-inverse binding operations is demonstrated with an example from analogical reasoning, to support the selection of an appropriate VSA for a particular task.
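The self-inverse versus non-self-inverse distinction can be shown in a few lines. The sketch below uses generic VSA operations (not code from the cited comparison, and with an arbitrary dimensionality): an element-wise product over bipolar hypervectors undoes itself exactly, whereas HRR-style circular convolution needs a separate correlation step to unbind and only recovers an approximation.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4096                                   # hypervector dimensionality (illustrative)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Self-inverse binding: bipolar (MAP-style) vectors, element-wise product.
x = rng.choice([-1.0, 1.0], d)
y = rng.choice([-1.0, 1.0], d)
z = x * y                                  # bind
assert np.allclose(x * z, y)               # binding with x again unbinds exactly

# Non-self-inverse binding: HRR-style circular convolution via the FFT.
a = rng.normal(0.0, 1.0 / np.sqrt(d), d)
b = rng.normal(0.0, 1.0 / np.sqrt(d), d)
c = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real                 # bind
b_hat = np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))).real    # unbind by circular correlation
print(cosine(b_hat, b))                    # well above chance, but only approximate recovery
rebound = np.fft.ifft(np.fft.fft(a) * np.fft.fft(c)).real
print(cosine(rebound, b))                  # near zero: convolving again does NOT unbind
```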
A complementary learning systems approach to temporal difference learning
A novel algorithm, Complementary Temporal Difference Learning (CTDL), is proposed, which combines a DNN with a self-organizing map (SOM) to obtain the benefits of both a 'neocortical' and a 'hippocampal' system.
A system hierarchy for brain-inspired computing.
This study proposes 'neuromorphic completeness', which relaxes the requirement for hardware completeness, along with a corresponding system hierarchy consisting of a Turing-complete software-abstraction model and a versatile abstract neuromorphic architecture.
Deep Reinforcement Learning for Autonomous Driving: A Survey
This review summarises deep reinforcement learning algorithms, provides a taxonomy of automated-driving tasks where (D)RL methods have been employed, and highlights the key challenges, both algorithmic and in deploying real-world autonomous driving agents, the role of simulators in training agents, and methods to evaluate, test, and robustify existing solutions in RL and imitation learning.
Hippocampal replay of experience at real-world speeds
A state-space model is developed that uses a combination of movement dynamics at different speeds to capture the spatial content and time evolution of replay during sharp-wave ripple events, finding that the large majority of replay events contain spatially coherent, interpretable content.
Online Few-Shot Gesture Learning on a Neuromorphic Processor
The Surrogate-gradient Online Error-triggered Learning (SOEL) system for online few-shot learning on neuromorphic processors combines transfer learning with principles of computational neuroscience and deep learning, and shows that partially trained deep spiking neural networks implemented on neuromorphic hardware can rapidly adapt online to new classes of data within a domain.