Neko: a Library for Exploring Neuromorphic Learning Rules

@article{zhao2021neko,
  title={Neko: a Library for Exploring Neuromorphic Learning Rules},
  author={Zixuan Zhao and Nathan Wycoff and Neil Getty and Rick L. Stevens and Fangfang Xia},
  journal={International Conference on Neuromorphic Systems 2021},
}
The field of neuromorphic computing is in a period of active exploration. While many tools have been developed to simulate neuronal dynamics or convert deep networks to spiking models, general software libraries for learning rules remain underexplored. This is partly due to the diverse, challenging nature of efforts to design new learning rules, which range from encoding methods to gradient approximations, from population approaches that mimic the Bayesian brain to constrained learning… 


Easy and efficient spike-based Machine Learning with mlGeNN

It is found that not only is mlGeNN vastly more convenient to use than the lower-level PyGeNN interface, but the freedom to rapidly prototype different network architectures also provides an unprecedented overview of how e-prop compares with other recently published results on the DVS gesture dataset across architectural details.

A solution to the learning dilemma for recurrent networks of spiking neurons

This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning, and suggests a route to powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
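The core idea of e-prop is that each synapse maintains an eligibility trace computed forward in time, which is later combined with a per-neuron learning signal, avoiding the backward-in-time pass of BPTT. The following is a heavily simplified NumPy sketch of that mechanism only; the leak factor, the random inputs, and the error-based learning signal are illustrative stand-ins, not the paper's actual network or loss.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_rec, T = 3, 2, 50
alpha = 0.9                      # membrane leak factor (illustrative)
W_in = rng.normal(scale=0.5, size=(n_rec, n_in))

v = np.zeros(n_rec)              # membrane potentials
trace = np.zeros((n_rec, n_in))  # eligibility traces, updated forward in time
dW = np.zeros_like(W_in)         # accumulated weight update

for t in range(T):
    x = rng.random(n_in)                    # input at time t
    v = alpha * v + W_in @ x                # leaky integration
    # Eligibility trace: low-pass-filtered presynaptic activity,
    # maintained online (no backpropagation through time needed).
    trace = alpha * trace + x[None, :]
    # Hypothetical learning signal: deviation from an arbitrary target,
    # a stand-in for the paper's per-neuron error term.
    learning_signal = v - 0.5
    dW -= 0.01 * learning_signal[:, None] * trace

W_in += dW
```

The point of the sketch is the factorization: the trace depends only on locally available presynaptic history, so the update can run online on neuromorphic hardware.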

The MNIST database of handwritten digits


Memristor Crossbar-Based Neuromorphic Computing System: A Case Study

The results show that the hardware-based training scheme proposed in the paper can alleviate, and even cancel out, the majority of the noise impact when applied to brain-state-in-a-box (BSB) neural networks.

Practical Variational Inference for Neural Networks

This paper introduces an easy-to-implement stochastic variational method (or, equivalently, a minimum description length loss function) that can be applied to most neural networks, and revisits several common regularisers from a variational perspective.
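The stochastic variational approach fits a Gaussian distribution over each weight by sampling weights through the reparameterization trick and following Monte Carlo gradients of the expected loss. Below is a minimal single-weight sketch of that idea; the target N(2, 0.5²) "posterior" and all constants are illustrative, and the KL/prior term of the full variational objective is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

def softplus(z):
    return np.log1p(np.exp(z))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Variational posterior q(w) = N(mu, sigma^2) over one weight;
# sigma = softplus(rho) keeps the standard deviation positive.
mu, rho = 0.0, -1.0
lr = 0.05

for _ in range(2000):
    eps = rng.normal()
    sigma = softplus(rho)
    w = mu + sigma * eps       # reparameterization trick: w ~ q(w)
    dw = (w - 2.0) / 0.25      # gradient of 0.5*(w - 2)^2 / 0.25 wrt w
    # Pathwise Monte Carlo gradients of the expected loss:
    mu -= lr * dw
    rho -= lr * dw * eps * sigmoid(rho)   # chain rule through softplus
```

After training, `mu` sits near the target mean of 2 while `sigma` shrinks, since without a KL term nothing rewards keeping the posterior wide.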

A tutorial on adaptive MCMC

This work proposes a series of novel adaptive algorithms that prove robust and reliable in practice, and reviews the useful framework of stochastic approximation, which allows one to systematically optimise commonly used adaptation criteria.
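A standard instance of stochastic-approximation adaptation is tuning a random-walk Metropolis proposal scale toward a target acceptance rate with a Robbins–Monro schedule. The sketch below uses a standard-normal target and a 0.44 acceptance goal purely for illustration; it is not any specific algorithm from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    return -0.5 * x * x          # unnormalized log N(0, 1), illustrative

x, log_step = 0.0, 0.0
target_accept = 0.44             # common target for 1-D random walks
samples = []

for i in range(1, 5001):
    prop = x + np.exp(log_step) * rng.normal()
    accept = np.log(rng.random()) < log_target(prop) - log_target(x)
    if accept:
        x = prop
    # Robbins–Monro update: raise the step size when accepting too
    # often, lower it when accepting too rarely; the 1/i^0.6 gain
    # decays so the adaptation "washes out" and the chain stays valid.
    log_step += (float(accept) - target_accept) / i ** 0.6
    samples.append(x)

samples = np.array(samples[1000:])   # discard burn-in
```

The decaying gain is the key design choice: diminishing adaptation is one of the sufficient conditions for the adapted chain to retain the correct stationary distribution.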

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

A new theoretical framework is developed that casts dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, mitigating the problem of representing uncertainty in deep learning without sacrificing computational efficiency or test accuracy.
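In practice this framework ("MC dropout") amounts to leaving dropout active at test time and averaging many stochastic forward passes: the sample mean is the prediction and the sample variance estimates model uncertainty. A minimal NumPy sketch, with a hypothetical random two-layer network standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network; random weights stand in for trained ones.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, drop_rate=0.5):
    """One stochastic pass; dropout stays ACTIVE at test time."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_rate  # fresh dropout mask per pass
    h = h * mask / (1.0 - drop_rate)         # inverted-dropout scaling
    return h @ W2

x = rng.normal(size=(1, 4))

# T stochastic passes approximate the posterior predictive distribution.
preds = np.array([forward(x) for _ in range(200)]).ravel()
mean, var = preds.mean(), preds.var()
```

Each pass samples a different thinned network, which the paper interprets as a draw from an approximate posterior, so the spread of `preds` is an uncertainty estimate rather than mere noise.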

The Bayesian brain: the role of uncertainty in neural coding and computation

A distributional code for value in dopamine-based reinforcement learning

An account of dopamine-based reinforcement learning inspired by recent artificial intelligence research on distributional reinforcement learning is proposed, suggesting that the brain represents possible future rewards not as a single mean of stochastic outcomes, as in the canonical model, but instead as a probability distribution.
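The distributional mechanism can be illustrated with a population of value predictors whose updates weight positive and negative prediction errors asymmetrically, so each unit converges to a different expectile of the reward distribution rather than its mean. The Bernoulli reward task, learning rate, and asymmetry values below are illustrative, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Population of value predictors with different optimism levels.
n = 7
V = np.zeros(n)
tau = np.linspace(0.1, 0.9, n)   # relative weight on positive errors

for _ in range(20000):
    r = float(rng.random() < 0.3)          # reward: 1 w.p. 0.3, else 0
    delta = r - V                          # per-unit prediction errors
    # Asymmetric update: optimistic units (high tau) scale up positive
    # errors and settle at high expectiles; pessimistic units the reverse.
    V += 0.01 * np.where(delta > 0, tau * delta, (1 - tau) * delta)
```

The resulting spread of `V` across the population encodes the whole reward distribution, which is the sense in which the brain would represent "a probability distribution" of future reward.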

Efficient and self-adaptive in-situ learning in multilayer memristor neural networks

This work monolithically integrates hafnium oxide-based memristors with a foundry-made transistor array into a multilayer memristor neural network and achieves competitive classification accuracy on a standard machine learning dataset.

Synaptic plasticity as Bayesian inference

This work proposes that synapses compute probability distributions over weights, not just point estimates, and derives a new set of synaptic learning rules that are shown to speed up learning in neural networks.
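A synapse that tracks a mean and a variance can update Kalman-style, with uncertain synapses learning fastest, which is one way such distribution-aware rules speed up learning. The sketch below is a generic single-synapse illustration under assumed Gaussian noise; the target weight, noise level, and update count are arbitrary, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Each synapse stores a mean and a variance over its weight value.
mu, var = 0.0, 1.0
obs_noise = 0.25                 # assumed variance of the error feedback
true_w = 1.5                     # illustrative target weight

for _ in range(50):
    # Noisy evidence about the weight (stand-in for an error signal).
    y = true_w + rng.normal(scale=np.sqrt(obs_noise))
    # Kalman-style update: the gain, and hence the effective learning
    # rate, is large while the synapse is uncertain and shrinks as
    # evidence accumulates.
    gain = var / (var + obs_noise)
    mu += gain * (y - mu)
    var *= 1.0 - gain
```

The per-synapse learning rate thus adapts automatically: early updates are large, late updates are conservative, without any hand-tuned schedule.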