Training Multilayer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation

@article{Anwani2020TrainingMS,
  title={Training Multilayer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation},
  author={Navin Anwani and Bipin Rajendran},
  journal={ArXiv},
  year={2020},
  volume={abs/1811.10678}
}

Citations of this paper

Back-Propagation Learning in Deep Spike-By-Spike Networks
TLDR
A learning rule for feed-forward SbS networks is derived that approaches the benchmark results of ANNs without extensive parameter optimization; it is envisioned to provide a new basis for research in neuroscience and for technical applications, especially when implemented on specialized computational hardware.
Biologically plausible learning in a deep recurrent spiking network
TLDR
A framework of mutually coupled local circuits of spiking neurons is proposed that meets a fundamental property of the brain and enables investigation of network architectures far beyond current DCNs, including large-scale models of cortex in which areas composed of many local circuits form a complex cyclic network.
Exploring the Effects of Caputo Fractional Derivative in Spiking Neural Network Training
TLDR
An extensive investigation of performance improvements is carried out via a case study of small-scale networks using derivative orders in the unit interval; the statistics show a range of derivative orders for which Caputo-derivative-based training outperforms first-order gradient descent with high confidence.
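
For intuition, fractional-order training of this kind replaces the first derivative in the descent step with a Caputo derivative of order alpha in (0, 1]. A minimal sketch on a toy quadratic loss, using the known closed form of the Caputo derivative of w^2; all names and constants here are illustrative and not taken from the paper:

  import math

  def caputo_grad_quadratic(w, alpha):
      # Closed-form Caputo derivative (lower terminal 0) of L(w) = w**2 for
      # order alpha in (0, 1]: D^alpha w^2 = 2 * w**(2 - alpha) / Gamma(3 - alpha).
      # At alpha = 1 this reduces to the ordinary gradient 2 * w.
      return 2.0 * w ** (2.0 - alpha) / math.gamma(3.0 - alpha)

  w0, eta, steps = 2.0, 0.1, 50
  for alpha in (1.0, 0.8):
      w = w0
      for _ in range(steps):
          w -= eta * caputo_grad_quadratic(w, alpha)   # fractional-order descent step
      print(f"alpha = {alpha}: w after {steps} steps = {w:.6f}")

Since alpha = 1 recovers ordinary gradient descent, the derivative order interpolates away from the first-order baseline, which is the comparison the paper's statistics address.
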
Memristors—From In‐Memory Computing, Deep Learning Acceleration, and Spiking Neural Networks to the Future of Neuromorphic and Bio‐Inspired Computing
TLDR
The case for a novel beyond-complementary metal-oxide-semiconductor (CMOS) technology, memristors, as a potential solution for the implementation of power-efficient in-memory computing, DL accelerators, and spiking neural networks is reviewed.

References

SHOWING 1-10 OF 36 REFERENCES
NormAD - Normalized Approximate Descent based supervised learning rule for spiking neurons
TLDR
It is shown that NormAD provides faster convergence than state-of-the-art supervised learning algorithms for spiking neurons, with the gain in convergence rate often exceeding a factor of 10.
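
For concreteness, a minimal discrete-time sketch of a NormAD-style update for a single LIF neuron, assuming a simple exponential stand-in for the LIF impulse response (the paper derives a more careful kernel); function names and constants are illustrative:

  import numpy as np

  def normad_update(x_spikes, s_desired, s_observed, dt=1e-4, tau=10e-3, r=1e-3):
      # x_spikes: (n_inputs, T) binary input spike trains; s_desired, s_observed:
      # (T,) binary desired and observed output spike trains of one LIF neuron.
      n_inputs, T = x_spikes.shape
      t = np.arange(T) * dt
      kernel = np.exp(-t / tau)            # crude stand-in for the LIF impulse response
      d_hat = np.array([np.convolve(x, kernel)[:T] for x in x_spikes])
      e = s_desired.astype(float) - s_observed.astype(float)   # spike-train error
      dw = np.zeros(n_inputs)
      for k in np.flatnonzero(e):          # only instants with a spike mismatch
          norm = np.linalg.norm(d_hat[:, k])
          if norm > 0:
              dw += r * e[k] * d_hat[:, k] / norm   # normalized descent direction
      return dw

Normalizing the descent direction at each error instant is what makes the step size insensitive to the scale of the filtered inputs, which is the source of the reported convergence gain.
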
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
TLDR
A new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed; comprehensive experimental results demonstrate that it outperforms traditional multilayer SNN algorithms in terms of learning efficiency and parameter sensitivity.
Training Deep Spiking Neural Networks Using Backpropagation
TLDR
A novel technique is introduced that treats the membrane potentials of spiking neurons as differentiable signals, with discontinuities at spike times regarded as noise; this enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks but works directly on spike signals and membrane potentials.
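
One common way to realize this idea is a surrogate derivative for the spike nonlinearity, used in place of the ill-defined true gradient during the backward pass. A minimal sketch with a fast-sigmoid surrogate, which is a standard choice rather than necessarily the exact function used in the paper:

  import numpy as np

  def surrogate_spike_grad(v, v_th=1.0, beta=10.0):
      # Smooth stand-in for d(spike)/d(membrane potential), peaked at the
      # threshold v_th; width is controlled by beta.
      return 1.0 / (1.0 + beta * np.abs(v - v_th)) ** 2

In the backward pass, dL/dv is taken as dL/ds times this surrogate, after which the chain rule proceeds exactly as in a conventional deep network.
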
SPAN: Spike Pattern Association Neuron for Learning Spatio-Temporal Spike Patterns
TLDR
SPAN is presented: a spiking neuron that learns associations of arbitrary spike trains in a supervised fashion, allowing the processing of spatio-temporal information encoded in the precise timing of spikes.
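
SPAN's update is essentially the Widrow-Hoff rule applied to spike trains smoothed with an alpha kernel. A minimal sketch under that reading; kernel width, learning rate, and function names are illustrative:

  import numpy as np

  def span_update(x_spikes, y_desired, y_actual, dt=1e-4, tau=5e-3, lam=0.01):
      # Widrow-Hoff on alpha-kernel-smoothed spike trains:
      # dw_i = lam * integral of x_i~(t) * (y_d~(t) - y~(t)) dt
      T = y_desired.shape[0]
      t = np.arange(T) * dt
      alpha_kernel = (t / tau) * np.exp(1.0 - t / tau)   # peaks at t = tau
      smooth = lambda s: np.convolve(s, alpha_kernel)[:T]
      err = smooth(y_desired) - smooth(y_actual)         # analog error signal
      return np.array([lam * np.sum(smooth(x) * err) * dt for x in x_spikes])

Smoothing turns spike-timing differences into a continuous error signal, which is what lets a classical analog learning rule operate on precisely timed spikes.
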
Supervised Learning in Multilayer Spiking Neural Networks (I. Sporea; Neural Computation, 2013)
TLDR
A supervised learning algorithm for multilayer spiking neural networks is presented that can be applied to networks with hidden layers and to neurons firing multiple spikes, and that converges faster than existing algorithms for similar tasks, such as SpikeProp.
A supervised learning approach based on STDP and polychronization in spiking neuron networks
We propose a novel network model of spiking neurons, without preimposed topology and driven by STDP (Spike-Time-Dependent Plasticity), a temporal Hebbian unsupervised learning mode, based on …
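
The pair-based STDP window referenced above is typically modeled with two exponentials: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. A minimal sketch with illustrative constants, not values from the paper:

  import math

  def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
              tau_plus=20e-3, tau_minus=20e-3):
      # Exponential pair-based STDP window.
      dt = t_post - t_pre
      if dt > 0:
          return a_plus * math.exp(-dt / tau_plus)    # pre before post: potentiation
      return -a_minus * math.exp(dt / tau_minus)      # post before pre: depression
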
SWAT: A Spiking Neural Network Training Algorithm for Classification Problems
TLDR
A synaptic weight association training (SWAT) algorithm for spiking neural networks (SNNs) is presented that merges the Bienenstock-Cooper-Munro (BCM) learning rule with spike-timing-dependent plasticity (STDP) and yields a unimodal weight distribution.
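
The BCM component that SWAT merges with STDP is itself compact: the sign of the weight change flips around a sliding threshold that tracks recent mean-squared postsynaptic activity. A sketch of that rule alone, with illustrative constants (how SWAT couples it to the STDP window is more involved than shown here):

  def bcm_dw(x, y, theta, eta=1e-3):
      # Classic BCM rule: potentiate when postsynaptic activity y exceeds the
      # sliding threshold theta, depress below it.
      return eta * x * y * (y - theta)

  def update_theta(theta, y, tau_theta=100.0, dt=1.0):
      # Sliding modification threshold tracking recent mean-squared activity E[y^2].
      return theta + (dt / tau_theta) * (y ** 2 - theta)

The sliding threshold is what stabilizes the weights and pushes them toward the unimodal distribution noted in the summary.
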
Error-backpropagation in temporally encoded networks of spiking neurons
...