Corpus ID: 237940442

Training Spiking Neural Networks Using Lessons From Deep Learning

@article{Eshraghian2021TrainingSN,
  title={Training Spiking Neural Networks Using Lessons From Deep Learning},
  author={Jason Kamran Eshraghian and Max Ward and Emre O. Neftci and Xinxin Wang and Gregor Lenz and Girish Dwivedi and Bennamoun and Doo Seok Jeong and Wei D. Lu},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.12894}
}
The brain is the perfect place to look for inspiration to develop more efficient neural networks. The inner workings of our synapses and neurons provide a glimpse at what the future of deep learning might look like. This paper shows how to apply the lessons learnt from several decades of research in deep learning, gradient descent, backpropagation and neuroscience to biologically plausible spiking neural networks. This paper explores the delicate interplay between encoding data as spikes…
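
One of the core techniques the paper discusses, a leaky integrate-and-fire (LIF) neuron whose non-differentiable spike is replaced by a surrogate gradient in the backward pass, can be illustrated with a short PyTorch sketch. This is a minimal, assumption-laden example (the fast-sigmoid surrogate, the decay constant `beta`, and the Bernoulli rate-encoding scheme are illustrative choices), not the authors' released snnTorch code.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient."""
    @staticmethod
    def forward(ctx, mem_minus_thresh):
        ctx.save_for_backward(mem_minus_thresh)
        return (mem_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        k = 10.0  # surrogate slope, a free parameter
        return grad_output / (1.0 + k * x.abs()) ** 2

def lif_step(x, mem, beta=0.9, threshold=1.0):
    """One leaky integrate-and-fire update: decay, integrate input, spike, reset."""
    mem = beta * mem + x
    spk = SpikeFn.apply(mem - threshold)
    mem = mem - spk * threshold          # soft reset by subtraction
    return spk, mem

# Rate-encode a static input into T steps of Bernoulli spikes, then run the neuron.
T, batch, features = 25, 4, 10
x_static = torch.rand(batch, features)
mem = torch.zeros(batch, features)
for t in range(T):
    spikes_in = torch.bernoulli(x_static)   # rate coding
    spk, mem = lif_step(spikes_in, mem)
```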

Citations

Navigating Local Minima in Quantized Spiking Neural Networks
TLDR
A systematic evaluation of a cosine-annealed learning-rate schedule coupled with weight-independent adaptive moment estimation as applied to Quantized SNNs (QSNNs) is presented, demonstrating (close to) state-of-the-art performance on the more complex datasets.
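
The training recipe summarised above, an adaptive moment estimation optimiser combined with a cosine-annealed learning-rate schedule, can be sketched in a few lines of PyTorch; the model, epoch count and learning rate below are placeholders rather than the paper's settings.

```python
import torch

model = torch.nn.Linear(784, 10)   # placeholder network; a QSNN would go here
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for epoch in range(100):
    # ... per-batch forward pass, loss.backward(), optimizer.step() ...
    scheduler.step()               # cosine-anneal the learning rate each epoch
```
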
The fine line between dead neurons and sparsity in binarized spiking neural networks
TLDR
This paper proposes the use of ‘threshold annealing’ as a warm-up method for firing thresholds and shows it enables the propagation of spikes across multiple layers where neurons would otherwise cease to fire, and in doing so, achieves highly competitive results on four diverse datasets, despite using binarized weights.
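
A rough sketch of what such a threshold warm-up might look like: the firing threshold starts low so spikes can propagate through every layer early in training, then ramps toward its target value. The linear ramp and the specific constants are assumptions for illustration; the paper's actual schedule may differ.

```python
def annealed_threshold(step, target=1.0, start=0.25, warmup_steps=1000):
    """Firing threshold warmed up from `start` toward `target`.
    A low initial threshold keeps neurons firing in early training,
    which helps avoid dead layers in binarized SNNs."""
    if step >= warmup_steps:
        return target
    return start + (target - start) * (step / warmup_steps)
```
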
SPICEprop: Backpropagating Errors Through Memristive Spiking Neural Networks
TLDR
A fully memristive spiking neural network (MSNN) consisting of novel memristive neurons trained using the backpropagation through time (BPTT) learning rule is presented, achieving the highest accuracies among fully memristive SNNs.
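
Setting the memristive device models aside, BPTT training of a spiking network amounts to unrolling the neuron dynamics over the simulation time steps and backpropagating through the accumulated loss. A minimal sketch is below; it reuses the surrogate-gradient `lif_step` from the earlier snippet, and the spike-count readout and layer sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumes `lif_step` from the surrogate-gradient sketch above is in scope.
fc1, fc2 = nn.Linear(100, 64), nn.Linear(64, 10)
optimizer = torch.optim.Adam(list(fc1.parameters()) + list(fc2.parameters()), lr=1e-3)

def bptt_step(x_seq, targets):
    """x_seq: [T, batch, 100] input spikes; targets: [batch] class labels."""
    mem1 = torch.zeros(x_seq.size(1), 64)
    mem2 = torch.zeros(x_seq.size(1), 10)
    spike_count = torch.zeros(x_seq.size(1), 10)
    for t in range(x_seq.size(0)):                 # unroll over time
        spk1, mem1 = lif_step(fc1(x_seq[t]), mem1)
        spk2, mem2 = lif_step(fc2(spk1), mem2)
        spike_count = spike_count + spk2           # rate-coded readout
    loss = F.cross_entropy(spike_count, targets)
    loss.backward()                                # backpropagate through time
    optimizer.step(); optimizer.zero_grad()
    return loss.item()
```
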
A Fully Memristive Spiking Neural Network with Unsupervised Learning
TLDR
The proposed MSNN uniquely implements STDP learning by using cumulative weight changes in memristive synapses from the voltage waveform changes across the synapses, which arise from the presynaptic and postsynaptic spiking voltage signals during the training process.
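
Independent of the memristive implementation, the pair-based STDP rule underlying this kind of unsupervised learning fits in a few lines: a synapse is potentiated when the presynaptic spike precedes the postsynaptic spike and depressed otherwise, with an exponential dependence on the spike-time difference. The amplitudes and time constant below are common textbook values, not the paper's device parameters.

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:        # pre fires before post: potentiation
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)   # post fires before pre: depression
```
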
Efficient GPU training of LSNNs using eProp
TLDR
It is demonstrated that SNN classifiers implemented using GeNN and trained using the eProp learning rule can provide comparable performance to those trained using Backpropagation Through Time, and that the latency and energy usage of the classifiers is up to 7× lower than an LSTM running on the same GPU hardware.
NeuroPack: An Algorithm-Level Python-Based Simulator for Memristor-Empowered Neuro-Inspired Computing
TLDR
NeuroPack is presented, a modular, algorithm-level Python-based simulation platform that can support studies of memristor neuro-inspired architectures for performing online learning or offline classification, and its hierarchical structure empowers NeuroPack to predict any memristor state changes and the corresponding neural network behavior across a variety of design decisions and user parameter options.
Design Space Exploration of Dense and Sparse Mapping Schemes for RRAM Architectures
TLDR
This paper presents an extended Design Space Exploration (DSE) methodology to quantify the benefits and limitations of dense and sparse mapping schemes for a variety of network architectures, together with a case study that quantifies and formalizes the trade-offs between typical non-idealities introduced into 1-Transistor-1-Resistor (1T1R) tiled memristive architectures and the size of modular crossbar tiles, using the CIFAR-10 dataset.
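
As a very rough illustration of what a tiled mapping scheme involves, the sketch below partitions a weight matrix into fixed-size crossbar tiles and, as a stand-in for a sparse scheme, skips tiles that contain only zeros. The tile size and the skipping criterion are assumptions for illustration, not the mapping schemes evaluated in the paper.

```python
import numpy as np

def tile_mapping(W, tile=128, skip_zero_tiles=True):
    """Partition weight matrix W into (tile x tile) crossbar tiles.
    A dense mapping programs every tile; skipping all-zero tiles is a
    crude proxy for a sparse mapping scheme."""
    mapped = []
    for r in range(0, W.shape[0], tile):
        for c in range(0, W.shape[1], tile):
            block = W[r:r + tile, c:c + tile]
            if skip_zero_tiles and not np.any(block):
                continue
            mapped.append(((r, c), block))
    return mapped
```
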
Adaptive, Unlabeled and Real-time Approximate-Learning Platform (AURA) for Personalized Epileptic Seizure Forecasting
TLDR
AURA is an Adaptive forecasting model trained with Unlabeled, Real-time data using internally generated Approximate labels on-the-fly: a seizure detection model and a prediction model are coupled such that the detection model generates labels automatically, which are then used to train the prediction model.
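
The coupling described above, one model labelling the data stream so another can be trained on it, can be sketched as follows. Everything here is a placeholder (the linear models, the feature size, the loss), and the sketch glosses over the time offset between the window being forecast and the window being labelled.

```python
import torch
import torch.nn as nn

detector = nn.Linear(256, 1)     # stand-in for a pre-trained seizure detection model
forecaster = nn.Linear(256, 1)   # forecasting model trained online
optimizer = torch.optim.SGD(forecaster.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def aura_style_update(window):
    """One online update: the detector produces an approximate label for the
    incoming window, and that label supervises the forecaster."""
    with torch.no_grad():
        approx_label = torch.sigmoid(detector(window))   # internally generated label
    loss = loss_fn(forecaster(window), approx_label)
    loss.backward()
    optimizer.step(); optimizer.zero_grad()
    return loss.item()
```
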
Revisiting Batch Normalization for Training Low-Latency Deep Spiking Neural Networks From Scratch
TLDR
A temporal Batch Normalization Through Time (BNTT) technique is proposed and it is found that varying the BN parameters at every time-step allows the model to learn the time-varying input distribution better.
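
The core idea, a separate set of batch-normalization statistics and parameters for every simulation time step, is straightforward to express in PyTorch. This is an illustrative sketch rather than the authors' released implementation.

```python
import torch.nn as nn

class BNTT(nn.Module):
    """Batch Normalization Through Time: one BatchNorm layer per time step,
    so the model can track the time-varying input distribution."""
    def __init__(self, num_features, num_steps):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm1d(num_features) for _ in range(num_steps))

    def forward(self, x, t):
        return self.bns[t](x)    # apply the BN parameters learned for time step t
```
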

References

Showing 1-10 of 201 references
A solution to the learning dilemma for recurrent networks of spiking neurons
TLDR
The resulting learning method – called e-prop – approaches the performance of BPTT (backpropagation through time), the best known method for training recurrent neural networks in machine learning, and is biologically plausible.
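
e-prop replaces backpropagation through time with eligibility traces that are computed forward in time at each synapse and combined online with a per-neuron learning signal. The sketch below is a heavily simplified, non-recurrent version of that idea (the trace dynamics, pseudo-derivative and learning signal are all illustrative), not the published rule.

```python
import numpy as np

def eprop_weight_update(W, inputs, learning_signals, beta=0.9, thresh=1.0, lr=1e-3):
    """inputs: [T, n_in] presynaptic spikes; learning_signals: [T, n_out]
    per-neuron learning signals (e.g. broadcast output errors).
    Returns an accumulated weight update of shape [n_out, n_in]."""
    T, n_in = inputs.shape
    trace = np.zeros(n_in)                     # filtered presynaptic activity
    mem = np.zeros(W.shape[0])                 # leaky membrane potentials
    dW = np.zeros_like(W)
    for t in range(T):
        trace = beta * trace + inputs[t]
        mem = beta * mem + W @ inputs[t]
        pseudo = np.maximum(0.0, 1.0 - np.abs((mem - thresh) / thresh))  # pseudo-derivative
        eligibility = pseudo[:, None] * trace[None, :]    # local, forward-in-time
        dW += lr * learning_signals[t][:, None] * eligibility
        mem -= (mem > thresh) * thresh                    # reset by subtraction
    return dW
```
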
Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE)
TLDR
DECOLLE networks provide continuously learning machines that are relevant to biology and supportive of event-based, low-power computer vision architectures matching the accuracies of conventional computers on tasks where temporal precision and speed are essential.
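
DECOLLE's wiring, a fixed random readout and a local loss attached to every layer with gradients blocked between layers, can be sketched as follows. The sketch uses ReLU units in place of spiking dynamics and placeholder layer sizes, so it only shows the local-learning structure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

layers = nn.ModuleList([nn.Linear(100, 64), nn.Linear(64, 64)])
readouts = [torch.randn(64, 10) for _ in layers]   # fixed random readouts, never trained

def local_loss(x, target):
    """Each layer gets its own loss; detaching the input blocks inter-layer gradients."""
    losses, h = [], x
    for layer, R in zip(layers, readouts):
        h = torch.relu(layer(h.detach()))
        losses.append(F.cross_entropy(h @ R, target))
    return sum(losses)
```
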
Spiking Deep Networks with LIF Neurons
TLDR
This work demonstrates that biologically plausible spiking LIF neurons can be integrated into deep networks and perform as well as other spiking models (e.g. integrate-and-fire), and provides new methods for training deep networks to run on neuromorphic hardware.
Temporal Coding in Spiking Neural Networks with Alpha Synaptic Function
TLDR
This work proposes a spiking neural network model that encodes information in the relative timing of individual neuron spikes and performs classification using the first output neuron to spike, and successfully trains the network on the MNIST dataset encoded in time.
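
Decoding by "first output neuron to spike" is simple to state: run the network for T steps and pick the class whose output neuron fires earliest. A small illustrative helper (the spike-train layout is an assumption):

```python
import numpy as np

def first_spike_class(out_spikes):
    """out_spikes: [T, n_classes] binary output spikes over time.
    Returns the index of the output neuron that fires first;
    silent neurons are treated as firing at time T."""
    T, n = out_spikes.shape
    first_times = np.full(n, T)
    for j in range(n):
        fired = np.flatnonzero(out_spikes[:, j])
        if fired.size:
            first_times[j] = fired[0]
    return int(np.argmin(first_times))
```
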
Deep Learning With Spiking Neurons: Opportunities and Challenges
TLDR
This review addresses the opportunities that deep spiking networks offer and investigates in detail the challenges associated with training SNNs in a way that makes them competitive with conventional deep learning, but simultaneously allows for efficient mapping to hardware.
Spike-Timing-Dependent Back Propagation in Deep Spiking Neural Networks
TLDR
This work proposes a simple yet efficient Rectified Linear Postsynaptic Potential function for spiking neurons and proposes a Spike-Timing-Dependent Back-Propagation (STDBP) learning algorithm for DSNNs, investigating the contribution of dynamics in spike timing to information encoding, synaptic plasticity and decision making.
Event-driven random backpropagation: Enabling neuromorphic deep learning machines
TLDR
An event-driven random backpropagation (eRBP) rule is demonstrated that uses an error-modulated synaptic plasticity rule for learning deep representations in neuromorphic computing hardware, achieving nearly identical classification accuracies compared to artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
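
The "random backpropagation" in eRBP descends from feedback alignment: errors are projected backwards through fixed random matrices instead of the transposed forward weights. A dense, non-spiking sketch of that error pathway is below; the event-driven and neuromorphic aspects of eRBP are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (64, 100))
W2 = rng.normal(0, 0.1, (10, 64))
B = rng.normal(0, 0.1, (64, 10))      # fixed random feedback matrix, replaces W2.T

def train_step(x, target, lr=1e-2):
    """x: [100] input, target: [10] one-hot label."""
    h = np.maximum(0.0, W1 @ x)       # hidden layer (ReLU stand-in for spiking units)
    err = W2 @ h - target             # output error
    err_h = (B @ err) * (h > 0)       # error routed through random feedback
    W2[...] -= lr * np.outer(err, h)
    W1[...] -= lr * np.outer(err_h, x)
```
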
Towards Biologically Plausible Deep Learning
TLDR
The theory of the probabilistic interpretation of auto-encoders is extended to justify improved sampling schemes based on the generative interpretation of denoising auto-encoders, and these ideas are validated on generative learning tasks.
Backpropagation and the brain
TLDR
It is argued that the key principles underlying backprop may indeed have a role in brain function, with feedback connections inducing neural activities whose differences can be used to locally approximate error signals and hence drive effective learning in deep networks in the brain.
Gradient Descent for Spiking Neural Networks
TLDR
A gradient descent method for optimizing spiking network models is presented, introducing a differentiable formulation of spiking dynamics and deriving the exact gradient calculation; this offers a general-purpose supervised learning algorithm for spiking neural networks, advancing further investigations of spike-based computation.