Deep Learning With Spiking Neurons: Opportunities and Challenges

Michael Pfeiffer and Thomas Pfeil
Frontiers in Neuroscience

Spiking neural networks (SNNs) are inspired by information processing in biology, where sparse and asynchronous binary signals are communicated and processed in a massively parallel fashion. SNNs on neuromorphic hardware exhibit favorable properties such as low power consumption, fast inference, and event-driven information processing. This makes them interesting candidates for the efficient implementation of deep neural networks, the method of choice for many machine learning tasks. In this…
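The event-driven processing described in the abstract starts from the spiking neuron model itself. As a minimal illustration (not code from the paper; all parameter values are illustrative), here is a discrete-time leaky integrate-and-fire neuron:

```python
def lif_simulate(input_current, v_thresh=1.0, tau=20.0, dt=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire (LIF) neuron over discrete timesteps.

    The membrane potential leaks toward rest while integrating input current;
    when it crosses v_thresh the neuron emits a binary spike and resets.
    """
    v = 0.0
    spikes = []
    for i_t in input_current:
        # Euler update of the membrane equation: dv/dt = (-v + I) / tau
        v = v + dt * (-v + i_t) / tau
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset  # hard reset after spiking
        else:
            spikes.append(0)
    return spikes

# A stronger constant input drives the neuron to spike more often.
weak = lif_simulate([1.05] * 200)
strong = lif_simulate([3.0] * 200)
```

The output is the sparse, asynchronous binary signal the abstract refers to: a list of 0s and 1s whose rate encodes the input strength.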


A Tandem Learning Rule for Efficient and Rapid Inference on Deep Spiking Neural Networks.

The spike count is used as the discrete neural representation, and an ANN activation function is designed that effectively approximates the spike count of the coupled SNN, in a tandem learning framework consisting of an SNN and an artificial neural network (ANN) that share weights.
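A rough sketch of the idea (hypothetical code, not from the paper): with reset-by-subtraction integrate-and-fire dynamics and the input drive spread evenly over T timesteps, a neuron's spike count is closely approximated by a clamped, discretized linear function that an ANN can use as its activation:

```python
import numpy as np

def snn_spike_count(weighted_input, T=10, v_thresh=1.0):
    """Forward pass of an integrate-and-fire layer: count spikes over T steps."""
    v = np.zeros_like(weighted_input)
    count = np.zeros_like(weighted_input)
    for _ in range(T):
        v += weighted_input / T        # constant drive at each timestep
        spiked = v >= v_thresh
        count += spiked
        v[spiked] -= v_thresh          # reset by subtraction
    return count

def ann_rate_activation(weighted_input, T=10, v_thresh=1.0):
    """ANN surrogate: a clamped linear function approximating the spike count."""
    return np.clip(np.floor(weighted_input / v_thresh), 0, T)
```

The SNN runs the forward pass and produces exact spike counts, while gradients flow through the smooth-enough ANN surrogate during backpropagation.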

A Hybrid Learning Rule for Efficient and Rapid Inference with Spiking Neural Networks

A novel learning rule is proposed based on a hybrid neural network with shared weights, wherein a rate-based SNN is used during forward propagation to determine precise spike counts and spike trains, and an equivalent ANN is used during error backpropagation to approximate the gradients for the coupled SNN.

Training Energy-Efficient Deep Spiking Neural Networks with Single-Spike Hybrid Input Encoding

This paper presents a training framework for low-latency, energy-efficient SNNs that uses a hybrid encoding scheme at the input layer, in which the analog pixel values of an image are directly applied during the first timestep and a novel variant of spike temporal coding is used during subsequent timesteps.

Progressive Tandem Learning for Pattern Recognition With Deep Spiking Neural Networks

A novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition, referred to as progressive tandem learning, which allows hardware constraints, such as limited weight precision and fan-in connections, to be progressively imposed during training.

Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks?

  • G. Datta, P. Beerel
  • Computer Science
    2022 Design, Automation & Test in Europe Conference & Exhibition (DATE)
  • 2022
It is determined that SOTA conversion strategies cannot yield ultra-low latency because they incorrectly assume that the DNN and SNN pre-activation values are uniformly distributed, and a new training algorithm is proposed that accurately captures these distributions, minimizing the error between the DNN and the converted SNN.

Temporal-Coded Deep Spiking Neural Network with Easy Training and Robust Performance

It is shown that the non-leaky integrate-and-fire neuron with single-spike temporal coding is the best choice for directly trained deep SNNs, and an energy-efficient phase-domain signal processing circuit is proposed that makes direct training of SNNs as efficient as that of DNNs.

Toward Scalable, Efficient, and Accurate Deep Spiking Neural Networks With Backward Residual Connections, Stochastic Softmax, and Hybridization

Novel algorithmic techniques that modify the SNN configuration with backward residual connections, stochastic softmax, and hybrid artificial-and-spiking neuronal activations improve the learning ability of the training methodologies, yielding competitive accuracy along with large efficiency gains over their artificial counterparts.

Scaling Deep Spiking Neural Networks with Binary Stochastic Activations

This work presents scalable deep spiking neural networks that achieve performance comparable to DNNs while providing substantial energy benefits, and investigates an extremely quantized version of these networks with binary weights, showing a 28x energy benefit over full-precision neural networks.

Deep Spiking Neural Network with Spike Count based Learning Rule

A novel spike-based learning rule for rate-coded deep SNNs is introduced, whereby the spike count of each neuron is used as a surrogate for gradient backpropagation; this allows direct deployment to neuromorphic hardware and supports efficient inference.

Deep Learning in Spiking Neural Networks

Training Deep Spiking Neural Networks Using Backpropagation

A novel technique is introduced that treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise; this enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials.
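In modern terms this is a surrogate-gradient approach. A minimal hand-rolled sketch (illustrative only; the paper's exact formulation works on filtered membrane-potential signals, and the boxcar surrogate and its width are assumptions here):

```python
import numpy as np

def spike_forward(v, v_thresh=1.0):
    """Forward pass: the non-differentiable Heaviside step on membrane potential."""
    return (v >= v_thresh).astype(float)

def spike_backward(v, grad_output, v_thresh=1.0, width=0.5):
    """Backward pass: treat the spike discontinuity as noise and backpropagate
    through a boxcar surrogate derivative centered on the threshold."""
    surrogate = (np.abs(v - v_thresh) < width).astype(float) / (2.0 * width)
    return grad_output * surrogate
```

Away from the threshold the surrogate derivative is zero, so gradients flow only through neurons whose membrane potential is near the spiking decision boundary.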

Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware

Surprisingly, it is found that short synaptic delays are sufficient to implement the dynamic (temporal) aspect of the RNN in the question classification task, and that discretization of the neural activities is beneficial to the train-and-constrain approach.

Going Deeper in Spiking Neural Networks: VGG and Residual Architectures

A novel algorithmic technique is proposed for generating an SNN with a deep architecture with significantly better accuracy than the state-of-the-art, and its effectiveness on complex visual recognition problems such as CIFAR-10 and ImageNet is demonstrated.

Deep Spiking Convolutional Neural Network Trained With Unsupervised Spike-Timing-Dependent Plasticity

A deep SpiCNN, consisting of two convolutional layers trained using the unsupervised convolutional STDP learning methodology, achieved classification accuracies of 91.1% and 97.6% for inferring handwritten digits from the MNIST dataset and a subset of natural images from the Caltech dataset, respectively.
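The paper's convolutional STDP is specialized, but the underlying pair-based spike-timing-dependent plasticity rule can be sketched as follows (parameter values are illustrative, not taken from the paper):

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: strengthen the synapse when the presynaptic spike
    precedes the postsynaptic spike, weaken it otherwise; weights stay in [0, 1]."""
    dt = t_post - t_pre
    if dt > 0:
        dw = a_plus * math.exp(-dt / tau)    # causal pairing: potentiation
    else:
        dw = -a_minus * math.exp(dt / tau)   # anti-causal pairing: depression
    return min(1.0, max(0.0, w + dw))
```

Because the update depends only on local spike times, rules of this family train each layer without a global error signal, which is what makes the approach unsupervised.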

Event-driven contrastive divergence for spiking neuromorphic systems

This work presents an event-driven variation of contrastive divergence (CD) to train an RBM constructed with integrate-and-fire neurons, constrained by the limitations of existing and near-future neuromorphic hardware platforms, and contributes to a machine-learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.

Spiking Deep Residual Network

This work is the first to build an SNN deeper than 40 layers with performance comparable to ANNs on a large-scale dataset, using a shortcut conversion model to appropriately scale continuous-valued activations to match firing rates in the SNN.

Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing

The method for converting an ANN into an SNN enables low-latency classification with high accuracies already after the first output spike, and compared with previous SNN approaches it yields improved performance without increased training time.
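The balancing idea can be sketched as data-based weight normalization: rescale each layer's weights by the largest ReLU activation observed on training data, so that analog activations map onto firing rates that stay below the spiking threshold (a simplified sketch of one variant; function names and the use of the plain maximum are assumptions):

```python
import numpy as np

def data_based_normalization(weights_list, activations_list):
    """Rescale each layer's weights by the maximum ReLU activation observed
    on training data, propagating the previous layer's scale forward so the
    end-to-end input-output mapping of the network is preserved."""
    normed = []
    prev_scale = 1.0
    for W, acts in zip(weights_list, activations_list):
        scale = acts.max()                    # a high percentile is also common
        normed.append(W * prev_scale / scale)
        prev_scale = scale
    return normed
```

Dividing by the current layer's maximum caps its activations at the threshold, and multiplying by the previous layer's scale compensates for the rescaled inputs arriving from below, so only firing rates change, not the function the network computes.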

Training Spiking Deep Networks for Neuromorphic Hardware

We describe a method to train spiking deep networks that can be run using leaky integrate-and-fire (LIF) neurons, achieving state-of-the-art results for spiking LIF networks on five datasets.

Hybrid Macro/Micro Level Backpropagation for Training Deep Spiking Neural Networks

The proposed HM2-BP algorithm achieves competitive performance, surpassing conventional deep learning models when dealing with asynchronous spiking streams, and leads to high recognition accuracy on the 16-speaker spoken English letters of the TI46 corpus, a challenging spatio-temporal speech recognition benchmark.