Strategy and Benchmark for Converting Deep Q-Networks to Event-Driven Spiking Neural Networks

@article{Tan2021StrategyAB,
  title={Strategy and Benchmark for Converting Deep Q-Networks to Event-Driven Spiking Neural Networks},
  author={Weihao Tan and Devdhar Patel and Robert Thijs Kozma},
  journal={ArXiv},
  year={2021},
  volume={abs/2009.14456}
}
Spiking neural networks (SNNs) have great potential for energy-efficient implementation of deep neural networks (DNNs) on dedicated neuromorphic hardware. Recent studies have demonstrated competitive performance of SNNs relative to DNNs on image classification tasks, including CIFAR-10 and ImageNet. The present work focuses on using SNNs in combination with deep reinforcement learning on Atari games, which adds complexity beyond image classification. We review the…
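
As a rough illustration of the conversion setting named in the title (a sketch of the general rate-coding approach, not the paper's exact method): a trained DQN's weights are copied into integrate-and-fire (IF) layers, the state is presented as a constant input current for T timesteps, and Q-values are read out as output spike rates. All function names and the soft-reset choice below are illustrative assumptions.

import numpy as np

def if_layer_step(v, x, W, threshold=1.0):
    """One timestep of an integrate-and-fire layer: accumulate input
    current, spike where the membrane potential crosses threshold,
    then subtract the threshold (soft reset) to preserve residuals."""
    v = v + x @ W
    spikes = (v >= threshold).astype(np.float32)
    v = v - spikes * threshold
    return v, spikes

def snn_q_values(state, weights, T=100):
    """Rate-coded inference: present the (constant) state for T steps
    and read Q-values out as spike rates of the output layer."""
    vs = [np.zeros(W.shape[1]) for W in weights]
    counts = np.zeros(weights[-1].shape[1])
    for _ in range(T):
        x = state  # constant-current input encoding of the state
        for i, W in enumerate(weights):
            vs[i], x = if_layer_step(vs[i], x, W)
        counts += x
    return counts / T  # approximates the ReLU network's Q-values

# Hypothetical 2-layer DQN weights (in practice taken from training)
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 16)), rng.normal(size=(16, 2))]
q = snn_q_values(rng.random(4), weights, T=200)
action = int(np.argmax(q))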

Citations

Human-Level Control through Directly-Trained Deep Spiking Q-Networks

TLDR
This work is the first to achieve state-of-the-art performance on multiple Atari games with a directly trained SNN; it proposes a directly trained DSRL architecture based on leaky integrate-and-fire neurons and the deep Q-network (DQN).

Deep Reinforcement Learning with Spiking Q-learning

TLDR
The deep spiking Q-network (DSQN) is proposed, using the membrane voltage of non-spiking neurons as the representation of the Q-value, which can directly learn robust policies from high-dimensional sensory inputs using end-to-end RL.
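
A minimal sketch of this readout idea, assuming a simple leaky integrator with no threshold or reset (the function name and decay constant are illustrative, not from the paper):

import numpy as np

def non_spiking_readout(spike_trains, W_out, decay=0.8):
    """Hypothetical non-spiking output layer: leaky integration of
    the last hidden layer's spikes, with no threshold or reset.
    The final membrane potential is read out as the Q-value."""
    v = np.zeros(W_out.shape[1])
    for s_t in spike_trains:          # one spike vector per timestep
        v = decay * v + s_t @ W_out   # integrate, never fire
    return v                          # Q(s, a) estimate per action

rng = np.random.default_rng(1)
T, hidden, actions = 8, 16, 4
spikes = (rng.random((T, hidden)) < 0.3).astype(np.float32)
q = non_spiking_readout(spikes, rng.normal(size=(hidden, actions)))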

Training Spiking Neural Networks for Reinforcement Learning Tasks With Temporal Coding Method

TLDR
A self-incremental variable is introduced to push each spiking neuron to fire, which makes SNNs fully differentiable, and an encoding method is proposed to solve the problem of information loss in temporally coded inputs.
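
One plausible reading of the "self-incremental variable", shown as a purely hypothetical sketch: a bias that grows over time is added to each membrane potential, pushing every neuron toward firing within the simulation window so its spike time is defined (a prerequisite for temporal-coding gradients).

import numpy as np

def first_spike_times(currents, T=32, threshold=1.0, alpha=0.05):
    """Hypothetical illustration: add a self-incrementing bias
    alpha * t to each membrane potential, pushing every neuron
    toward firing within T steps so its first spike time exists."""
    n = currents.shape[0]
    v = np.zeros(n)
    t_spike = np.full(n, T, dtype=float)
    for t in range(T):
        v = v + currents + alpha * t      # input plus growing bias
        fired = (v >= threshold) & (t_spike == T)
        t_spike[fired] = t
    return t_spike  # earlier spike = stronger input (temporal code)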

BrainCog: A Spiking Neural Network based Brain-inspired Cognitive Intelligence Engine for Brain-inspired AI and Brain Simulation

TLDR
The Brain-inspired Cognitive Intelligence Engine (BrainCog) is presented, which incorporates different types of spiking neuron models, learning rules, brain areas, etc., as essential modules provided by the platform and supports various brain-inspired cognitive functions.

Multi-Scale Dynamic Coding Improved Spiking Actor Network for Reinforcement Learning

TLDR
This work proposes a multi-scale dynamic coding improved spiking actor network (MDC-SAN), a significant attempt to improve SNNs from the perspective of efficient coding towards effective decision-making, as in biological networks.

Solving the spike feature information vanishing problem in spiking deep Q network with potential based normalization

TLDR
This study mathematically analyzes the vanishing of spike feature information in spiking deep Q-networks (SDQN) and proposes a potential-based layer normalization (pbLN) method to train spiking deep Q-networks directly.
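
A rough sketch of what potential-based normalization could look like, assuming a LayerNorm-style standardization applied to membrane potentials before thresholding (the exact pbLN formulation is in the paper; names and the hard-reset choice are assumptions):

import numpy as np

def pb_layer_norm(v, gamma=1.0, beta=0.0, eps=1e-5):
    """Hypothetical potential-based normalization: standardize the
    membrane potentials of a layer before thresholding, so deeper
    layers keep receiving potentials on a spikeable scale."""
    mu, sigma = v.mean(), v.std()
    return gamma * (v - mu) / (sigma + eps) + beta

def spiking_layer_step(v, x, W, threshold=1.0):
    """One timestep: integrate input, normalize potentials, spike,
    then hard-reset the neurons that fired."""
    v = v + x @ W
    v_hat = pb_layer_norm(v)
    spikes = (v_hat >= threshold).astype(np.float32)
    v = v * (1.0 - spikes)
    return v, spikes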

Efficient and Accurate Conversion of Spiking Neural Network with Burst Spikes

TLDR
A neuron model for releasing burst spikes is proposed as a cheap but highly efficient mechanism for handling residual information, along with Lateral Inhibition Pooling (LIPooling) to address the inaccuracy caused by MaxPooling in the conversion process.
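
A minimal sketch of the burst-spike idea, assuming a neuron may emit up to max_burst spikes in one timestep to flush residual potential (parameter names are illustrative, not the paper's):

import numpy as np

def burst_if_step(v, x, W, threshold=1.0, max_burst=4):
    """Burst-style IF step: a neuron may emit several spikes in one
    timestep (up to max_burst), releasing residual potential that a
    single-spike neuron would carry over and potentially lose."""
    v = v + x @ W
    n_spikes = np.clip(np.floor(v / threshold), 0, max_burst)
    v = v - n_spikes * threshold
    return v, n_spikes  # spike counts per neuron this timestep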

One Timestep is All You Need: Training Spiking Neural Networks with Ultra Low Latency

TLDR
The proposed IIR-SNNs provide 25-33X higher energy efficiency while remaining comparable in classification performance, and perform inference with 5-2500X reduced latency compared to other state-of-the-art SNNs, maintaining comparable or even better accuracy.

Population-coding and Dynamic-neurons improved Spiking Actor Network for Reinforcement Learning

TLDR
This work proposes a multi-scale dynamic coding improved spiking actor network (MDC-SAN) for reinforcement learning to achieve effective decision-making, in an attempt to improve SNNs from the perspective of efficient coding, as in biological networks.
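
As background on the population-coding half of this title, a small illustrative sketch (Gaussian receptive fields tiling the input range; all parameter values are assumptions, not the paper's):

import numpy as np

def population_encode(x, n_neurons=10, lo=-1.0, hi=1.0, sigma=0.15):
    """Encode one continuous state variable with a population of
    neurons whose Gaussian receptive fields tile [lo, hi]; the
    resulting activities are used as per-step spike probabilities."""
    centers = np.linspace(lo, hi, n_neurons)
    act = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
    return (np.random.random(n_neurons) < act).astype(np.float32)

spikes = population_encode(0.3)  # sparse code for a scalar state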

References

Showing 1-10 of 31 references.

Direct Training for Spiking Neural Networks: Faster, Larger, Better

TLDR
This work proposes a neuron normalization technique to adjust neural selectivity, develops a direct learning algorithm for deep SNNs, and presents a PyTorch-based implementation method for training large-scale SNNs.

Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing

TLDR
The method for converting an ANN into an SNN enables low-latency classification with high accuracy already after the first output spike and, compared with previous SNN approaches, yields improved performance without increased training time.
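
The weight-balancing idea can be sketched as data-based normalization in the spirit of this work: each layer is rescaled by the maximum ReLU activation observed on sample data, so firing rates stay below saturation. A minimal sketch, assuming fully connected layers:

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def data_based_normalization(weights, data):
    """Rescale each layer by the maximum ReLU activation seen on a
    sample of training data; the previous layer's scale is undone so
    the end-to-end function is preserved up to an overall factor."""
    normed, prev_scale, x = [], 1.0, data
    for W in weights:
        a = relu(x @ W)
        scale = a.max()
        normed.append(W * prev_scale / scale)
        prev_scale, x = scale, a
    return normed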

Training Deep Spiking Neural Networks Using Backpropagation

TLDR
A novel technique is introduced that treats the membrane potentials of spiking neurons as differentiable signals, with discontinuities at spike times regarded as noise; this enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks but works directly on spike signals and membrane potentials.
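
A minimal PyTorch sketch of this principle, using a boxcar surrogate derivative in place of the paper's exact derivative choices:

import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a surrogate derivative: the forward pass
    thresholds the membrane potential; the backward pass treats the
    discontinuity as noise and passes a smooth gradient instead."""
    @staticmethod
    def forward(ctx, v, threshold=1.0):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # boxcar surrogate: gradient flows only near the threshold
        surrogate = ((v - ctx.threshold).abs() < 0.5).float()
        return grad_out * surrogate, None

# usage inside an SNN layer: spikes = SpikeFn.apply(membrane_potential)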

Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

TLDR
This paper shows conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset.

Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures

TLDR
This work proposes an approximate derivative method that accounts for the leaky behavior of LIF neurons, enabling direct training of deep convolutional SNNs (with input spike events) using spike-based backpropagation, and analyzes sparse event-based computations to demonstrate the efficacy of the proposed training method for inference in the spiking domain.

Going Deeper in Spiking Neural Networks: VGG and Residual Architectures

TLDR
A novel algorithmic technique is proposed for generating an SNN with a deep architecture with significantly better accuracy than the state-of-the-art, and its effectiveness on complex visual recognition problems such as CIFAR-10 and ImageNet is demonstrated.

Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition

TLDR
A novel approach for converting a deep CNN into an SNN enables mapping CNNs to spike-based hardware architectures; the resulting SNN is evaluated on the publicly available Defense Advanced Research Projects Agency (DARPA) Neovision2 Tower and CIFAR-10 datasets and shows object recognition accuracy similar to that of the original CNN.

Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines

TLDR
An event-driven random backpropagation (eRBP) rule is demonstrated that uses an error-modulated synaptic plasticity rule for learning deep representations in neuromorphic computing hardware, achieving nearly identical classification accuracies compared to artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
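
The random-feedback idea can be sketched in a rate-based, non-spiking form (an analogy to, not an implementation of, the eRBP rule): the output error reaches hidden units through a fixed random matrix instead of the transposed forward weights, avoiding weight transport. All names below are illustrative.

import numpy as np

def erbp_style_update(W_hid, W_out, B, x, h, y, target, lr=0.01):
    """Feedback-alignment-style update in the spirit of eRBP: the
    output error is sent to hidden units through a fixed random
    matrix B instead of W_out.T, modulating a local learning rule."""
    err = y - target                    # output error signal
    dW_out = np.outer(h, err)           # local: pre-activity x error
    dh = B @ err                        # random feedback projection
    dW_hid = np.outer(x, dh * (h > 0))  # gate by post-unit activity
    return W_hid - lr * dW_hid, W_out - lr * dW_out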

Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware

TLDR
Surprisingly, it is found that short synaptic delays are sufficient to implement the dynamic (temporal) aspect of the RNN in the question classification task, and that discretization of the neural activities is beneficial to the train-and-constrain approach.