A time-to-first-spike coding and conversion aware training for energy-efficient deep spiking neural network processor design

@inproceedings{Lew2022ATC,
  title={A time-to-first-spike coding and conversion aware training for energy-efficient deep spiking neural network processor design},
  author={Dongwoo Lew and Kyungchul Lee and Jongsun Park},
  booktitle={Proceedings of the 59th ACM/IEEE Design Automation Conference},
  year={2022}
}
In this paper, we present an energy-efficient SNN architecture that can seamlessly run deep spiking neural networks (SNNs) with improved accuracy. First, we propose conversion aware training (CAT) to reduce ANN-to-SNN conversion loss without hardware implementation overhead. In the proposed CAT, the activation function developed for simulating the SNN during ANN training is efficiently exploited to reduce the data representation error after conversion. Based on the CAT technique, we also…
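
The truncated abstract still pins down the core mechanism: train the ANN through an activation function that mimics what the converted SNN can actually represent. Below is a minimal sketch of that idea, assuming (as the title's TTFS coding suggests) that the SNN represents activations as a small set of log-spaced levels; the function name, the power-of-two level grid, and the parameters t_max and alpha are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def cat_activation(x, t_max=8, alpha=1.0):
    # Hypothetical conversion-aware activation: clip to [0, alpha] and snap
    # to the levels alpha * 2**(-t) that a first spike at time step t could
    # represent after conversion; 0.0 stands for "no spike".
    levels = np.concatenate(([0.0], alpha * 2.0 ** (-np.arange(t_max))))
    x = np.clip(np.asarray(x, dtype=float), 0.0, alpha)
    idx = np.abs(x[..., None] - levels).argmin(axis=-1)
    return levels[idx]

During training one would pair such a quantizer with a straight-through gradient so the ANN learns weights that already fit the post-conversion value grid, which is where the reduction in representation error comes from.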

References

T2FSNN: Deep Spiking Neural Networks with Time-to-first-spike Coding

T2FSNN is presented, which introduces the concept of time-to-first-spike coding into deep SNNs using a kernel-based dynamic threshold and dendrites to overcome the limitations of earlier temporal coding approaches, and proposes gradient-based optimization and early firing methods to further increase the efficiency of T2FSNN.
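
A minimal sketch of plain time-to-first-spike coding may help here: stronger inputs fire earlier, and a silent neuron encodes zero. The function names and the linear time-to-value mapping are illustrative; T2FSNN's kernel-based dynamic threshold and dendrites are not modeled.

import numpy as np

def ttfs_encode(x, t_max=8):
    # Map intensities in [0, 1] to first-spike times: larger value -> earlier
    # spike; exactly zero never fires (encoded as time t_max).
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    t = np.where(x > 0, t_max - np.ceil(x * t_max), t_max)
    return t.astype(int)

def ttfs_decode(t, t_max=8):
    # Inverse map: a spike at time t decodes to (t_max - t) / t_max.
    t = np.asarray(t)
    return np.where(t < t_max, (t_max - t) / t_max, 0.0)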

Direct Training for Spiking Neural Networks: Faster, Larger, Better

This work proposes a neuron normalization technique to adjust the neural selectivity, develops a direct learning algorithm for deep SNNs, and presents a PyTorch-based implementation method for training large-scale SNNs.
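
Direct training of this kind hinges on backpropagating through the non-differentiable spike. A minimal PyTorch sketch of the standard surrogate-gradient trick (a rectangular window around the threshold; the paper's neuron normalization is not shown, and the constants are illustrative):

import torch

class SpikeFn(torch.autograd.Function):
    THRESHOLD = 1.0

    @staticmethod
    def forward(ctx, v):
        # Forward pass: Heaviside step on the membrane potential.
        ctx.save_for_backward(v)
        return (v >= SpikeFn.THRESHOLD).float()

    @staticmethod
    def backward(ctx, grad_out):
        # Backward pass: let gradient through only near the threshold
        # (rectangular surrogate for the step's derivative).
        (v,) = ctx.saved_tensors
        return grad_out * (torch.abs(v - SpikeFn.THRESHOLD) < 0.5).float()

spike = SpikeFn.apply  # usage: s = spike(membrane_potential)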

SpinalFlow: An Architecture and Dataflow Tailored for Spiking Neural Networks

SpinalFlow is a novel SNN architecture that processes a compressed, time-stamped, sorted sequence of input spikes; it shows that, depending on the level of observed sparsity, SNN architectures can be competitive with ANN architectures in latency and energy for inference, thus lowering the barrier to practical deployment in scenarios demanding real-time learning.
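
A toy sketch of the dataflow idea, assuming temporal coding in which each neuron fires at most once: a neuron walks a time-sorted, time-stamped spike list, accumulates one synaptic update per spike, and can stop as soon as it fires (all names are illustrative, not SpinalFlow's actual interface).

def first_spike_time(spikes, weights, threshold):
    # spikes: iterable of (timestamp, synapse_id); weights: synapse_id -> weight.
    v = 0.0
    for t, syn in sorted(spikes):   # compressed, time-sorted input sequence
        v += weights[syn]
        if v >= threshold:
            return t                # at most one output spike, so stop early
    return None                     # neuron stays silent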

Spiking Neural Networks Hardware Implementations and Challenges

This survey presents the state of the art in hardware implementations of spiking neural networks, covers current trends in algorithm design from model selection to training mechanisms, and describes the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.

Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

This paper demonstrates the conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10, and the challenging ImageNet dataset.
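
A key ingredient of such conversions is data-based weight normalization: rescale each layer so that a high percentile of its ANN activations lands exactly at the SNN firing threshold. A minimal sketch of that recipe (the function name is illustrative; the 99.9th percentile is the robust choice advocated in this line of work):

import numpy as np

def normalize_for_conversion(layer_weights, layer_activations, pct=99.9):
    # Rescale layer by layer, carrying the previous layer's scale forward
    # so the network's end-to-end function is preserved.
    prev_scale, normalized = 1.0, []
    for w, a in zip(layer_weights, layer_activations):
        scale = np.percentile(a, pct)
        normalized.append(w * prev_scale / scale)
        prev_scale = scale
    return normalized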

Tianjic: A Unified and Scalable Chip Bridging Spike-Based and Continuous Neural Computation

This work presents a unified model description framework and a unified processing architecture, Tianjic, which covers the full stack from software to hardware, along with a compatible routing infrastructure that enables homogeneous and heterogeneous scalability on a decentralized many-core network.

Unsupervised learning of digit recognition using spike-timing-dependent plasticity

An SNN for digit recognition is presented, based on mechanisms with increased biological plausibility: conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold.
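
The pairwise exponential STDP rule at the heart of such models fits in a few lines. The sketch below uses common textbook constants and a hypothetical function name; the paper's conductance-based synapses and adaptive thresholds are not modeled.

import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    # Potentiate when the presynaptic spike precedes the postsynaptic one
    # (causal pairing); depress otherwise. Times in ms, weight clipped.
    dt = t_post - t_pre
    dw = a_plus * np.exp(-dt / tau_plus) if dt >= 0 else -a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, 0.0, w_max))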

Loihi: A Neuromorphic Manycore Processor with On-Chip Learning

Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state of the art in silicon modeling of spiking neural networks; it can solve LASSO optimization problems with over three orders of magnitude better energy-delay product than conventional solvers running on a CPU at iso-process/voltage/area.

Efficient Hardware Acceleration of CNNs using Logarithmic Data Representation with Arbitrary log-base

The presented method works without retraining the neural network and is therefore suitable for applications in which no labeled training data is available; hardware efficiency is evaluated in terms of FPGA utilization and energy requirements in comparison to regular 8-bit fixed-point multiplier-based acceleration.
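
The underlying idea: quantize weights to signed powers of an arbitrary base, so every multiplication becomes exponent arithmetic (a plain shift when the base is 2). A minimal sketch, assuming weights pre-normalized to |w| <= 1; the base and level count are illustrative choices.

import numpy as np

def log_quantize(w, base=2.0 ** 0.5, n_levels=16):
    # Snap each weight magnitude to the nearest power of `base` in the log
    # domain; magnitudes below half the smallest level round to zero.
    w = np.asarray(w, dtype=float)
    sign, mag = np.sign(w), np.abs(w)
    e = np.round(np.log(np.maximum(mag, 1e-12)) / np.log(base))
    e = np.clip(e, -(n_levels - 1), 0)
    smallest = base ** (-(n_levels - 1))
    return np.where(mag < 0.5 * smallest, 0.0, sign * base ** e)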

TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip

This work developed TrueNorth, a 65 mW real-time neurosynaptic processor that implements a non-von Neumann, low-power, highly parallel, scalable, and defect-tolerant architecture, and successfully demonstrated the use of TrueNorth-based systems in multiple applications, including visual object recognition.