Spiker: an FPGA-optimized Hardware accelerator for Spiking Neural Networks

  • Alessio Carpegna, Alessandro Savino, Stefano Di Carlo
  • 2022 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)

Spiking Neural Networks (SNNs) are an emerging type of biologically plausible and efficient Artificial Neural Network (ANN). This work presents a hardware accelerator for high-performance SNN inference, targeting a Xilinx Artix-7 Field Programmable Gate Array (FPGA). The neuron model used is the Leaky Integrate and Fire (LIF). Execution is clock-driven, meaning that the internal state of each neuron is updated at every clock cycle, even in the absence of spikes…
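The clock-driven LIF update described in the abstract can be sketched as follows. This is a minimal illustrative model, not the accelerator's actual fixed-point implementation; the function name `lif_step` and all parameter values are assumptions.

```python
def lif_step(v, in_current, v_rest=0.0, v_thresh=1.0, decay=0.9):
    """One clock-driven LIF update: leak, integrate, fire, reset.

    Called every clock cycle, even when in_current is zero (pure leak).
    """
    v = v_rest + decay * (v - v_rest) + in_current  # leak toward rest, add input
    if v >= v_thresh:                               # threshold crossing -> spike
        return v_rest, 1                            # reset membrane, emit spike
    return v, 0

# The state advances at every cycle regardless of input activity:
v, spikes = 0.5, []
for i_in in [0.0, 0.6, 0.0, 0.8, 0.0]:
    v, s = lif_step(v, i_in)
    spikes.append(s)
```

In hardware this per-cycle update is what distinguishes a clock-driven design from an event-driven one, where state changes only when a spike arrives.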


Fast Exploration of the Impact of Precision Reduction on Spiking Neural Networks

This work employs an Interval Arithmetic model to develop an exploration methodology that exploits the model's ability to propagate the approximation error, detecting when the approximation exceeds the limits tolerable by the application.

Prediction of the Impact of Approximate Computing on Spiking Neural Networks via Interval Arithmetic

This work first extracts the computation flow of an SNN, then employs Interval Arithmetic (IA) to model the propagation of the approximation error, which enables a quick evaluation of the impact of approximation.
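The two works above both rely on Interval Arithmetic to bound how a quantization error propagates through an SNN's computation. A toy sketch of the idea, with illustrative values and helper names that are not the authors' actual model:

```python
def interval_add(a, b):
    """Sum of two intervals [lo, hi]."""
    return (a[0] + b[0], a[1] + b[1])

def interval_scale(w, a):
    """Interval scaled by a (possibly negative) weight."""
    lo, hi = w * a[0], w * a[1]
    return (min(lo, hi), max(lo, hi))

# Each quantized input is modeled as an interval [x - eps, x + eps];
# propagating intervals through the weighted sum yields a bound on the
# accumulated approximation error at the neuron's input.
eps = 0.05
inputs = [0.8, -0.3, 0.5]
weights = [0.6, 1.2, -0.4]

acc = (0.0, 0.0)
for x, w in zip(inputs, weights):
    acc = interval_add(acc, interval_scale(w, (x - eps, x + eps)))
```

If the resulting interval lies entirely on one side of the firing threshold, the spike decision is provably unaffected by the precision reduction; if it straddles the threshold, the approximation may change the output.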

The Spike-Timing Dependence of Plasticity

The MNIST Database of Handwritten Digit Images for Machine Learning Research [Best of the Web]

  • L. Deng
  • Computer Science
    IEEE Signal Processing Magazine
  • 2012
In this issue, “Best of the Web” presents the modified National Institute of Standards and Technology (MNIST) resources, consisting of a collection of handwritten digit images used extensively in optical character recognition and machine learning research.

Unsupervised learning of digit recognition using spike-timing-dependent plasticity

A SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold is presented.

Darwin: a neuromorphic hardware co-processor based on Spiking Neural Networks

The Darwin Neural Processing Unit (NPU) is presented, a neuromorphic hardware co-processor based on SNN implemented with digital logic, supporting a maximum of 2048 neurons, 2048² = 4,194,304 synapses, and 15 possible synaptic delays.

A Fast and Energy-Efficient SNN Processor With Adaptive Clock/Event-Driven Computation Scheme and Online Learning

The proposed SNN processor is suitable for real-time and energy-constrained applications, and achieves computation time of 3.15 ms/image and online learning energy consumption of 0.297 for the MNIST 10-class dataset.

FSpiNN: An Optimization Framework for Memory-Efficient and Energy-Efficient Spiking Neural Networks

FSpiNN, an optimization framework for obtaining memory-efficient and energy-efficient SNNs for training and inference with unsupervised learning capability, is proposed; it maintains accuracy by reducing the computational requirements of neuronal and STDP operations while improving the accuracy of STDP-based learning.

Deep Learning for Edge Computing: Current Trends, Cross-Layer Optimizations, and Open Research Challenges

Current trends in such optimizations for deep learning, which must be performed at both the software and hardware levels, are surveyed, and key open mid-term and long-term research challenges are discussed.

Brian 2: an intuitive and efficient neural simulator

Brian 2 is a complete rewrite of Brian that uses runtime code generation with a procedural, equation-oriented approach, enabling scientists to write code that is particularly simple and concise, closely matching the way they conceptualise their models.

Selection and Optimization of Temporal Spike Encoding Methods for Spiking Neural Networks

This paper proposes a methodology of a three-step encoding workflow: method selection by signal characteristics, parameter optimization by error metrics between original and reconstructed signals, and validation by comparison of the original signal and the encoded spike train.
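One of the temporal encoding methods such a workflow would select from is send-on-delta (threshold-based) encoding. The sketch below shows the encode/reconstruct/error-metric loop that the parameter-optimization step relies on; the function names, signal, and threshold value are illustrative assumptions, not the paper's actual method set.

```python
def delta_encode(signal, threshold):
    """Emit +1/-1 spikes whenever the signal moves by more than `threshold`."""
    spikes, ref = [], signal[0]
    for x in signal[1:]:
        if x - ref >= threshold:
            spikes.append(1); ref += threshold
        elif ref - x >= threshold:
            spikes.append(-1); ref -= threshold
        else:
            spikes.append(0)
    return spikes

def delta_decode(spikes, start, threshold):
    """Reconstruct a piecewise approximation of the signal from the spikes."""
    out, ref = [start], start
    for s in spikes:
        ref += s * threshold
        out.append(ref)
    return out

def rmse(a, b):
    """Error metric comparing original and reconstructed signals."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

signal = [0.0, 0.3, 0.7, 0.6, 0.1]
enc = delta_encode(signal, threshold=0.25)
rec = delta_decode(enc, signal[0], threshold=0.25)
err = rmse(signal, rec)  # the quantity the threshold would be tuned against
```

Sweeping `threshold` and picking the value that minimizes `err` (subject to a spike-rate budget) is the kind of parameter optimization the second step of the workflow describes.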

Industry 4.0: A survey on technologies, applications and open research issues

  • Yang Lu
  • Computer Science
    J. Ind. Inf. Integr.
  • 2017