Flexon: A Flexible Digital Neuron for Efficient Spiking Neural Network Simulations

@article{Lee2018FlexonAF,
  title={Flexon: A Flexible Digital Neuron for Efficient Spiking Neural Network Simulations},
  author={Dayeol Lee and Gwangmu Lee and Dongup Kwon and Sunghwa Lee and Youngsok Kim and Jangwoo Kim},
  journal={2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA)},
  year={2018},
  pages={275-288}
}
  • Dayeol Lee, Gwangmu Lee, Dongup Kwon, Sunghwa Lee, Youngsok Kim, Jangwoo Kim
  • Published 1 June 2018
  • Computer Science, Biology
  • 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA)
Spiking Neural Networks (SNNs) play an important role in neuroscience as they help neuroscientists understand how the nervous system works. To model the nervous system, SNNs incorporate the concept of time into neurons and inter-neuron interactions called spikes; a neuron's internal state changes with respect to time and input spikes, and a neuron fires an output spike when its internal state satisfies certain conditions. As the neurons forming the nervous system behave differently, SNN… 
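
For concreteness, the neuron abstraction described above can be stated in a few lines of clock-driven Python. This is a minimal sketch using the common leaky integrate-and-fire (LIF) model, one of the neuron types such simulators typically support; the function name lif_step and all constants are illustrative, not taken from the Flexon paper.

import numpy as np

def lif_step(v, spikes_in, weights, dt=1.0, tau=20.0,
             v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """One clock-driven update of a layer of leaky integrate-and-fire neurons.

    v         : membrane potentials, shape [n]  (illustrative units)
    spikes_in : binary spikes from presynaptic neurons, shape [m]
    weights   : synaptic weight matrix, shape [n, m]
    """
    i_syn = weights @ spikes_in                # input spikes drive synaptic current
    v = v + (dt / tau) * (v_rest - v) + i_syn  # internal state evolves with time and input
    fired = v >= v_thresh                      # fire when the state crosses threshold
    v = np.where(fired, v_reset, v)            # reset the neurons that fired
    return v, fired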
FlexLearn: Fast and Highly Efficient Brain Simulations Using Flexible On-Chip Learning
TLDR
FlexLearn, a flexible on-chip learning engine enabling fast and highly efficient brain simulations, is presented, along with an example flexible brain-simulation processor that integrates the FlexLearn datapaths with a state-of-the-art flexible digital neuron and an existing accelerator to support end-to-end simulations.
Spiking Neural Networks in Spintronic Computational RAM
TLDR
A promising alternative to overcome scalability limitations is proposed, based on a network of in-memory SNN accelerators, which can reduce the energy consumption by up to 150.25× when compared to a representative ASIC solution.
Low-Cost Adaptive Exponential Integrate-and-Fire Neuron Using Stochastic Computing
TLDR
Experimental results show that the proposed low-cost adaptive exponential integrate-and-fire neuron can precisely reproduce the same wide range of biological behaviors as the original model, with higher computational performance and lower hardware cost than state-of-the-art AdEx hardware neurons.
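
For reference, the AdEx dynamics that such a hardware neuron approximates can be written as a plain floating-point Euler step. The paper's stochastic-computing implementation is not reconstructed here; the defaults below are the commonly cited Brette-Gerstner parameter values, used purely for illustration.

import numpy as np

def adex_step(v, w, i_in, dt=0.1,
              c=281.0, g_l=30.0, e_l=-70.6,   # capacitance (pF), leak conductance (nS), leak reversal (mV)
              delta_t=2.0, v_t=-50.4,         # exponential slope factor and rheobase threshold (mV)
              tau_w=144.0, a=4.0, b=80.5,     # adaptation time constant (ms), coupling (nS), spike increment (pA)
              v_peak=20.0, v_reset=-70.6):
    """One Euler step of the adaptive exponential integrate-and-fire (AdEx) model."""
    dv = (-g_l * (v - e_l)
          + g_l * delta_t * np.exp((v - v_t) / delta_t)  # exponential spike-initiation term
          - w + i_in) / c
    dw = (a * (v - e_l) - w) / tau_w                     # adaptation current dynamics
    v, w = v + dt * dv, w + dt * dw
    fired = v >= v_peak                 # spike when the upswing hits the cutoff
    v = np.where(fired, v_reset, v)     # reset the membrane potential...
    w = np.where(fired, w + b, w)       # ...and bump the adaptation current
    return v, w, fired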
An Inference and Learning Engine for Spiking Neural Networks in Computational RAM (CRAM)
TLDR
This work proposes a promising alternative, an in-memory SNN accelerator based on Spintronic Computational RAM (CRAM), to overcome scalability limitations; it can reduce the energy consumption by up to 164.1× when compared to a representative ASIC solution.
NEBULA: A Neuromorphic Spin-Based Ultra-Low Power Architecture for SNNs and ANNs
  • Sonali Singh, Anup Sarma, C. Das
  • Computer Science
    2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA)
  • 2020
TLDR
This paper proposes a comprehensive design spanning the device, circuit, architecture, and algorithm levels to build an ultra-low-power architecture for SNN and ANN inference. It uses spintronics-based magnetic tunnel junction devices, which have been shown to function as both neuro-synaptic crossbars and thresholding neurons and can operate at ultra-low voltage and current levels.
Even Faster SNN Simulation with Lazy+Event-driven Plasticity and Shared Atomics
TLDR
Two novel optimizations that accelerate clock-based spiking neural network (SNN) simulators are presented; they represent the final evolutionary stages of years of iteration on STDP and spike delivery inside "Spice" (/spaɪk/), the state-of-the-art SNN simulator.
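
The GPU-specific machinery (shared atomics, cache-aware delivery) is not reproduced here, but the plasticity rule being accelerated is standard pair-based STDP. Below is a minimal event-driven sketch for a single synapse, with the traces decayed lazily only when a spike actually arrives rather than on every tick; all names and constants are illustrative, not from the paper.

import math

def stdp_event(w, x_pre, y_post, t, t_last, is_pre_spike,
               tau_pre=20.0, tau_post=20.0,   # trace time constants (ms)
               a_plus=0.01, a_minus=0.012,    # potentiation / depression step sizes
               w_min=0.0, w_max=1.0):
    """Event-driven pair-based STDP update for one synapse.

    Traces are decayed lazily: only when a spike arrives, not every tick.
    """
    dt = t - t_last
    x_pre *= math.exp(-dt / tau_pre)    # presynaptic trace, decayed since last event
    y_post *= math.exp(-dt / tau_post)  # postsynaptic trace, decayed since last event
    if is_pre_spike:
        w = max(w_min, w - a_minus * y_post)  # pre fires after post -> depression
        x_pre += 1.0
    else:
        w = min(w_max, w + a_plus * x_pre)    # post fires after pre -> potentiation
        y_post += 1.0
    return w, x_pre, y_post, t   # caller stores t as the new t_last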
A Digital Hardware System for Spiking Network of Tactile Afferents
TLDR
By applying machine learning algorithms to the artificial spiking patterns collected from the FPGA, the proposed neuromorphic system provides an opportunity to develop new tactile processing components for robotic and prosthetic applications.
An Energy-Quality Scalable STDP Based Sparse Coding Processor With On-Chip Learning Capability
TLDR
This article designs and implements the hardware of STDP-based sparse coding in a 65 nm CMOS process; the proposed SNN architecture can dynamically trade off algorithmic quality for computation energy in natural-image and MNIST applications.
Multi-GPU SNN Simulation with Perfect Static Load Balancing
We present an SNN simulator which scales to millions of neurons, billions of synapses, and 8 GPUs. This is made possible by 1) a novel, cache-aware spike transmission algorithm, and 2) a model parallel
Multi-GPU SNN Simulation with Static Load Balancing
  • Antonis A. Argyros
  • Computer Science
    2021 International Joint Conference on Neural Networks (IJCNN)
  • 2021
We present an SNN simulator which scales to millions of neurons, billions of synapses, and 8 GPUs. This is made possible by 1) a novel, cache-aware spike transmission algorithm, and 2) a model parallel
...
...

References

SHOWING 1-10 OF 61 REFERENCES
Limits to high-speed simulations of spiking neural networks using general-purpose computers
TLDR
To study plasticity in medium-sized spiking neural networks, adequate simulation tools which run efficiently on small clusters are readily available; however, to run simulations substantially faster than real time, special hardware is a prerequisite.
NeMo: A Platform for Neural Modelling of Spiking Neurons Using GPUs
TLDR
NeMo is presented, a platform for real-time spiking neural network simulations which achieves high performance through the use of highly parallel commodity hardware in the form of graphics processing units (GPUs).
NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors
TLDR
With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation and supports the spike-timing-dependent plasticity (STDP) rule for learning.
An efficient automated parameter tuning framework for spiking neural networks
TLDR
The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores
TLDR
A simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates is developed.
Spiking Neural Networks
TLDR
A state-of-the-art review of the development of spiking neurons and SNNs is presented, and insight into their evolution as the third generation neural networks is provided.
Networks of spiking neurons: the third generation of neural network models
  • W. Maass
  • Biology, Computer Science
  • 1997
Neuromorphic hardware in the loop: Training a deep spiking network on the BrainScaleS wafer-scale system
TLDR
This paper demonstrates how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate, and shows that deep spiking networks emulated on analog neuromorphic devices can attain good computational performance despite the inherent variations of the analog substrate.
An Efficient Simulation Environment for Modeling Large-Scale Cortical Processing
TLDR
A spiking neural network simulator, both easy to use and computationally efficient, for the generation of large-scale computational neuroscience models; it implements current- or conductance-based Izhikevich neuron networks with spike-timing-dependent plasticity and short-term plasticity.
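
The Izhikevich model named here has a compact closed form, which is what makes it attractive for large-scale simulation. Below is a minimal Euler-step sketch with the standard regular-spiking parameters (a=0.02, b=0.2, c=-65, d=8); the equations follow the published model, while everything else (names, time step) is illustrative.

import numpy as np

def izhikevich_step(v, u, i_in, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One Euler step of the Izhikevich neuron model (regular-spiking defaults)."""
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_in)  # membrane potential (mV)
    u = u + dt * a * (b * v - u)                              # recovery variable
    fired = v >= 30.0                 # spike cutoff at +30 mV
    v = np.where(fired, c, v)         # reset membrane potential
    u = np.where(fired, u + d, u)     # bump the recovery variable
    return v, u, fired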
...
...