# Flexon: A Flexible Digital Neuron for Efficient Spiking Neural Network Simulations

```bibtex
@inproceedings{Lee2018FlexonAF,
  title     = {Flexon: A Flexible Digital Neuron for Efficient Spiking Neural Network Simulations},
  author    = {Dayeol Lee and Gwangmu Lee and Dongup Kwon and Sunghwa Lee and Youngsok Kim and Jangwoo Kim},
  booktitle = {2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA)},
  year      = {2018},
  pages     = {275--288}
}
```
• Published 1 June 2018
• Computer Science, Biology
Spiking Neural Networks (SNNs) play an important role in neuroscience as they help neuroscientists understand how the nervous system works. To model the nervous system, SNNs incorporate the concept of time into neurons and inter-neuron interactions called spikes; a neuron's internal state changes with respect to time and input spikes, and a neuron fires an output spike when its internal state satisfies certain conditions. As the neurons forming the nervous system behave differently, SNN…
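The abstract's description of a neuron whose internal state evolves with time and input spikes, firing when a condition is met, can be illustrated with a minimal leaky integrate-and-fire (LIF) sketch. This is not the paper's Flexon model; all parameter names and values below are illustrative defaults.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameters are illustrative, not taken from the Flexon paper.

def lif_step(v, input_current, dt=1.0, tau=20.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-65.0, r=1.0):
    """Advance the membrane potential one timestep; return (v, spiked)."""
    # Leak toward the resting potential plus input drive (Euler integration).
    dv = (-(v - v_rest) + r * input_current) / tau
    v = v + dt * dv
    if v >= v_thresh:          # internal state satisfies the firing condition
        return v_reset, True   # emit a spike and reset the internal state
    return v, False

# Drive the neuron with a constant current until it fires an output spike.
v, t, spiked = -65.0, 0, False
while not spiked:
    v, spiked = lif_step(v, input_current=20.0)
    t += 1
```

With these defaults the membrane potential climbs from rest toward an asymptote above threshold, so the loop terminates after a few dozen steps with one spike.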
## Citations (15)

FlexLearn: Fast and Highly Efficient Brain Simulations Using Flexible On-Chip Learning
• Computer Science
MICRO
• 2019
FlexLearn is presented, a flexible on-chip learning engine to enable fast and highly efficient brain simulations and an example flexible brain simulation processor by integrating the datapaths with the state-of-the-art flexible digital neuron and existing accelerator to support end-to-end simulations.
Spiking Neural Networks in Spintronic Computational RAM
• Computer Science
ACM Trans. Archit. Code Optim.
• 2021
A promising alternative to overcome scalability limitations is proposed, based on a network of in-memory SNN accelerators, which can reduce the energy consumption by up to 150.25× when compared to a representative ASIC solution.
Low-Cost Adaptive Exponential Integrate-and-Fire Neuron Using Stochastic Computing
• Computer Science
IEEE Transactions on Biomedical Circuits and Systems
• 2020
Experimental results show that the proposed low-cost adaptive exponential integrate-and-fire neuron can precisely reproduce wide range biological behaviors as the original model, with higher computational performance and lower hardware cost against state-of-the-art AdEx hardware neurons.
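For context, the adaptive exponential integrate-and-fire (AdEx) model referenced above couples the membrane potential $V$ with an adaptation current $w$ (standard Brette–Gerstner formulation; symbols follow the usual convention, not this paper's notation):

```latex
\begin{aligned}
C \frac{dV}{dt} &= -g_L\,(V - E_L) + g_L \Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right) - w + I,\\
\tau_w \frac{dw}{dt} &= a\,(V - E_L) - w,
\end{aligned}
\qquad \text{on spike: } V \to V_r,\; w \to w + b.
```

The exponential term and the two-variable dynamics are what make AdEx costly in hardware, motivating the stochastic-computing approximation.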
An Inference and Learning Engine for Spiking Neural Networks in Computational RAM (CRAM)
• Computer Science
ArXiv
• 2020
This work proposes a promising alternative, an in-memory SNN accelerator based on Spintronic Computational RAM (CRAM) to overcome scalability limitations, which can reduce the energy consumption by up to 164.1× when compared to a representative ASIC solution.
NEBULA: A Neuromorphic Spin-Based Ultra-Low Power Architecture for SNNs and ANNs
• Computer Science
2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA)
• 2020
This paper proposes a comprehensive design spanning across the device, circuit, architecture and algorithm levels to build an ultra low-power architecture for SNN and ANN inference, using spintronics-based magnetic tunnel junction devices that have been shown to function as both neuro-synaptic crossbars as well as thresholding neurons and can operate at ultra low voltage and current levels.
Even Faster SNN Simulation with Lazy+Event-driven Plasticity and Shared Atomics
• Computer Science
2021 IEEE High Performance Extreme Computing Conference (HPEC)
• 2021
Two novel optimizations that accelerate clock-based spiking neural network (SNN) simulators are presented; they represent the final evolutionary stage of years of iteration on STDP and spike delivery inside "Spice" (/spaɪk/), the state-of-the-art SNN simulator.
A Digital Hardware System for Spiking Network of Tactile Afferents
• Computer Science
Frontiers in Neuroscience
• 2019
Applying machine learning algorithms on the artificial spiking patterns collected from FPGA, the proposed neuromorphic system provides the opportunity for development of new tactile processing component for robotic and prosthetic applications.
An Energy-Quality Scalable STDP Based Sparse Coding Processor With On-Chip Learning Capability
• Computer Science
IEEE Transactions on Biomedical Circuits and Systems
• 2020
This article designed and implemented the hardware of the STDP based sparse coding using 65nm CMOS process, and the proposed SNN architecture can dynamically trade off algorithmic quality for computation energy for Natural image and MNIST applications.
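The STDP rule underlying the processor above can be sketched as a pair-based weight update; the amplitudes and time constant below are hypothetical defaults, not values from the article.

```python
import math

# Pair-based STDP weight update sketch.
# A_PLUS, A_MINUS, and TAU are illustrative constants (not from the paper).
A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        # Pre-synaptic spike before post-synaptic spike: potentiation.
        return A_PLUS * math.exp(-dt / TAU)
    # Post before pre (or simultaneous): depression.
    return -A_MINUS * math.exp(dt / TAU)
```

Causal pairs (pre before post) strengthen the synapse and anti-causal pairs weaken it, with both effects decaying exponentially in the spike-time difference.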
Multi-GPU SNN Simulation with Perfect Static Load Balancing
• Computer Science
ArXiv
• 2021
We present a SNN simulator which scales to millions of neurons, billions of synapses, and 8 GPUs. This is made possible by 1) a novel, cache-aware spike transmission algorithm 2) a model parallel…
Multi-GPU SNN Simulation with Static Load Balancing
• Antonis A. Argyros
• Computer Science
2021 International Joint Conference on Neural Networks (IJCNN)
• 2021
We present a SNN simulator which scales to millions of neurons, billions of synapses, and 8 GPUs. This is made possible by 1) a novel, cache-aware spike transmission algorithm 2) a model parallel…

## References

Showing 1–10 of 61 references
Limits to high-speed simulations of spiking neural networks using general-purpose computers
• Computer Science
Front. Neuroinform.
• 2014
To study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters, however, to run simulations substantially faster than real-time, special hardware is a prerequisite.
NeMo: A Platform for Neural Modelling of Spiking Neurons Using GPUs
• Computer Science, Biology
2009 20th IEEE International Conference on Application-specific Systems, Architectures and Processors
• 2009
NeMo is presented, a platform for real-time spiking neural networks simulations which achieves high performance through the use of highly parallel commodity hardware in the form of graphics processing units (GPUs).
NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors
• Computer Science
Front. Neurosci.
• 2016
With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation and supports the spike-timing-dependent plasticity (STDP) rule for learning.
An efficient automated parameter tuning framework for spiking neural networks
• Computer Science
Front. Neurosci.
• 2014
The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores
• Computer Science
The 2013 International Joint Conference on Neural Networks (IJCNN)
• 2013
A simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates is developed.
Spiking Neural Networks
• Computer Science, Biology
Int. J. Neural Syst.
• 2009
A state-of-the-art review of the development of spiking neurons and SNNs is presented, and insight into their evolution as the third generation neural networks is provided.
Neuromorphic hardware in the loop: Training a deep spiking network on the BrainScaleS wafer-scale system
• Computer Science
2017 International Joint Conference on Neural Networks (IJCNN)
• 2017
This paper demonstrates how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate, and shows that deep spiking networks emulated on analog neuromorphic devices can attain good computational performance despite the inherent variations of the analog substrate.
An Efficient Simulation Environment for Modeling Large-Scale Cortical Processing
• Computer Science, Biology
Front. Neuroinform.
• 2011
A spiking neural network simulator, which is both easy to use and computationally efficient, for the generation of large-scale computational neuroscience models and implements current or conductance based Izhikevich neuron networks, having spike-timing dependent plasticity and short-term plasticity.