Corpus ID: 235790750

Even Faster SNN Simulation with Lazy+Event-driven Plasticity and Shared Atomics

Dennis Bautembach, Iasonas Oikonomidis, Antonis A. Argyros
We present two novel optimizations that accelerate clock-based spiking neural network (SNN) simulators. The first one targets spike timing dependent plasticity (STDP). It combines lazy with event-driven plasticity and efficiently facilitates the computation of pre- and post-synaptic spikes using bitfields and integer intrinsics. It offers higher bandwidth than event-driven plasticity alone and achieves a 1.5×–2× speedup over our closest competitor. The second optimization targets spike delivery. We…

Spike: A GPU Optimised Spiking Neural Network Simulator
This work presents the Spike simulator with three key optimisations (timestep grouping, active synapse grouping, and delay insensitivity) that massively increase the speed of executing an SNN simulation and produce a simulator which is, on a single machine, faster than currently available simulators.
High Performance Simulation of Spiking Neural Network on GPGPUs
This work proposes a fine-grained network representation as a flexible and compact intermediate representation (IR) for SNNs, along with cross-population/-projection parallelism exploration to make full use of GPGPU resources.
CARLsim 4: An Open Source Library for Large Scale, Biologically Detailed Spiking Neural Network Simulation using Heterogeneous Clusters
CARLsim 4, a user-friendly SNN library written in C++ that can simulate large, biologically detailed neural networks, is released. It improves on the efficiency and scalability of earlier releases and adds new features such as leaky integrate-and-fire (LIF), 9-parameter Izhikevich, and multi-compartment neuron models, as well as fourth-order Runge-Kutta integration.
SpykeTorch: Efficient Simulation of Convolutional Spiking Neural Networks With at Most One Spike per Neuron
SpykeTorch is an open-source, high-speed simulation framework based on PyTorch that simulates convolutional SNNs with at most one spike per neuron and the rank-order encoding scheme; it is highly generic and capable of reproducing the results of various studies.
Larger GPU-accelerated brain simulations with procedural connectivity
This work describes extensions to GeNN that enable it to ‘procedurally’ generate connectivity and synaptic weights ‘on the go’ as spikes are triggered, instead of storing and retrieving them from memory, and finds that GPUs are well-suited to this approach.
Dynamic parallelism for synaptic updating in GPU-accelerated spiking neural network simulations
This work applies dynamic parallelism to synaptic updating in SNN simulations on a GPU, which eliminates the need to launch many parallel kernels at each time step and the associated lag of data transfer between CPU and GPU memories.
SpikeNET: an event-driven simulation package for modelling large networks of spiking neurons
This work describes the underlying computation and implementation of such a mechanism in SpikeNET, the authors' neural network simulation package; the type of model one can build is not only biologically compliant but also computationally efficient.
A new GPU library for fast simulation of large-scale networks of spiking neurons
The proposed NeuronGPU library achieves state-of-the-art performance, in terms of simulation time per second of biological activity, on a well-known cortical microcircuit model and on a balanced network of excitatory and inhibitory neurons, using AdEx neurons and conductance-based synapses.
Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks
The proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity.
Event-Driven Simulation Scheme for Spiking Neural Networks Using Lookup Tables to Characterize Neuronal Dynamics
This work implements and critically evaluates an event-driven algorithm (ED-LUT) that uses precalculated lookup tables to characterize synaptic and neuronal dynamics, and introduces an improved two-stage event-queue algorithm, which allows simulations to scale efficiently to highly connected networks with arbitrary propagation delays.