hxtorch.snn: Machine-learning-inspired Spiking Neural Network Modeling on BrainScaleS-2

@article{Spilger2022hxtorchsnnMS,
  title={hxtorch.snn: Machine-learning-inspired Spiking Neural Network Modeling on BrainScaleS-2},
  author={Philipp Spilger and Elias Arnold and Luca Blessing and Christian Mauch and Christian Pehle and Eric M{\"u}ller and Johannes Schemmel},
  journal={ArXiv},
  year={2022},
  volume={abs/2212.12210}
}
Neuromorphic systems require user-friendly software to support the design and optimization of experiments. In this work, we address this need by presenting our development of a machine-learning-based modeling framework for the BrainScaleS-2 neuromorphic system. This work represents an improvement over previous efforts, which either focused on the matrix-multiplication mode of BrainScaleS-2 or lacked full automation. Our framework, called hxtorch.snn, enables the hardware-in-the-loop training of…


References


hxtorch: PyTorch for BrainScaleS-2 - Perceptrons on Analog Neuromorphic Hardware

This work presents software facilitating the usage of the BrainScaleS-2 analog neuromorphic hardware system as an inference accelerator for artificial neural networks, and presents a model that classifies activities of daily living with smartphone sensor data.

Neuromorphic hardware in the loop: Training a deep spiking network on the BrainScaleS wafer-scale system

This paper demonstrates how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate, and shows that deep spiking networks emulated on analog neuromorphic devices can attain good computational performance despite its inherent variations.
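The in-the-loop scheme described above can be sketched in plain Python. This is an illustrative stand-in, not the BrainScaleS API: the "hardware" is mimicked by a weighted sum with random gain error, the host computes gradients against the measured output, and training thereby absorbs the substrate's distortions.

```python
import random

# In-the-loop training sketch with a noisy software stand-in for the analog
# substrate (all names here are illustrative, not the BrainScaleS software API).
random.seed(0)

def hardware_forward(w, x, gain_error=0.1):
    """Stand-in for an analog emulation: a weighted sum with gain distortion."""
    gain = 1.0 + random.uniform(-gain_error, gain_error)
    return gain * sum(wi * xi for wi, xi in zip(w, x))

def train_in_the_loop(w, x, target, lr=0.05, steps=200):
    """Forward pass on 'hardware', parameter update on the host."""
    for _ in range(steps):
        y = hardware_forward(w, x)                        # measure on device
        err = y - target                                  # host-side loss grad
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]  # idealized backward
    return w

w = train_in_the_loop([0.0, 0.0], x=[1.0, 0.5], target=1.0)
print(hardware_forward(w, [1.0, 0.5]))  # converges near the target despite noise
```

Because the error signal is always measured through the distorted forward pass, the learned weights implicitly calibrate away the device's fixed-pattern deviations, which is the core argument of the paper.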

The BrainScaleS-2 Accelerated Neuromorphic System With Hybrid Plasticity

The second generation of the BrainScaleS neuromorphic architecture is described, which combines a custom analog accelerator core supporting the accelerated physical emulation of bio-inspired spiking neural network primitives with a tightly coupled digital processor and a digital event-routing network.

BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python

It is argued that this package facilitates the use of spiking networks for large-scale machine learning problems and some simple examples by using BindsNET in practice are shown.

NxTF: An API and Compiler for Deep Spiking Neural Networks on Intel Loihi

NxTF: a programming interface derived from Keras and compiler optimized for mapping deep convolutional SNNs to the multi-core Intel Loihi architecture is developed, and NxTF on Deep Neural Networks trained directly on spikes as well as models converted from traditional DNNs are evaluated.

Inference with Artificial Neural Networks on Analog Neuromorphic Hardware

This paper discusses BrainScaleS-2 as an analog inference accelerator and presents calibration as well as optimization strategies, highlighting the advantages of training with hardware in the loop and classify the MNIST handwritten digits dataset using a two-dimensional convolution and two dense layers.

Surrogate Gradient Learning in Spiking Neural Networks: Bringing the Power of Gradient-Based Optimization to Spiking Neural Networks

This article elucidates, step by step, the problems typically encountered when training SNNs, guides the reader through the key concepts of synaptic plasticity and data-driven learning in the spiking setting, and introduces surrogate gradient methods as a particularly flexible and efficient way to overcome these challenges.
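The surrogate gradient idea can be made concrete with a minimal, dependency-free sketch (hxtorch.snn itself builds on PyTorch autograd; the names below are hypothetical). The forward pass uses the non-differentiable Heaviside step to emit a spike; the backward pass substitutes a SuperSpike-style fast-sigmoid derivative so a gradient can flow through the threshold.

```python
# Surrogate gradient sketch: Heaviside forward, smooth surrogate backward.

def spike_forward(v, theta=1.0):
    """Binary spike: 1 if the membrane potential crosses the threshold."""
    return 1.0 if v >= theta else 0.0

def spike_surrogate_grad(v, theta=1.0, beta=10.0):
    """Fast-sigmoid surrogate 1 / (beta * |v - theta| + 1)^2, used in place
    of the Heaviside's zero-almost-everywhere derivative."""
    return 1.0 / (beta * abs(v - theta) + 1.0) ** 2

# One gradient step on a single weight driving a leaky integrator, trained to
# emit a spike (target = 1) for a constant input current x.
w, x, target, lr = 0.1, 1.0, 1.0, 0.5
alpha, steps, v = 0.9, 5, 0.0        # leak factor, window length, potential
for _ in range(steps):
    v = alpha * v + w * x            # leaky integration of the input
s = spike_forward(v)                 # no spike yet: v is below threshold
# Loss L = 0.5 * (s - target)^2; chain rule with the surrogate in the middle:
dv_dw = sum(alpha ** k for k in range(steps)) * x
grad = (s - target) * spike_surrogate_grad(v) * dv_dw
w -= lr * grad                       # weight grows toward producing a spike
```

The surrogate is used only in the backward pass; the forward dynamics stay binary, which is what makes the approach compatible with spiking hardware.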

Convolutional networks for fast, energy-efficient neuromorphic computing

This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware

This work presents the software aspects of this endeavor for the BrainScaleS-2 system, a hybrid accelerated neuromorphic hardware architecture based on physical modeling, and introduces key aspects of the BrainScaleS-2 Operating System: experiment workflow, API layering, software design, and platform operation.