Neuromorphic hardware in the loop: Training a deep spiking network on the BrainScaleS wafer-scale system

@article{Schmitt2017NeuromorphicHI,
  title={Neuromorphic hardware in the loop: Training a deep spiking network on the BrainScaleS wafer-scale system},
  author={Sebastian Schmitt and Johann Klaehn and Guillaume Bellec and Andreas Gr{\"u}bl and Maurice Guettler and Andreas Hartel and Stephan Hartmann and Dan Husmann de Oliveira and Kai Husmann and Vitali Karasenko and Mitja Kleider and Christoph Koke and Christian Mauch and Eric M{\"u}ller and Paul M{\"u}ller and Johannes Partzsch and Mihai A. Petrovici and Stefan Schiefer and Stefan Scholze and Bernhard Vogginger and Robert A. Legenstein and Wolfgang Maass and Christian Mayr and Johannes Schemmel and Karlheinz Meier},
  journal={2017 International Joint Conference on Neural Networks (IJCNN)},
  year={2017},
  pages={2227-2234}
}
Emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating them on conventional computers, particularly in terms of speed and energy consumption. However, this usually comes at the cost of reduced control over the dynamics of the emulated networks. In this paper, we demonstrate how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate. We first convert a deep neural network trained in… 
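
A minimal sketch of this loop in Python, with a mocked hardware call standing in for the actual BrainScaleS emulation (all function names are illustrative, not part of the BrainScaleS software stack):

```python
import numpy as np

def run_on_hardware(weights, inputs):
    """Stand-in for emulating the network on the analog substrate;
    fixed-pattern noise models substrate-induced distortions."""
    distortion = 1.0 + 0.1 * np.random.RandomState(0).randn(*weights.shape)
    return np.maximum(inputs @ (weights * distortion).T, 0.0)

def train_in_the_loop(weights, inputs, targets, lr=0.01, steps=200):
    """Iteratively refine the weights using the hardware's actual
    output, so the updates absorb the substrate's distortions."""
    for _ in range(steps):
        rates = run_on_hardware(weights, inputs)          # measured activity
        error = rates - targets                           # host-side error
        weights -= lr * (error.T @ inputs) / len(inputs)  # update on the host
    return weights  # in a real setup, rewritten to the hardware each step
```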

Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware

The methodology of converting pre-trained non-spiking networks to spiking neural networks is used to evaluate the performance loss and to measure the energy per inference for three neuromorphic hardware systems and common simulation frameworks for CPU (NEST) and CPU/GPU (GeNN).
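
Conversion pipelines of this kind typically rescale the trained weights so that unit activations map onto attainable firing rates; a minimal sketch of data-based weight normalization in that spirit (names are illustrative, not taken from the paper):

```python
import numpy as np

def normalize_weights(layer_weights, layer_activations):
    """Rescale each layer so its maximum training-set activation maps
    to the maximum firing rate of the spiking substrate (sketch of the
    common data-based normalization step in ANN-to-SNN conversion)."""
    normalized, prev_scale = [], 1.0
    for W, acts in zip(layer_weights, layer_activations):
        scale = np.max(acts)              # largest activation of this layer
        normalized.append(W * prev_scale / scale)
        prev_scale = scale
    return normalized
```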

Training spiking multi-layer networks with surrogate gradients on an analog neuromorphic substrate

This work develops a hardware-in-the-loop strategy for training multi-layer spiking networks with surrogate gradients on the analog BrainScaleS-2 chip, demonstrating low-energy spiking network processing on an analog neuromorphic substrate and setting several new benchmarks for hardware systems in terms of classification accuracy, processing speed, and efficiency.
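
Surrogate gradient training keeps the hard spike threshold in the forward pass but substitutes a smooth derivative in the backward pass; a minimal PyTorch-style sketch with a fast-sigmoid surrogate (the exact surrogate used on BrainScaleS-2 may differ):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike nonlinearity with a fast-sigmoid surrogate
    derivative; beta sets the surrogate's steepness."""
    beta = 10.0

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()              # hard threshold going forward

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + SurrogateSpike.beta * v.abs()) ** 2
        return grad_output * surrogate      # smooth derivative going backward

# usage: spikes = SurrogateSpike.apply(v_membrane - v_threshold)
```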

Neuromorphic Algorithm-hardware Codesign for Temporal Pattern Learning

This work derives an efficient training algorithm for leaky integrate-and-fire (LIF) neurons that can train an SNN to learn complex spatiotemporal patterns, and develops a CMOS circuit implementation for a memristor-based network of neurons and synapses that retains the critical neural dynamics with reduced complexity.
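
For reference, the discrete-time dynamics of a leaky integrate-and-fire layer fit in a few lines; a sketch with instantaneous (delta-shaped) synapses, not the paper's specific formulation:

```python
import numpy as np

def lif_step(v, spikes_in, W, dt=1e-3, tau_m=20e-3, v_th=1.0, v_reset=0.0):
    """One Euler step of a LIF layer: leak toward rest, integrate
    weighted input spikes, fire on threshold crossing, then reset."""
    v = v + dt / tau_m * (-v) + W @ spikes_in
    spikes_out = (v >= v_th).astype(float)
    v = np.where(spikes_out > 0, v_reset, v)
    return v, spikes_out
```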

Accelerating spiking neural network training

This work proposes a new technique for directly training single-spike-per-neuron SNNs that eliminates all sequential computation and relies exclusively on vectorised operations, solving certain tasks with over a 95.68% reduction in spike counts relative to a conventionally trained SNN.
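
With one spike per neuron and (for instance) constant input current, first-spike times have a closed form, which is what makes a fully vectorised, time-step-free computation possible; a sketch under those assumptions, not the paper's exact method:

```python
import numpy as np

def first_spike_times(currents, theta=1.0, tau=20e-3):
    """Closed-form first-spike times of LIF neurons under constant
    input, computed for all neurons at once without sequential
    simulation."""
    t = np.full_like(currents, np.inf)   # sub-threshold neurons never fire
    active = currents > theta
    t[active] = tau * np.log(currents[active] / (currents[active] - theta))
    return t
```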

Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

A scalable benchmark based on a spiking neural network implementation of the binary neural associative memory is described; it makes it possible to test the quality of the neuron model implementation and to explain significant deviations from the expected reference output.
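
The underlying model is the classic Willshaw-style binary associative memory, which stores pattern pairs as the clipped (OR-ed) sum of outer products; a minimal sketch with illustrative names:

```python
import numpy as np

def store(patterns_in, patterns_out):
    """Clipped Hebbian learning: OR together the outer products of all
    binary input/output pattern pairs."""
    M = np.zeros((patterns_out.shape[1], patterns_in.shape[1]), dtype=int)
    for x, y in zip(patterns_in, patterns_out):
        M |= np.outer(y, x)
    return M

def recall(M, x):
    """Retrieve by thresholding the dendritic sums at the number of
    active input units."""
    return (M @ x >= x.sum()).astype(int)
```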

Neuromorphic Architecture for Small-Scale Neocortical Network Emulation

A neuromorphic platform is presented that can emulate a small-scale cortical network with the diverse types of neurons and synapses found in cortical circuits, including configurable long- and short-term dynamic synapses providing inhibition, excitation, weight depression and facilitation, and spike-timing-dependent plasticity (STDP) dynamics.
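
As an illustration of one of these mechanisms, a minimal pair-based STDP update; the parameter values are placeholders, not those of the platform:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20e-3, tau_minus=20e-3):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one, depress otherwise; weights stay in [0, 1]."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau_plus)    # causal pairing: LTP
    else:
        w -= a_minus * np.exp(dt / tau_minus)   # anti-causal pairing: LTD
    return np.clip(w, 0.0, 1.0)
```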

Surrogate gradients for analog neuromorphic computing

This work presents a learning framework that yields bio-inspired spiking neural networks with high performance, low inference latency, and sparse spike-coding schemes, self-corrects for device mismatch, and demonstrates surrogate gradient learning on the BrainScaleS-2 analog neuromorphic system using an in-the-loop approach.

An FPGA Implementation of Deep Spiking Neural Networks for Low-Power and Fast Classification

A hardware architecture is proposed for the efficient implementation of SNNs, with a new spiking max-pooling method to reduce computational complexity and approaches based on shift registers and coarse-grained parallelism to accelerate the convolution operation.
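
The paper's max-pooling circuit is not reproduced here; one plausible software sketch of spiking max-pooling forwards only the spikes of the most active unit per window, using running spike counts as the activity estimate (illustrative only):

```python
import numpy as np

def spiking_max_pool(spikes, counts, window=2):
    """Forward, per pooling window, the current spike of the unit with
    the highest accumulated spike count (assumes dimensions divisible
    by `window`)."""
    counts += spikes                      # update running activity estimate
    h, w = spikes.shape
    pooled = np.zeros((h // window, w // window))
    for i in range(0, h, window):
        for j in range(0, w, window):
            win = counts[i:i + window, j:j + window]
            k, l = np.unravel_index(np.argmax(win), win.shape)
            pooled[i // window, j // window] = spikes[i + k, j + l]
    return pooled, counts
```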
...

References


Demonstrating Hybrid Learning in a Flexible Neuromorphic Hardware System

To enable flexibility in the learning mechanisms that can be implemented while retaining the high efficiency associated with neuromorphic implementations, a general-purpose processor with full-custom analog elements is presented, providing a platform for flexible and efficient learning in neuroscientific research and technological applications.

Six Networks on a Universal Neuromorphic Computing Substrate

This study presents a highly configurable neuromorphic computing substrate, a mixed-signal chip explicitly designed as a universal neural network emulator, and uses it to emulate several types of neural networks.

A wafer-scale neuromorphic hardware system for large-scale neural modeling

An integrated software/hardware framework has been developed, centered around a unified neural system description language called PyNN, which allows the scientist to describe a model and execute it transparently on either a neuromorphic hardware system or a numerical simulator.

Convolutional networks for fast, energy-efficient neuromorphic computing

This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

A pulse communication flow ready for accelerated neuromorphic experiments

The capability of such a system to provide accurate long-term stimulation for emulated spiking networks and to trace their activity is characterized, and the implementation is shown to meet the needs of learning experiments, an important issue for state-of-the-art neuromorphic systems.

Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms

This article provides a generic methodological environment for configurable neuromorphic devices targeted at emulating large-scale, functional neural networks, and suggests generic compensation mechanisms for coping with the inevitable distortions.
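
A compensation scheme of this generic kind can be sketched as probing the hardware's effective transfer curve, fitting a linear distortion model, and pre-distorting target values to cancel it (illustrative, not the article's specific procedure):

```python
import numpy as np

def fit_distortion(probe_inputs, measure_hw_response):
    """Fit measured = gain * ideal + offset from calibration probes;
    `measure_hw_response` is a hypothetical hardware-measurement hook."""
    measured = measure_hw_response(probe_inputs)
    gain, offset = np.polyfit(probe_inputs, measured, 1)
    return gain, offset

def compensate(target, gain, offset):
    """Pre-distort a target value so the hardware reproduces it."""
    return (target - offset) / gain
```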

A Convolutional Neural Network Tolerant of Synaptic Faults for Low-Power Analog Hardware

A chip-in-the-loop version of the iterative perceptron rule is introduced for training the output layer, and the influence of various types of errors on all network layers is thoroughly investigated, using the MNIST database of handwritten digits as a benchmark.
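
A chip-in-the-loop perceptron update can be sketched as follows, with `run_chip` as a hypothetical stand-in for evaluating the output layer on the hardware:

```python
import numpy as np

def train_output_layer(W, inputs, labels, run_chip, lr=0.1, epochs=10):
    """Iterative perceptron rule with the forward pass on the chip, so
    the learned weights absorb the hardware's synaptic faults."""
    for _ in range(epochs):
        for x, y in zip(inputs, labels):
            pred = run_chip(W, x)             # hardware forward pass
            W += lr * np.outer(y - pred, x)   # host-side perceptron update
    return W
```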

Fast Sigmoidal Networks via Spiking Neurons

W. Maass, Neural Computation, 1997
It is shown that networks of relatively realistic mathematical models of biological neurons can, in principle, simulate arbitrary feedforward sigmoidal neural nets in a way that had not previously been considered, and that such networks are universal approximators in the sense that they can approximate, with respect to temporal coding, any given continuous function of several variables.

PyNN: A Common Interface for Neuronal Network Simulators

PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools.
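
A short example of the PyNN interface (assuming the NEST backend is installed; other simulators or hardware systems are targeted by importing a different backend module):

```python
import pyNN.nest as sim  # swap for another backend to retarget the model

sim.setup(timestep=0.1)

# 100 integrate-and-fire neurons driven one-to-one by Poisson sources
source = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
neurons = sim.Population(100, sim.IF_cond_exp())
sim.Projection(source, neurons, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.05))

neurons.record("spikes")
sim.run(1000.0)               # simulate one second
data = neurons.get_data()     # simulator-agnostic Neo data structures
sim.end()
```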

Real-time classification and sensor fusion with a spiking deep belief network

This paper proposes a method based on the Siegert approximation for integrate-and-fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation, and shows that the system can be biased to select the correct digit from otherwise ambiguous input.
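
The Siegert approximation gives the mean firing rate of an integrate-and-fire neuron under Gaussian input in closed form, up to a one-dimensional integral; a sketch (unit and parameter conventions vary across papers):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def siegert_rate(mu, sigma, v_th=1.0, v_reset=0.0, tau_m=20e-3, tau_ref=2e-3):
    """Mean first-passage-time (Siegert) firing rate of a LIF neuron
    driven by Gaussian input with mean mu and standard deviation sigma."""
    lo = (v_reset - mu) / sigma
    hi = (v_th - mu) / sigma
    integral, _ = quad(lambda u: np.exp(u**2) * (1.0 + erf(u)), lo, hi)
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)
```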