Andreas Grübl

Modeling neural tissue is an important tool for investigating biological neural networks. Until recently, most of this modeling has been done using numerical methods. In the European research project "FACETS", this computational approach is complemented by different kinds of neuromorphic systems. Special emphasis is placed on the usability of these systems for …
This paper describes an area-efficient mixed-signal implementation of synapse-based long-term plasticity realized in a VLSI model of a spiking neural network. The artificial synapses are based on an implementation of spike-timing-dependent plasticity (STDP). In the biological specimen, STDP is a mechanism acting locally in each synapse. The presented …
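For orientation, the pairwise STDP rule that such synapse circuits approximate makes the weight change an exponentially decaying function of the pre/post spike-time difference. The following sketch is a plain software illustration of that rule, with hypothetical parameter values; it is not the mixed-signal circuit described in the paper.

import math

# Illustrative pairwise STDP rule: the weight change depends on the time
# difference dt = t_post - t_pre between a pre- and a postsynaptic spike.
# Parameter values (a_plus, a_minus, tau_plus, tau_minus) are hypothetical.
def stdp_delta_w(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    if dt_ms > 0:      # pre before post -> potentiation
        return a_plus * math.exp(-dt_ms / tau_plus)
    elif dt_ms < 0:    # post before pre -> depression
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0

# Example: causal pairing strengthens, anti-causal pairing weakens the synapse.
print(stdp_delta_w(+10.0))   # > 0
print(stdp_delta_w(-10.0))   # < 0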
In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from …
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been …
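Networks emulated on such substrates are commonly specified in the simulator-independent PyNN description language. The snippet below is a minimal, purely illustrative PyNN script run against a software backend; the population sizes, connectivity and weights are hypothetical and do not reproduce any experiment from the paper.

# Minimal PyNN-style network description (illustrative only; population size,
# connection probability and weight are hypothetical). The same description
# could in principle be pointed at a hardware backend instead of a simulator.
import pyNN.nest as sim   # any available PyNN backend would do

sim.setup(timestep=0.1)

stimulus = sim.Population(20, sim.SpikeSourcePoisson(rate=30.0))
neurons  = sim.Population(100, sim.IF_cond_exp())
neurons.record("spikes")

sim.Projection(stimulus, neurons,
               sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.01, delay=1.0))

sim.run(1000.0)                      # emulate 1 s of biological time
spikes = neurons.get_data("spikes")  # Neo block with the recorded spike trains
sim.end()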
We describe an accelerated hardware neuron capable of emulating the adaptive exponential integrate-and-fire neuron model. Firing patterns of the membrane stimulated by a step current are analyzed in transistor-level simulations and in silicon on a prototype chip. The neuron is destined to be the hardware neuron of a highly integrated wafer-scale …
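The adaptive exponential integrate-and-fire (AdEx) model combines a leaky membrane equation with an exponential spike-initiation term and a slower adaptation variable. The forward-Euler sketch below reproduces the step-current stimulation scenario in plain software, using textbook-style parameter values chosen only for illustration; it is not the transistor-level circuit analyzed in the paper.

# Software illustration of the AdEx model that the hardware neuron emulates.
# Forward-Euler integration of
#   C dV/dt = -g_L (V - E_L) + g_L*DeltaT*exp((V - V_T)/DeltaT) - w + I
#   tau_w dw/dt = a (V - E_L) - w
# with reset V -> V_r, w -> w + b after a spike. Parameter values are
# illustrative and chosen only to produce spiking under a step current.
import math

C, g_L, E_L = 281.0, 30.0, -70.6          # pF, nS, mV
V_T, DeltaT, V_peak, V_r = -50.4, 2.0, 0.0, -70.6
a, b, tau_w = 4.0, 80.5, 144.0            # nS, pA, ms
dt, t_end, I_step = 0.05, 300.0, 800.0    # ms, ms, pA

V, w, spikes, t = E_L, 0.0, [], 0.0
while t < t_end:
    I = I_step if t > 20.0 else 0.0       # step current switched on at 20 ms
    dV = (-g_L * (V - E_L) + g_L * DeltaT * math.exp((V - V_T) / DeltaT) - w + I) / C
    dw = (a * (V - E_L) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= V_peak:                       # spike: reset membrane, add adaptation
        spikes.append(t)
        V, w = V_r, w + b
    t += dt

print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms" if spikes else "no spikes")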
In this paper, we present a system architecture currently under development that will allow very large (>10^5 neurons, >10^7 synapses) reconfigurable networks to be built, in the form of interlinked dies on a single wafer. Reconfigurable routing and complex adaptation/plasticity across several timescales in neurons and synapses allow for the implementation of …
This paper presents a platform for the parallel operation of VLSI neural networks that allows neural network topologies to be seamlessly mapped onto distributed resources. The scalable approach provides fast isochronous communication channels transporting the neuron signals between single network modules. The network modules are printed circuit boards hosting a …
We present results from a new approach to learning and plasticity in neuromorphic hardware systems: to enable flexibility in the implementable learning mechanisms while keeping the high efficiency associated with neuromorphic implementations, we combine a general-purpose processor with full-custom analog elements. This processor operates in parallel with a …
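Conceptually, the analog synapse circuits accumulate spike-timing correlations locally, while the embedded general-purpose processor periodically converts these measurements into updates of the digitally stored weights. The loop below is a schematic software sketch of that division of labour; all names, data structures and the simple update rule are hypothetical, not the paper's implementation.

# Schematic sketch of hybrid plasticity: analog synapse circuits accumulate
# spike-timing correlation locally, a general-purpose processor periodically
# turns those measurements into digital weight updates.
# All names, data structures and the update rule are hypothetical.
import random

N_SYNAPSES = 16

def read_correlation_sensors():
    """Stand-in for reading the analog causal/anti-causal accumulators."""
    return [(random.random(), random.random()) for _ in range(N_SYNAPSES)]

def plasticity_step(weights, lr=0.05, w_max=63):
    """One pass of the (programmable) learning rule over all synapses."""
    for i, (causal, anticausal) in enumerate(read_correlation_sensors()):
        delta = lr * (causal - anticausal)          # e.g. an STDP-like rule
        weights[i] = min(w_max, max(0, round(weights[i] + delta * w_max)))
    return weights

weights = [32] * N_SYNAPSES                         # e.g. 6-bit digital weights
for _ in range(100):                                # processor runs alongside
    weights = plasticity_step(weights)              # the analog network
print(weights)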
Johannes Schemmel, Andreas Grübl, Alexander Kononov, Karlheinz Meier, Sebastian Millner, Marc-Olivier Schwartz (Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany; email: schemmel@kip.uni-heidelberg.de); Stefan Scholze, Stefan Schiefer, Stephan Hartmann, Johannes Partzsch, Christian Mayr, Rene Schüffny (Chair of Highly-Parallel …)
This paper presents a network architecture to interconnect mixed-signal VLSI integrate-and-fire neural networks in such a way that the timing of the neural network data is preserved. The architecture uses isochronous connections to reserve network bandwidth and is optimized for the small data event packets that have to be exchanged in spiking hardware neural …
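The payload carried over such isochronous connections is essentially a stream of small spike events, each consisting of a source-neuron address and a timestamp that lets the receiver restore the original timing. The sketch below packs such an event into a single binary word purely for illustration; the field widths are assumptions, not the packet format defined in the paper.

# Illustrative packing of a spike event into a compact packet: a source
# neuron address plus a timestamp that lets the receiver restore timing.
# The 12-bit/16-bit field widths are assumptions, not the paper's format.
from dataclasses import dataclass

ADDR_BITS, TIME_BITS = 12, 16

@dataclass
class SpikeEvent:
    neuron_addr: int   # source neuron (0 .. 2**ADDR_BITS - 1)
    timestamp: int     # event time in system clock ticks, wraps around

    def pack(self) -> int:
        return (self.neuron_addr << TIME_BITS) | (self.timestamp & (2**TIME_BITS - 1))

    @staticmethod
    def unpack(word: int) -> "SpikeEvent":
        return SpikeEvent(word >> TIME_BITS, word & (2**TIME_BITS - 1))

ev = SpikeEvent(neuron_addr=42, timestamp=1234)
assert SpikeEvent.unpack(ev.pack()) == ev   # timing information survives transport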