Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations
TLDR: A new computational model for real-time computing on time-varying input, offered as an alternative to paradigms based on Turing machines or attractor neural networks; it rests on principles of high-dimensional dynamical systems combined with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry.
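A minimal sketch of the liquid-state idea in rate-based form: a fixed random recurrent circuit is driven by a time-varying input, and only a linear readout is trained on the circuit's transient states. The paper's circuits are spiking; this dense tanh reservoir, its sizes, and the delayed-recall task are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 1 input stream, 200 reservoir units, 400 time steps.
N, T = 200, 400
u = np.sin(0.1 * np.arange(T)) + 0.1 * rng.standard_normal(T)  # time-varying input
target = np.roll(u, 5)  # illustrative task: recall the input from 5 steps ago

W_in = rng.uniform(-1, 1, N)                  # input weights (fixed, random)
W = rng.standard_normal((N, N)) / np.sqrt(N)  # recurrent weights (fixed, random)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale toward the stable regime

# Run the high-dimensional dynamical system; its transient ("liquid") state
# at each step serves as the feature vector for the readout.
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Only the linear readout is trained (ridge regression); the circuit itself
# is generic and never adapted to the task.
lam = 1e-3
w_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ target)
pred = states @ w_out
print("readout MSE:", np.mean((pred - target) ** 2))
```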
Approximation schemes for covering and packing problems in image processing and VLSI
TLDR: Introduces a unified technique, referred to as the shifting strategy, that applies to numerous geometric covering and packing problems, and illustrates how it varies with the problem parameters.
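A toy instance of the shifting strategy, reduced to covering points on a line with unit intervals (the paper treats planar covering and packing, and 1-D greedy is already globally optimal, so this only shows the mechanics): partition the line into blocks of width ell, solve each block locally, and keep the best of the ell shifted partitions, which bounds the approximation ratio by 1 + 1/ell. The function names are hypothetical.

```python
def cover_block(points):
    """Greedy optimal cover of sorted 1-D points by unit intervals [s, s+1]."""
    intervals, i = [], 0
    while i < len(points):
        s = points[i]
        intervals.append(s)
        while i < len(points) and points[i] <= s + 1:
            i += 1
    return intervals

def shifted_cover(points, ell=3):
    """Shifting strategy: try ell shifted partitions into blocks of width ell,
    solve each block independently, and keep the cheapest overall cover."""
    points = sorted(points)
    best = None
    for shift in range(ell):
        # Blocks are [k*ell - shift, (k+1)*ell - shift); group points by block.
        blocks = {}
        for p in points:
            blocks.setdefault((p + shift) // ell, []).append(p)
        cover = []
        for blk in blocks.values():
            cover += cover_block(blk)
        if best is None or len(cover) < len(best):
            best = cover
    return best

pts = [0.1, 0.5, 1.2, 2.9, 3.0, 3.1, 7.4, 7.9]
print(shifted_cover(pts, ell=3))  # left endpoints of the chosen unit intervals
```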
Networks of Spiking Neurons: The Third Generation of Neural Network Models
  • W. Maass
  • Computer Science
  • Electron. Colloquium Comput. Complex.
  • 1996
TLDR: Shows that, with regard to the number of neurons needed, networks of spiking neurons are computationally more powerful than neural network models of the first two generations, which are based on McCulloch-Pitts neurons and sigmoidal gates, respectively.
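For concreteness, a third-generation unit communicates through the timing of discrete spikes rather than analog activation values. A textbook leaky integrate-and-fire simulation (not code from the paper; all constants are made up) shows the basic element:

```python
import numpy as np

# Leaky integrate-and-fire neuron: membrane potential v integrates input
# current and emits a spike (then resets) whenever it crosses threshold.
dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0   # ms, ms, arbitrary units
T = 200
I = 0.06 * np.ones(T)                               # constant input current
I[80:120] = 0.0                                     # brief silence in the input

v, spikes = 0.0, []
for t in range(T):
    v += dt / tau * (-v + tau * I[t])               # leaky integration
    if v >= v_thresh:
        spikes.append(t)
        v = v_reset                                 # reset after the spike
print("spike times (ms):", spikes)
```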
State-dependent computations: spatiotemporal processing in cortical networks
TLDR: Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also 'hidden' neuronal states such as short-term synaptic plasticity.
Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons
TLDR: Proposes a neural network model and shows through rigorous theoretical analysis that its neural activity implements MCMC sampling from a given distribution, in both discrete and continuous time.
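A sketch of the discrete-time case under the assumption, as in the paper's analysis, that the stationary distribution is a Boltzmann distribution over binary neuron states: each neuron fires with a sigmoid probability of its summed input, which is exactly Gibbs sampling. Sizes and parameters below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Target distribution p(z) ~ exp(z'Wz/2 + b'z) over binary states z in {0,1}^n.
n = 5
W = rng.standard_normal((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
b = rng.standard_normal(n)

z = rng.integers(0, 2, n).astype(float)
samples = []
for step in range(20000):
    k = step % n                              # update neurons in a fixed sweep order
    u = W[k] @ z + b[k]                       # "membrane potential" from network state
    z[k] = float(rng.random() < sigmoid(u))   # spike = sample from p(z_k | z_-k)
    samples.append(z.copy())

# Empirical firing probabilities after burn-in approximate the marginals
# of the target Boltzmann distribution.
print("empirical marginals:", np.mean(samples[5000:], axis=0).round(2))
```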
Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity
TLDR: The results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes.
Edge of chaos and prediction of computational performance for neural circuit models
TLDR: Finds that the edge of chaos predicts quite well which circuit parameter values yield maximal computational performance, though it says nothing about performance at other parameter values, and therefore proposes a new method for predicting the computational performance of neural microcircuit models.
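One common way to locate the edge of chaos, sketched here for a rate-based network rather than the spiking microcircuits of the article: run two copies of the network from nearly identical states and measure whether the perturbation shrinks (ordered regime) or grows (chaotic regime). The gain sweep and all constants are illustrative.

```python
import numpy as np

def separation(gain, n=100, T=200, eps=1e-6, seed=0):
    """Average log growth rate of a tiny state perturbation, a rough
    Lyapunov-style indicator: negative = ordered, positive = chaotic."""
    rng = np.random.default_rng(seed)
    W = gain * rng.standard_normal((n, n)) / np.sqrt(n)
    x1 = rng.standard_normal(n)
    delta = rng.standard_normal(n)
    x2 = x1 + eps * delta / np.linalg.norm(delta)   # perturbation of norm eps
    growth = 0.0
    for _ in range(T):
        x1, x2 = np.tanh(W @ x1), np.tanh(W @ x2)
        d = np.linalg.norm(x2 - x1)
        growth += np.log(d / eps)
        x2 = x1 + (x2 - x1) * (eps / d)  # renormalize so it stays infinitesimal
    return growth / T

for g in [0.5, 0.9, 1.0, 1.1, 1.5]:
    print(f"gain {g}: separation exponent {separation(g):+.3f}")
```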
A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback
TLDR: The resulting learning theory predicts that even difficult credit-assignment problems can be solved in a self-organizing manner through reward-modulated STDP, and it provides a possible functional explanation for trial-to-trial variability, which is characteristic of cortical networks of neurons but has no analogue in existing artificial computing systems.
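A minimal sketch of the reward-modulated STDP mechanism the theory addresses: each synapse accumulates an eligibility trace driven by STDP-style pre/post spike pairings, and a global reward signal gates the trace into actual weight changes. The spike statistics, time constants, and reward schedule below are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
T, dt = 1000, 1.0
tau_plus = tau_minus = 20.0   # STDP time constants (ms)
tau_e = 200.0                 # eligibility-trace time constant (ms)
A_plus, A_minus, lr = 1.0, 1.0, 0.01

w, elig = 0.5, 0.0
x_pre, x_post = 0.0, 0.0          # low-pass traces of pre/post spikes
for t in range(T):
    pre = rng.random() < 0.02     # Poisson-like pre/post spike trains
    post = rng.random() < 0.02
    x_pre += -dt / tau_plus * x_pre + pre
    x_post += -dt / tau_minus * x_post + post
    # Standard pair-based STDP, but written into the eligibility trace
    # instead of the weight itself.
    stdp = A_plus * x_pre * post - A_minus * x_post * pre
    elig += -dt / tau_e * elig + stdp
    # A delayed, global reward gates the trace into an actual weight change;
    # here reward arrives at random times just to show the mechanics.
    reward = 1.0 if rng.random() < 0.005 else 0.0
    w = np.clip(w + lr * reward * elig, 0.0, 1.0)
print("final weight:", round(float(w), 3))
```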
What Can a Neuron Learn with Spike-Timing-Dependent Plasticity?
TLDR: Demonstrates through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic neuron models, dynamic synapses, and more general input distributions.
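A sketch of the teacher-forcing setup named in the summary: during learning the postsynaptic spike train is clamped to the target train, so pair-based STDP sees the desired pre/post correlations and pulls the weights toward reproducing them. This is a schematic of the idea, not the paper's neuron model; all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, T = 20, 500
A_plus, A_minus, tau = 0.01, 0.012, 20.0

pre = rng.random((T, n_in)) < 0.05   # fixed presynaptic spike trains
target = rng.random(T) < 0.03        # target output spike train

w = np.full(n_in, 0.5)
x_pre = np.zeros(n_in)               # exponential traces of presynaptic spikes
x_post = 0.0                         # exponential trace of postsynaptic spikes
for t in range(T):
    x_pre = x_pre * np.exp(-1 / tau) + pre[t]
    # Teacher forcing: the postsynaptic spike is clamped to the target train.
    post = target[t]
    # Potentiate pre-before-post pairings, depress post-before-pre pairings.
    w += A_plus * x_pre * post - A_minus * x_post * pre[t]
    x_post = x_post * np.exp(-1 / tau) + post
    np.clip(w, 0.0, 1.0, out=w)
print("learned weights (first 5):", w[:5].round(3))
```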
Threshold circuits of bounded depth
TLDR: Examines a powerful model of parallel computation, polynomial-size threshold circuits of bounded depth (whose gates compute threshold functions with polynomial weights), and considers circuits of unreliable threshold gates, circuits of imprecise threshold gates, and threshold quantifiers.
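A tiny concrete instance of the circuit model: a depth-2 threshold circuit with unit (hence polynomial) weights computing n-bit parity, using the classic construction with first-layer gates that test "at least j ones" and an alternating-weight output gate. The construction is standard folklore, not taken from this paper.

```python
def threshold_gate(weights, inputs, theta):
    """Output 1 iff the weighted sum of the inputs reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

def parity_depth2(bits):
    n = len(bits)
    # Layer 1: gate j fires iff at least j of the inputs are 1.
    layer1 = [threshold_gate([1] * n, bits, j) for j in range(1, n + 1)]
    # Layer 2: alternating +1/-1 weights; for s ones the sum is 1 if s is
    # odd and 0 if s is even, so a threshold of 1 computes parity.
    out_weights = [(-1) ** j for j in range(n)]   # +1, -1, +1, ...
    return threshold_gate(out_weights, layer1, 1)

for bits in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(bits, "-> parity", parity_depth2(list(bits)))
```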