A new computational model for real-time computing on time-varying input is introduced that provides an alternative to paradigms based on Turing machines or attractor neural networks. It rests on principles of high-dimensional dynamical systems in combination with statistical learning theory, and it can be implemented on generic evolved or found recurrent circuitry.
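The core idea (a fixed high-dimensional recurrent circuit whose transient states are tapped by a trained linear readout) can be illustrated with a minimal discrete-time sketch. The network size, weight scales, and delayed-recall task below are illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random recurrent "liquid"; only the linear readout is trained.
N = 100                                       # reservoir size (illustrative)
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))   # recurrent weights, radius ~1
w_in = rng.normal(0, 1.0, N)                  # input weights

def run_reservoir(u):
    """Drive the network with input sequence u and collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Task: read out a delayed copy of the input from the transient state.
T, delay = 500, 3
u = rng.uniform(-1, 1, T)
X = run_reservoir(u)
target = np.roll(u, delay)

# Fit the readout by least squares on the second half of the run
# (the first half serves as washout for initial transients).
w_out, *_ = np.linalg.lstsq(X[T // 2:], target[T // 2:], rcond=None)
err = np.mean((X[T // 2:] @ w_out - target[T // 2:]) ** 2)
```

Note that the recurrent weights are never trained: the random circuit supplies a rich pool of temporal features, and all task-specific learning is confined to the linear readout.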

The unified technique introduced here, referred to as the shifting strategy, is applicable to numerous geometric covering and packing problems, and it is illustrated how its performance varies with problem parameters.
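The shifting idea can be sketched on a one-dimensional toy instance: cover points on a line with unit intervals, partition the line into blocks of width `l`, solve each block independently, and try all `l` shifted partitions, keeping the cheapest. The function names and the choice of a 1-D covering problem are illustrative; this shows the strategy's mechanics, not the paper's general geometric algorithm:

```python
from collections import defaultdict

def cover_block(points):
    """Minimum number of unit intervals covering sorted 1-D points (greedy)."""
    count, covered_to = 0, float("-inf")
    for p in points:
        if p > covered_to:
            count += 1
            covered_to = p + 1.0
    return count

def shifted_cover(points, l):
    """Shifting strategy sketch: for each of l shifted partitions into blocks
    of width l (interval diameter D = 1), solve every block independently and
    keep the cheapest total over all shifts."""
    points = sorted(points)
    best = None
    for s in range(l):
        blocks = defaultdict(list)
        for p in points:
            blocks[int((p + s) // l)].append(p)   # assign p to a shifted block
        total = sum(cover_block(blk) for blk in blocks.values())
        best = total if best is None else min(best, total)
    return best
```

The point of trying every shift is that for any fixed optimal solution, at least one shift cuts few of its intervals at block boundaries, which is what yields the technique's approximation guarantee.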

It is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than neural network models based on McCulloch-Pitts neurons or on sigmoidal gates.

Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity.

A neural network model is proposed, and it is shown by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for discrete and for continuous time.
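One concrete instance of network activity implementing MCMC sampling is Gibbs sampling in a network of stochastic binary units with symmetric weights: each unit fires with a sigmoidal probability given the others, and the resulting state sequence samples from a Boltzmann distribution. The tiny two-unit network below is an illustrative toy, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric weights and biases of a two-unit binary network; its Gibbs
# dynamics sample the Boltzmann distribution p(z) ~ exp(b.z + z.W.z / 2).
W = np.array([[0.0, 1.5],
              [1.5, 0.0]])
b = np.array([-0.5, -0.5])

def gibbs_sample(n_steps):
    """Each unit 'fires' with its conditional probability given the others."""
    z = rng.integers(0, 2, size=2).astype(float)
    samples = []
    for _ in range(n_steps):
        for k in range(2):
            p_on = 1.0 / (1.0 + np.exp(-(b[k] + W[k] @ z)))  # sigmoid drive
            z[k] = float(rng.random() < p_on)
        samples.append(z.copy())
    return np.array(samples)

samples = gibbs_sample(20000)

# Compare empirical state frequencies with the exact target distribution.
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
log_p = np.array([b[0] * z0 + b[1] * z1 + W[0, 1] * z0 * z1
                  for z0, z1 in states])
p_exact = np.exp(log_p) / np.exp(log_p).sum()
p_emp = np.array([np.mean(np.all(samples == s, axis=1)) for s in states])
```

Here the sampler's long-run state frequencies `p_emp` converge to the target probabilities `p_exact`, which is exactly the sense in which the network's activity "implements" MCMC sampling.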

The results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes.

This article finds that the edge of chaos predicts quite well those values of circuit parameters that yield maximal computational performance, but it makes no prediction of computational performance for other parameter values; a new method for predicting the computational performance of neural microcircuit models is therefore proposed.
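The ordered/chaotic distinction behind the edge-of-chaos criterion can be probed numerically by tracking whether a tiny state perturbation shrinks or grows in a random recurrent network as a gain parameter is varied; the network size and gain values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
W0 = rng.normal(0, 1.0 / np.sqrt(N), (N, N))  # base matrix, spectral radius ~1

def perturbation_growth(g, steps=50, eps=1e-6):
    """Mean log growth rate of a tiny state perturbation at gain g."""
    W = g * W0
    x = rng.normal(0, 1, N)
    y = x + eps * rng.normal(0, 1, N) / np.sqrt(N)
    rates = []
    for _ in range(steps):
        x = np.tanh(W @ x)
        y = np.tanh(W @ y)
        d = np.linalg.norm(y - x)
        rates.append(np.log(d / eps))
        y = x + (y - x) * (eps / d)   # renormalize: keep it infinitesimal
    return float(np.mean(rates))

ordered = perturbation_growth(0.5)   # weights scaled down: perturbations decay
chaotic = perturbation_growth(3.0)   # weights scaled up: perturbations grow
```

A negative rate means nearby trajectories converge (ordered regime), a positive rate means they diverge (chaotic regime); the edge of chaos is the parameter region where this rate crosses zero.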

The resulting learning theory predicts that even difficult credit-assignment problems can be solved in a self-organizing manner through reward-modulated STDP, and provides a possible functional explanation for trial-to-trial variability, which is characteristic of cortical networks of neurons but has no analogue in currently existing artificial computing systems.
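The idea of reward-gated synaptic updates can be sketched with a REINFORCE-style toy: a stochastic binary neuron whose weight change is a product of three factors, the presynaptic input, the postsynaptic firing deviation, and a global reward signal. This is a deliberate simplification for illustration, not the paper's spiking STDP model:

```python
import numpy as np

rng = np.random.default_rng(3)

# A stochastic binary neuron with two synapses; a global reward signal
# gates whether the input/firing correlation is imprinted in the weights.
w = np.zeros(2)
eta = 0.1

for trial in range(2000):
    if rng.random() < 0.5:
        x, want = np.array([1.0, 0.0]), 1.0   # pattern A: should fire
    else:
        x, want = np.array([0.0, 1.0]), 0.0   # pattern B: should stay silent
    p = 1.0 / (1.0 + np.exp(-w @ x))          # firing probability
    spike = float(rng.random() < p)
    reward = 1.0 if spike == want else -1.0
    # three-factor update: presynaptic input x, postsynaptic deviation
    # (spike - p), and the reward as a global modulatory factor
    w += eta * reward * (spike - p) * x

p_a = 1.0 / (1.0 + np.exp(-w[0]))   # final firing probability for pattern A
p_b = 1.0 / (1.0 + np.exp(-w[1]))   # final firing probability for pattern B
```

The neuron's random firing is what makes credit assignment possible here: trial-to-trial variability provides the exploration that the reward signal then selects on, mirroring the functional role the abstract ascribes to it.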

It is demonstrated through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic neuron models, dynamic synapses, and more general input distributions.

This work examines a powerful model of parallel computation: polynomial-size threshold circuits of bounded depth (the gates compute threshold functions with polynomial weights), and considers circuits of unreliable threshold gates, circuits of imprecise threshold gates, and threshold quantifiers.
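A threshold gate fires iff the weighted sum of its Boolean inputs reaches a threshold, and "polynomial weights" means the integer weights stay polynomially bounded in the input size. The small gates below (majority, and a 2-bit comparison) are standard illustrative examples, not circuits from the paper:

```python
# A threshold gate: output 1 iff the weighted sum of Boolean inputs
# reaches the threshold theta; integer weights of polynomial size.
def threshold_gate(weights, theta, x):
    return int(sum(w * xi for w, xi in zip(weights, x)) >= theta)

# MAJORITY on 5 bits: a single threshold gate with unit weights.
def majority5(x):
    return threshold_gate([1, 1, 1, 1, 1], 3, x)

# Comparing two 2-bit numbers x = (x1 x0) and y = (y1 y0): a single gate
# with power-of-two weights decides whether x >= y.
def geq2(x1, x0, y1, y0):
    return threshold_gate([2, 1, -2, -1], 0, [x1, x0, y1, y0])
```

The power of the model comes from such weighted gates being strictly stronger than plain AND/OR gates: comparison, for instance, needs only one threshold gate, whereas constant-depth AND/OR circuits need many.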