What Can a Neuron Learn with Spike-Timing-Dependent Plasticity?

@article{Legenstein2005WhatCA,
  title={What Can a Neuron Learn with Spike-Timing-Dependent Plasticity?},
  author={Robert A. Legenstein and Christian Naeger and Wolfgang Maass},
  journal={Neural Computation},
  year={2005},
  volume={17},
  pages={2337--2382}
}
Spiking neurons are highly flexible computational modules: depending on the values of their adjustable synaptic parameters, they can implement an enormous variety of different transformations F from input spike trains to output spike trains. In this letter we examine to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning…
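The pair-based STDP window discussed in the abstract can be sketched as follows. This is a minimal illustration of the standard exponential STDP rule, not the specific model of the paper; the function name and parameter values are illustrative placeholders.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a single pre/post spike pair as a function of
    delta_t = t_post - t_pre (ms).  Pre-before-post (delta_t > 0)
    potentiates; post-before-pre depresses.  Amplitudes and the time
    constant are illustrative, not values from the paper."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau),
                    -a_minus * np.exp(delta_t / tau))
```

Applied to every pre/post spike pair, this rule strengthens synapses whose spikes tend to precede postsynaptic firing and weakens the reverse ordering; the effect decays exponentially with the pair's temporal separation.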
Perceptron learning rule derived from spike-frequency adaptation and spike-time-dependent plasticity
It is shown that synaptic spike-time-dependent plasticity (STDP) combined with spike-frequency adaptation (SFA) in a single neuron together approximate the well-known perceptron learning rule.
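For reference, the classical perceptron learning rule that STDP and SFA are shown to approximate can be sketched in a few lines (names and the learning rate are illustrative):

```python
def perceptron_step(w, x, target, lr=0.1):
    """One update of the classical perceptron rule: compute the binary
    prediction from the weighted sum, then adjust the weights only when
    the prediction is wrong (a sketch; lr is an illustrative choice)."""
    y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
    return [wi + lr * (target - y) * xi for wi, xi in zip(w, x)]
```

Iterating this update over a linearly separable training set converges to a separating weight vector; correctly classified patterns leave the weights unchanged.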
Reinforcement learning with modulated spike timing dependent synaptic plasticity.
This model offers a novel and biologically plausible implementation of reinforcement learning that is capable of training a neural population to produce a very wide range of possible mappings between synaptic input and spiking output.
Supervised Learning in Spiking Neural Networks with ReSuMe: Sequence Learning, Classification, and Spike Shifting
A model of supervised learning for biologically plausible neurons is presented that enables spiking neurons to reproduce arbitrary template spike patterns in response to given synaptic stimuli even in the presence of various sources of noise and shows that the learning rule can also be used for decision-making tasks.
Unsupervised Learning of Precise Spike Times with Membrane Potential Dependent Synaptic Plasticity
A simple unsupervised synaptic plasticity mechanism is proposed that depends on the postsynaptic membrane potential and overcomes shortcomings of previous rules; it achieves a surprisingly high storage capacity for spike associations, with robust memory retrieval even in the presence of input activity corrupted by noise.
Perfect Associative Learning with Spike-Timing-Dependent Plasticity
It is proved that spike-timing-dependent plasticity with an anti-Hebbian form for excitatory synapses, together with spike-timing-dependent plasticity of Hebbian shape for inhibitory synapses, is sufficient to realize the original perceptron learning rule, provided these plasticity mechanisms act in concert with the hyperpolarisation of postsynaptic neurons.
The tempotron: a neuron that learns spike timing–based decisions
This work proposes a new, biologically plausible supervised synaptic learning rule that enables neurons to efficiently learn a broad range of decision rules, even when information is embedded in the spatiotemporal structure of spike patterns rather than in mean firing rates.
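The tempotron's decision variable is the peak of a weighted sum of postsynaptic-potential kernels: the neuron classifies a spike pattern by whether that peak crosses threshold. A minimal sketch of the potential (function name and time constants are illustrative, not taken from the paper):

```python
import numpy as np

def tempotron_potential(t_grid, spike_times, w, tau=15.0, tau_s=3.75):
    """Subthreshold membrane potential of a tempotron-style neuron:
    each input spike contributes a weighted double-exponential PSP
    kernel.  spike_times is a list of per-synapse spike-time lists,
    w the matching synaptic weights (a sketch with illustrative
    time constants)."""
    v = np.zeros_like(t_grid, dtype=float)
    for w_i, times in zip(w, spike_times):
        for t_sp in times:
            # Kernel is causal: zero before the spike, then a
            # difference of exponentials that rises and decays.
            dt = np.maximum(t_grid - t_sp, 0.0)
            v += w_i * (np.exp(-dt / tau) - np.exp(-dt / tau_s))
    return v
```

During learning, the tempotron rule nudges the weights in proportion to each synapse's kernel value at the time of the potential's maximum, increasing them after a missed target spike and decreasing them after a false alarm.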
Spike-Timing Error Backpropagation in Theta Neuron Networks
The derivation of a steepest-gradient-descent learning rule for a multilayer network of theta neurons, a one-dimensional nonlinear neuron model, shows that it is possible to perform complex computations by applying supervised learning techniques to the spike times and time-response properties of nonlinear integrate-and-fire neurons.
Reinforcement Learning Through Modulation of Spike-Timing-Dependent Synaptic Plasticity
  • R. Florian
  • Psychology, Computer Science
  • Neural Computation
  • 2007
It is shown that the modulation of STDP by a global reward signal leads to reinforcement learning, and learning rules involving reward-modulated spike-timing-dependent synaptic and intrinsic plasticity are derived analytically; these may be used for training generic artificial spiking neural networks, regardless of the neural model used.
Real-Time Classification of Complex Patterns Using Spike-Based Learning in Neuromorphic VLSI
Experimental data are presented that demonstrate how the VLSI neural network can learn to classify patterns of neural activities, even when they are highly correlated.
Learning spatio-temporal spike train encodings with ReSuMe, DelReSuMe, and Reward-modulated Spike-timing Dependent Plasticity in Spiking Neural Networks
This work proposes an alternative architecture for ReSuMe that deals with heterogeneous synapses, and uses dopamine-inspired STDP in SNNs to demonstrate improvements in mapping spatio-temporal patterns of spike trains with a multi-delay mechanism versus a single connection.

References

Showing 1-10 of 47 references
The tempotron: a neuron that learns spike timing–based decisions
This work proposes a new, biologically plausible supervised synaptic learning rule that enables neurons to efficiently learn a broad range of decision rules, even when information is embedded in the spatiotemporal structure of spike patterns rather than in mean firing rates.
Modeling Synaptic Plasticity in Conjunction with the Timing of Pre- and Postsynaptic Action Potentials
A model for synaptic long-term plasticity is developed that relies on the relative timing of pre- and postsynaptic action potentials and is able to strengthen those input synapses that convey precisely timed spikes at the expense of synapses that deliver spikes with a broad temporal distribution.
Optimal Spike-Timing-Dependent Plasticity for Precise Action Potential Firing in Supervised Learning
A supervised learning paradigm is used to derive a synaptic update rule that optimizes by gradient ascent the likelihood of postsynaptic firing at one or several desired firing times; the optimal strategy of up- and downregulating synaptic efficacies is found to depend on the relative timing between presynaptic spike arrival and desired postsynaptic firing.
Intrinsic Stabilization of Output Rates by Spike-Based Hebbian Learning
It is shown that plasticity can lead to an intrinsic stabilization of the mean firing rate of the postsynaptic neuron and that Hebbian and anti-Hebbian rules are questionable since learning is driven by correlations on the timescale of the learning window.
Learning Only When Necessary: Better Memories of Correlated Patterns in Networks with Bounded Synapses
It is proved in the form of a generalized perceptron convergence theorem that under these constraints, a neuron learns to classify any linearly separable set of patterns, including a wide class of highly correlated random patterns.
Reducing the Variability of Neural Responses: A Computational Theory of Spike-Timing-Dependent Plasticity
This work simulates neurophysiological experiments and obtains the characteristic STDP curve, comparing the account to other efforts to derive STDP from computational principles and arguing that it provides the most comprehensive coverage of the phenomena.
Why spikes? Hebbian learning and retrieval of time-resolved excitation patterns
This work introduces and analyzes a model of spiking neurons, the spike response model, with a realistic distribution of axonal delays and with realistic postsynaptic potentials, and shows that all information about the spike pattern is lost if only mean firing rates or ensemble activities are considered.
Hebbian learning and spiking neurons
A correlation-based ("Hebbian") learning rule at a spike level with millisecond resolution is formulated, mathematically analyzed, and compared with learning in a firing-rate description. …
Neural populations can induce reliable postsynaptic currents without observable spike rate changes or precise spike timing.
Computational methods are employed to show that an ensemble of neurons firing at a constant mean rate can induce arbitrarily chosen temporal current patterns in postsynaptic cells, and speculate as to how this capability may underlie an extension of population coding to the temporal domain.
Learning Input Correlations through Nonlinear Temporally Asymmetric Hebbian Plasticity
It is demonstrated that by adjusting the weight dependence of the synaptic changes in TAH plasticity, it is possible to enhance the synaptic representation of temporal input correlations while maintaining the system in a stable learning regime, and the learning efficiency can be optimized.