What Causes a Neuron to Spike?

@article{Arcas2003WhatCA,
  title={What Causes a Neuron to Spike?},
  author={Blaise Ag{\"u}era y Arcas and Adrienne L. Fairhall},
  journal={Neural Computation},
  year={2003},
  volume={15},
  pages={1789-1807}
}
The computation performed by a neuron can be formulated as a combination of dimensional reduction in stimulus space and the nonlinearity inherent in a spiking output. White noise stimulus and reverse correlation (the spike-triggered average and spike-triggered covariance) are often used in experimental neuroscience to ask neurons which dimensions in stimulus space they are sensitive to and to characterize the nonlinearity of the response. In this article, we apply reverse correlation to the… 
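The spike-triggered average and spike-triggered covariance mentioned in the abstract reduce to simple averages over the stimulus segments that precede each spike. A minimal numerical sketch on synthetic data follows; the window length, variable names, and the use of unit-variance Gaussian white noise are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def spike_triggered_stats(stimulus, spike_bins, window=50):
    """Estimate the spike-triggered average (STA) and covariance (STC) from a
    white-noise stimulus and the indices of the time bins containing spikes."""
    # Stimulus segment preceding each spike (skip spikes too early for a full window).
    segments = np.array([stimulus[t - window:t] for t in spike_bins if t >= window])
    sta = segments.mean(axis=0)  # spike-triggered average
    # For Gaussian white noise the prior covariance is variance * identity, so the
    # informative structure is the difference between the spike-triggered
    # covariance and that prior.
    stc = np.cov(segments, rowvar=False) - np.var(stimulus) * np.eye(window)
    return sta, stc

# Usage on synthetic data: Gaussian white noise and randomly placed "spikes".
rng = np.random.default_rng(0)
stim = rng.normal(size=100_000)
spike_bins = np.sort(rng.choice(np.arange(50, stim.size), size=2_000, replace=False))
sta, stc = spike_triggered_stats(stim, spike_bins)
# Eigenvectors of the STC with large-magnitude eigenvalues span the reduced
# stimulus subspace; the output nonlinearity is then characterized within it.
eigvals, eigvecs = np.linalg.eigh(stc)
```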

Computation in a Single Neuron: Hodgkin and Huxley Revisited

This work applies the reverse correlation technique with white noise input and information theory to analyze the simplest biophysically realistic model neuron, the Hodgkin-Huxley (HH) model, and finds that an even better approximation is to describe the relevant subspace as two-dimensional but curved; in this way it can capture 90% of the mutual information even at high time resolution.
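The "fraction of mutual information captured" by a candidate subspace can be estimated by comparing the distribution of the projected stimulus conditioned on a spike with its prior distribution. A rough sketch of that estimate, using plain histograms and hypothetical variable names:

```python
import numpy as np

def information_in_projection(prior_proj, spike_proj, bins=25):
    """Information (bits per spike) carried by one projection of the stimulus,
    I = sum_x P(x | spike) * log2( P(x | spike) / P(x) ),
    estimated with histograms. prior_proj holds projections of all stimulus
    segments, spike_proj those of the spike-triggered segments; the binning is
    an illustrative choice and should be checked for convergence in practice."""
    edges = np.histogram_bin_edges(prior_proj, bins=bins)
    p_prior = np.histogram(prior_proj, bins=edges)[0].astype(float)
    p_spike = np.histogram(spike_proj, bins=edges)[0].astype(float)
    p_prior /= p_prior.sum()
    p_spike /= p_spike.sum()
    ok = (p_spike > 0) & (p_prior > 0)
    return float(np.sum(p_spike[ok] * np.log2(p_spike[ok] / p_prior[ok])))
```

Dividing this quantity by an independent estimate of the total information per spike gives the captured fraction quoted above; a two-dimensional histogram handles a two-dimensional, possibly curved, subspace.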

Bayesian Population Decoding of Spiking Neurons

The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire…
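The leaky integrate-and-fire model invoked here is simple enough to write out. A minimal simulation sketch, with illustrative parameter values rather than any fitted ones:

```python
import numpy as np

def simulate_lif(current, dt=0.1, tau_m=10.0, r_m=10.0,
                 v_rest=-70.0, v_reset=-70.0, v_thresh=-54.0):
    """Leaky integrate-and-fire neuron driven by an input current.
    Units: ms, mV, MOhm, nA; all parameter values are illustrative."""
    v = v_rest
    spike_times = []
    for i, i_in in enumerate(current):
        v += (-(v - v_rest) + r_m * i_in) * dt / tau_m   # membrane equation
        if v >= v_thresh:                                 # threshold crossing
            spike_times.append(i * dt)
            v = v_reset
    return np.array(spike_times)

# Fluctuating input: the timing of the resulting spikes tracks the fast
# fluctuations riding on top of the mean drive.
rng = np.random.default_rng(1)
spikes = simulate_lif(1.8 + 0.6 * rng.normal(size=20_000))
```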

Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons

The results provide new insights into the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings that the nonlinear dynamics of the firing threshold, due to Na+-channel inactivation, regulate the sensitivity to rapid input fluctuations.
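A common phenomenological way to capture threshold dynamics arising from Na+-channel inactivation is to let the spike threshold jump after each spike and relax back slowly. The sketch below (the specific threshold rule and all parameters are assumptions, not the paper's fitted model) illustrates why such dynamics favour rapid fluctuations:

```python
import numpy as np

def lif_dynamic_threshold(current, dt=0.1, tau_m=10.0, r_m=10.0,
                          tau_theta=40.0, theta_rest=-54.0, theta_jump=8.0,
                          v_rest=-70.0):
    """Integrate-and-fire neuron with a spike-triggered, exponentially
    relaxing threshold: a simple stand-in for Na+-channel inactivation.
    All parameter values here are illustrative."""
    v, theta = v_rest, theta_rest
    spike_times = []
    for i, i_in in enumerate(current):
        v += (-(v - v_rest) + r_m * i_in) * dt / tau_m
        theta += (theta_rest - theta) * dt / tau_theta   # threshold recovers slowly
        if v >= theta:
            spike_times.append(i * dt)
            v = v_rest
            theta += theta_jump                          # threshold rises after a spike
    return np.array(spike_times)
```

Because the threshold recovers on a slower timescale than the membrane, slow input components are partly cancelled by the elevated threshold, while rapid fluctuations can still reach it; this is the enhanced sensitivity to fast input fluctuations described above.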

From Spiking Neuron Models to Linear-Nonlinear Models

It is found that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space, and an adaptive timescale rate model is introduced in which the timescale of the linear filter depends on the instantaneous firing rate.
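An LN cascade maps the stimulus to a firing rate by linear filtering followed by a static nonlinearity. A minimal sketch, with a hypothetical exponential filter and rectifying nonlinearity standing in for the fitted ones:

```python
import numpy as np

def ln_rate(stimulus, linear_filter, nonlinearity):
    """Linear-nonlinear cascade: causally filter the stimulus, then map the
    filtered value to an instantaneous firing rate."""
    drive = np.convolve(stimulus, linear_filter, mode="full")[:len(stimulus)]
    return nonlinearity(drive)

# Illustrative choices: an exponentially decaying filter (time constant of 10
# bins) and a half-wave rectifier scaled to spikes per second.
t = np.arange(50)
k = np.exp(-t / 10.0)
k /= k.sum()
rng = np.random.default_rng(2)
rate = ln_rate(rng.normal(size=5_000), k, lambda x: 40.0 * np.maximum(x, 0.0))
```

The adaptive-timescale variant mentioned above would additionally rescale the filter's time constant as a function of the instantaneous rate estimate.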

Spiking Neurons and Synaptic Stimuli: Determining the Fidelity of Coincidence-Factor in Neural Response Comparison

It is observed that the difference between the lower-bound limit of faithful behaviour and the reference inter-spike interval (ISI) decreases as the ISI of the input spike train increases, and that spike trains generated by two highly varying currents have a high coincidence factor, indicating greater similarity.
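The coincidence factor in question is typically computed as the number of near-coincident spikes between a reference train and a comparison train, corrected for chance coincidences and normalized so a perfect match scores 1. A sketch under that standard definition (the window delta and variable names are assumptions):

```python
import numpy as np

def coincidence_factor(ref_spikes, cmp_spikes, duration, delta=2.0):
    """Coincidence factor between a reference spike train and a comparison
    train: coincidences within +/- delta, minus the number expected by chance
    from a Poisson train firing at the comparison rate, normalized so that a
    perfect match gives 1. delta and the time unit are illustrative."""
    ref = np.asarray(ref_spikes, dtype=float)
    other = np.asarray(cmp_spikes, dtype=float)
    n_coinc = sum(np.any(np.abs(other - t) <= delta) for t in ref)
    rate_cmp = len(other) / duration
    expected = 2.0 * rate_cmp * delta * len(ref)        # chance coincidences
    norm = 1.0 - 2.0 * rate_cmp * delta
    return (n_coinc - expected) / (0.5 * (len(ref) + len(other)) * norm)
```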

Adaptive probabilistic neural coding from deterministic spiking neurons: analysis from first principles

This work connects the mechanistic, biophysical approach to neuronal function with a description in terms of a coding model, developing from first principles a mathematical theory that maps the relationships between two simple but powerful model classes: deterministic integrate-and-fire dynamical models and linear-nonlinear coding models.
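The mapping from a deterministic integrate-and-fire model to an LN coding model can be illustrated numerically: drive the model with white noise, take the STA (or an STC eigenvector) as the linear filter, and recover the static nonlinearity via Bayes' rule, rate(x) = mean rate × P(x | spike) / P(x). A histogram-based sketch of that last step (names and binning are assumptions):

```python
import numpy as np

def estimate_nonlinearity(prior_proj, spike_proj, mean_rate, bins=25):
    """Recover the static nonlinearity of an equivalent LN model:
        rate(x) = mean_rate * P(x | spike) / P(x),
    where x is the stimulus projected onto the chosen linear filter."""
    edges = np.histogram_bin_edges(prior_proj, bins=bins)
    p_prior = np.histogram(prior_proj, bins=edges, density=True)[0]
    p_spike = np.histogram(spike_proj, bins=edges, density=True)[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    safe_prior = np.where(p_prior > 0, p_prior, 1.0)     # avoid division by zero
    rate = np.where(p_prior > 0, mean_rate * p_spike / safe_prior, 0.0)
    return centers, rate
```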

Prediction and Decoding of Retinal Ganglion Cell Responses with a Probabilistic Spiking Model

The fitted model predicts the detailed time structure of responses to novel stimuli, accurately capturing the interaction between the spiking history and sensory stimulus selectivity, and can be used to derive an explicit, maximum-likelihood decoding rule for neural spike trains.
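A probabilistic spiking model of this kind is typically a generalized linear model: a stimulus filter and a spike-history filter feed a nonlinearity that sets the conditional spike intensity, and the point-process likelihood is maximized to fit the filters. A sketch of the intensity computation (the filter shapes, the exponential nonlinearity, and all names are assumptions, not the paper's fitted model):

```python
import numpy as np

def glm_intensity(recent_stimulus, recent_spikes, k_stim, h_hist, dc=-3.0):
    """Conditional intensity (spikes per bin) of a GLM spiking neuron:
    exp( dc + stimulus filter . recent stimulus + history filter . recent spikes )."""
    return np.exp(dc + np.dot(k_stim, recent_stimulus) + np.dot(h_hist, recent_spikes))

# Illustrative filters: a brief excitatory stimulus filter and a suppressive
# (refractory-like) spike-history filter.
k_stim = 0.5 * np.exp(-np.arange(20) / 5.0)
h_hist = -2.0 * np.exp(-np.arange(10) / 3.0)
rng = np.random.default_rng(3)
lam = glm_intensity(rng.normal(size=20), np.zeros(10), k_stim, h_hist)
# In simulation a spike is drawn each bin with probability 1 - exp(-lam); fitting
# maximizes the log-likelihood  sum_t [ y_t * log(lam_t) - lam_t ]  over filters.
```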

Single Neuron Computation: From Dynamical System to Feature Detector

The results of a white noise analysis of a simple dynamical model neuron are presented, and it is found that, under certain assumptions, the form of the relevant features is well defined by the parameters of the dynamical system.
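The "simple dynamical model neuron" in such an analysis can be any low-dimensional spiking model. As an illustration, here is a FitzHugh-Nagumo sketch driven by a noisy current, whose spike times could be fed into the spike-triggered analysis sketched earlier; the choice of model and all parameters here are assumptions, not the paper's:

```python
import numpy as np

def simulate_fhn(current, dt=0.1, a=0.7, b=0.8, eps=0.08, v_spike=1.0):
    """FitzHugh-Nagumo model; spike times are taken as upward crossings of v
    through v_spike. Model choice and parameters are illustrative."""
    v, w = -1.2, -0.6
    spike_times, above = [], False
    for i, i_in in enumerate(current):
        v += dt * (v - v ** 3 / 3.0 - w + i_in)
        w += dt * eps * (v + a - b * w)
        if v >= v_spike and not above:        # upward threshold crossing
            spike_times.append(i * dt)
        above = v >= v_spike
    return np.array(spike_times)

rng = np.random.default_rng(4)
spikes = simulate_fhn(0.4 + 0.3 * rng.normal(size=50_000))
```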
...
