Estimating Information Rates with Confidence Intervals in Neural Spike Trains

Abstract

Information theory provides a natural set of statistics to quantify the amount of knowledge a neuron conveys about a stimulus. Prior work (Kennel, Shlens, Abarbanel, & Chichilnisky, 2005) demonstrated how to reliably estimate, with a Bayesian confidence interval, the entropy rate from a discrete, observed time series. We extend this method to measure the rate of novel information that a neural spike train encodes about a stimulus: the average and specific mutual information rates. Our estimator makes few assumptions about the underlying neural dynamics, shows excellent performance in experimentally relevant regimes, and uniquely provides confidence intervals bounding the range of information rates compatible with the observed spike train. We validate this estimator with simulations of spike trains and highlight how stimulus parameters affect its convergence in bias and variance. Finally, we apply these ideas to a recording from a guinea pig retinal ganglion cell and compare the results to those of a simple linear decoder.
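For readers who want the target quantities pinned down, the standard decomposition underlying the average and specific mutual information is worth stating (this follows one common convention in the literature; the paper's own per-unit-time definitions may differ in detail). Writing R for the spike-train response and S for the stimulus, over a window of duration T,

\[
\frac{1}{T}\, I(R;S) \;=\; \frac{1}{T}\,\bigl[H(R) - H(R \mid S)\bigr]
\;=\; \frac{1}{T}\sum_{s} p(s)\,\underbrace{\bigl[H(R) - H(R \mid S=s)\bigr]}_{i(s)},
\]

so the average mutual information rate is the stimulus-weighted mean of the specific information rates i(s)/T.

To make the estimation problem concrete, here is a deliberately naive sketch, not the paper's method: the paper builds a Bayesian confidence interval on top of a model of the discrete spike train, whereas the sketch below uses the simplest plug-in, direct-method-style word-entropy estimate with a percentile-bootstrap interval. All names (word_entropy_rate, word_len, and so on) are hypothetical, and the bootstrap ignores serial dependence between words, so treat it as illustration only.

    import numpy as np
    from collections import Counter

    def word_entropy_rate(spikes, word_len, dt, n_boot=200, seed=0):
        """Naive plug-in entropy rate (bits/s) of a binned 0/1 spike train,
        with a percentile-bootstrap confidence interval (illustration only)."""
        rng = np.random.default_rng(seed)
        # Chop the binary train into non-overlapping words of word_len bins.
        n_words = len(spikes) // word_len
        words = [tuple(spikes[i * word_len:(i + 1) * word_len])
                 for i in range(n_words)]

        def plug_in_rate(ws):
            counts = np.array(list(Counter(ws).values()), dtype=float)
            p = counts / counts.sum()
            h_word = -(p * np.log2(p)).sum()   # entropy in bits per word
            return h_word / (word_len * dt)    # convert to bits per second

        estimate = plug_in_rate(words)
        # Resample words with replacement; this ignores temporal correlations
        # across word boundaries, one reason a principled Bayesian interval is
        # preferable for real spike trains.
        boots = [plug_in_rate([words[j] for j in rng.integers(0, n_words, n_words)])
                 for _ in range(n_boot)]
        low, high = np.percentile(boots, [2.5, 97.5])
        return estimate, (low, high)

    # Example: a 20 s Bernoulli "spike train" at 1 ms resolution.
    spikes = (np.random.default_rng(1).random(20_000) < 0.05).astype(int)
    rate, ci = word_entropy_rate(spikes, word_len=8, dt=0.001)
    print(f"entropy rate ~ {rate:.1f} bits/s, 95% CI {ci[0]:.1f}-{ci[1]:.1f}")

Subtracting an analogous noise-entropy rate, estimated across repeated presentations of the same stimulus, from this total entropy rate would give a direct-method information-rate estimate; the paper's contribution is to perform this kind of accounting with Bayesian confidence intervals and minimal assumptions about the neural dynamics.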

DOI: 10.1162/neco.2007.19.7.1683

Cite this paper

@article{Shlens2007EstimatingIR,
  title   = {Estimating Information Rates with Confidence Intervals in Neural Spike Trains},
  author  = {Jonathon Shlens and Matthew B. Kennel and Henry D. I. Abarbanel and E. J. Chichilnisky},
  journal = {Neural Computation},
  year    = {2007},
  volume  = {19},
  number  = {7},
  pages   = {1683--1719}
}