Corpus ID: 8760901

Bayesian Spike-Triggered Covariance Analysis

@inproceedings{Park2011BayesianSC,
  title={Bayesian Spike-Triggered Covariance Analysis},
  author={Il Memming Park and Jonathan W. Pillow},
  booktitle={NIPS},
  year={2011}
}
Neurons typically respond to a restricted number of stimulus features within the high-dimensional space of natural stimuli. Here we describe an explicit model-based interpretation of traditional estimators for a neuron's multi-dimensional feature space, which allows for several important generalizations and extensions. First, we show that traditional estimators based on the spike-triggered average (STA) and spike-triggered covariance (STC) can be formalized in terms of the "expected log… 
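To make the traditional estimators mentioned in the abstract concrete, the following is a minimal sketch (not code from the paper) of classical STA/STC computation, assuming a Gaussian white-noise stimulus matrix X with one row per time bin and a spike-count vector y; the function name sta_stc and the simulated one-filter LNP example are illustrative assumptions.

import numpy as np

def sta_stc(X, y):
    """Spike-triggered average and covariance from stimuli X (T x d) and spike counts y (T,)."""
    nsp = y.sum()                        # total spike count
    sta = (X.T @ y) / nsp                # spike-triggered average, shape (d,)
    Xc = X - sta                         # center the spike-triggered ensemble on the STA
    stc = (Xc.T * y) @ Xc / (nsp - 1)    # spike-weighted covariance, shape (d, d)
    return sta, stc

# Hypothetical usage: white-noise stimulus and a one-filter LNP (Poisson) response.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 20))
y = rng.poisson(np.exp(X[:, 0] - 1.0))
sta, stc = sta_stc(X, y)
# Candidate filters are read off from the STA and from STC eigenvectors whose
# eigenvalues differ markedly from those of the raw stimulus covariance.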

Citations

The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction
TLDR
Model-based dimensionality reduction methods for neurons with non-Poisson firing statistics are introduced and framed equivalently in likelihood-based or information-theoretic terms, and it is shown how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model.
Convolutional spike-triggered covariance analysis for neural subunit models
TLDR
This work shows that a "convolutional" decomposition of the spike-triggered average (STA) and covariance (STC) provides an asymptotically efficient estimator for a class of quadratic subunit models, and establishes theoretical conditions for identifiability of the subunit and pooling weights.
Scalable Bayesian inference for high-dimensional neural receptive fields
TLDR
This work focuses on scaling automatic smoothness determination (ASD), an empirical Bayesian method for RF estimation, to high-dimensional settings, and introduces a suite of scalable approximate methods that exploit Kronecker and Toeplitz structure in the stimulus autocovariance.
Spike-triggered covariance: geometric proof, symmetry properties, and extension beyond Gaussian stimuli
TLDR
A geometric proof of consistency is presented, which provides insight into the foundations of the spike-triggered covariance technique, in particular, into the crucial role played by the geometry of stimulus space and symmetries in the stimulus–response relation.
Spectral methods for neural characterization using generalized quadratic models
TLDR
The generalized quadratic model provides a natural framework for combining multi-dimensional stimulus sensitivity and spike-history dependencies within a single model and provides closed-form estimators under a large class of non-Gaussian stimulus distributions.
Estimating smooth and sparse neural receptive fields with a flexible spline basis
TLDR
This work encodes prior knowledge for the estimation of STRFs by choosing a set of basis functions with the desired properties: a natural cubic spline basis, which is computationally efficient and can be easily applied to Linear-Gaussian and Linear-Nonlinear-Poisson models, as well as to more complicated Linear-Nonlinear-Linear-Nonlinear cascade models or spike-triggered clustering methods.
Scaling the Poisson GLM to massive neural datasets through polynomial approximations
TLDR
This work develops highly scalable approximate inference methods for Poisson generalized linear models (GLMs) that require only a single pass over the data and derives closed-form solutions to the approximate maximum likelihood and MAP estimates, posterior distribution, and marginal likelihood.
Models of Neuronal Stimulus-Response Functions: Elaboration, Estimation, and Evaluation
TLDR
This review provides a unifying and critical survey of the techniques that have been brought to bear on this effort thus far—ranging from the classical linear receptive field model to modern approaches incorporating normalization and other nonlinearities.
Characterization of Nonlinear Neuron Responses Mid Year Report
TLDR
This project will examine both linear and nonlinear models, and use both the feature subspace technique and the maximum likelihood technique to fit the parameters of the appropriate model.
Characterization of Nonlinear Neuron Responses
TLDR
This project will examine both linear and nonlinear models, and use both the feature subspace technique and the maximum likelihood technique to fit the parameters of the appropriate model.

References

SHOWING 1-10 OF 35 REFERENCES
Dimensionality reduction in neural models: an information-theoretic generalization of spike-triggered average and covariance analysis.
TLDR
An information-theoretic framework is presented for fitting neural spike responses with a Linear-Nonlinear-Poisson cascade model; it provides an explicit "default" model of the nonlinear stage mapping the filter responses to spike rate, in the form of a ratio of Gaussians.
Bayesian Inference for Spiking Neuron Models with a Sparsity Prior
TLDR
Using the expectation propagation algorithm, a Bayesian treatment of generalized linear models is presented that approximates the full posterior distribution over all weights; the sparsity of the Laplace prior is used to select those filters from a spike-triggered covariance analysis that are most informative about the neural response.
Bayesian Inference for Generalized Linear Models for Spiking Neurons
TLDR
It is shown how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm, and it is found that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate.
Maximum likelihood estimation of cascade point-process neural encoding models
TLDR
This work investigates the shape of the likelihood function for this type of model, gives a simple condition on the nonlinearity ensuring that no non-global local maxima exist in the likelihood—leading to efficient algorithms for the computation of the maximum likelihood estimator—and discusses the implications for the form of the allowed nonlinearities.
Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods
TLDR
This work introduces methods based on Gaussian process nonparametric Bayesian techniques for estimating two-dimensional firing rate maps, and illustrates the method's flexibility and performance on a variety of simulated and real data.
A Generalized Linear Model for Estimating Spectrotemporal Receptive Fields from Responses to Natural Sounds
TLDR
This model is compared to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs; a GLM with a sparse prior is found to predict novel responses to both stimulus classes significantly better than NRC.
Convergence Properties of Some Spike-Triggered Analysis Techniques
TLDR
An estimator for the LN model parameters which is designed to converge under general conditions to the correct model is introduced, and the rate of convergence of this estimator is derived.
Convergence properties of three spike-triggered analysis techniques
TLDR
An estimator for the LN model parameters which is designed to converge under general conditions to the correct model is introduced, and the rate of convergence of this estimator is derived and provided.
A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects.
TLDR
A statistical framework based on the point process likelihood function is developed to relate a neuron's spiking probability to three typical covariates: the neuron's own spiking history, concurrent ensemble activity, and extrinsic covariates such as stimuli or behavior.
Analyzing Neural Responses to Natural Signals: Maximally Informative Dimensions
TLDR
A method is proposed that allows for a rigorous statistical analysis of neural responses to natural stimuli that are non-Gaussian and exhibit strong correlations; it maximizes the mutual information between the neural responses and projections of the stimulus onto low-dimensional subspaces.