Strictly Positive-Definite Spike Train Kernels for Point-Process Divergences

@article{Park2012StrictlyPS,
  title={Strictly Positive-Definite Spike Train Kernels for Point-Process Divergences},
  author={Il Memming Park and Sohan Seth and Murali Rao and Jos{\'e} Carlos Pr{\'i}ncipe},
  journal={Neural Computation},
  year={2012},
  volume={24},
  pages={2223-2250}
}
Exploratory tools that are sensitive to arbitrary statistical variations in spike train observations open up the possibility of novel neuroscientific discoveries. Developing such tools, however, is difficult due to the lack of Euclidean structure of the spike train space, and an experimenter usually prefers simpler tools that capture only limited statistical features of the spike train, such as mean spike count or mean firing rate. We explore strictly positive-definite kernels on the space of… 
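The kernel construction sketched in the abstract can be illustrated in a few lines of Python. This is a hypothetical minimal sketch, not the paper's implementation: it assumes the memoryless cross-intensity (mCI) kernel as a base similarity between spike trains and exponentiates the induced squared distance to obtain a Schoenberg-type strictly positive-definite kernel; function names, the Laplacian smoothing choice, and the parameters `tau` and `sigma` are illustrative.

```python
import numpy as np

def mci_kernel(x, y, tau=0.01):
    # Memoryless cross-intensity (mCI) kernel: sum of a Laplacian
    # smoothing function over all pairs of spike times from x and y.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if x.size == 0 or y.size == 0:
        return 0.0
    diffs = np.abs(x[:, None] - y[None, :])  # pairwise |t_i - s_j|
    return float(np.exp(-diffs / tau).sum())

def schoenberg_kernel(x, y, tau=0.01, sigma=1.0):
    # Strictly positive-definite kernel obtained by exponentiating
    # the squared RKHS distance induced by the mCI kernel.
    d2 = mci_kernel(x, x, tau) - 2.0 * mci_kernel(x, y, tau) + mci_kernel(y, y, tau)
    return float(np.exp(-d2 / sigma**2))
```

By construction `schoenberg_kernel(x, x) == 1.0` for any spike train, and the kernel is symmetric and bounded in (0, 1], which makes it a convenient drop-in similarity for kernel machines operating on spike trains.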
Parallel spike trains analysis using positive definite kernels
TLDR
A positive-definite kernel on parallel spike trains is defined by extending the memoryless cross-intensity (mCI) kernel, originally defined for spike trains from a single neuron, with a linear combination of cross-neuron interactions; the results indicate that the kernel indirectly represents some of the internal parameters of the neural network.
Kernel Methods on Spike Train Space for Neuroscience: A Tutorial
TLDR
This tutorial illustrates why kernel methods can change the way spike trains are analyzed and processed and provides a detailed overview of the current state of the art and future challenges.
Improved Similarity Measures for Small Sets of Spike Trains
TLDR
Improved spike train measures can be successfully used for fitting neuron models to experimental data, for comparing spike trains, and for classifying spike train data; it is also demonstrated that when similarity measures are used for fitting mathematical models, all previous methods systematically underestimate the noise.
Rescuing neural spike train models from bad MLE
TLDR
This work develops a method that stochastically optimizes the maximum mean discrepancy induced by the kernel and can control the trade-off between different features, which is critical for dealing with model mismatch.
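The maximum mean discrepancy (MMD) mentioned above can be estimated directly from two samples of spike trains given any positive-definite spike train kernel. The following is a minimal sketch, not the cited paper's method: the `spike_kernel` helper (a Laplacian-smoothed pairwise sum with an illustrative `tau`) is an assumption standing in for whatever kernel is chosen, and the estimator shown is the simple biased V-statistic.

```python
import numpy as np

def spike_kernel(x, y, tau=0.01):
    # Stand-in positive-definite spike train kernel: sum of Laplacian
    # evaluations over all pairs of spike times (illustrative only).
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if x.size == 0 or y.size == 0:
        return 0.0
    return float(np.exp(-np.abs(x[:, None] - y[None, :]) / tau).sum())

def mmd2_biased(A, B, k=spike_kernel):
    # Biased (V-statistic) estimate of the squared maximum mean
    # discrepancy between two samples of spike trains A and B.
    kaa = np.mean([k(a, a2) for a in A for a2 in A])
    kbb = np.mean([k(b, b2) for b in B for b2 in B])
    kab = np.mean([k(a, b) for a in A for b in B])
    return kaa + kbb - 2.0 * kab
```

The estimate is zero when the two samples coincide and nonnegative in general; an unbiased U-statistic variant (excluding the diagonal terms) is the usual choice for hypothesis testing.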
Multivariate Multiscale Analysis of Neural Spike Trains
TLDR
This dissertation introduces new methodologies for the analysis of neural spike trains by combining the multiscale method of Kolaczyk and Nowak (2004) with the periodic models of Bickel et al. (2007, 2008) and Shao and Lii (2011), and by introducing the Skellam Process with Resetting (SPR).
Biologically-Inspired Spike-Based Automatic Speech Recognition of Isolated Digits Over a Reproducing Kernel Hilbert Space
TLDR
This novel framework for quantifying time-series structure in spoken words using spikes can outperform both traditional hidden Markov model (HMM) speech processing and neuromorphic implementations based on spiking neural networks (SNNs), yielding accurate and ultra-low-power word spotters.
Supervised Learning Algorithm for Spiking Neurons Based on Nonlinear Inner Products of Spike Trains
TLDR
This paper presents a new supervised multi-spike learning algorithm for spiking neurons, which can implement complex spatio-temporal pattern learning of spike trains, and defines nonlinear inner-product operators to mathematically describe and manipulate spike trains.
Connectivity estimation of neural networks using a spike train kernel
TLDR
A novel method that estimates the underlying connectivity of a given neural network based on a similarity measure applied to spike trains is introduced, which uses a normalized positive definite kernel defined on spike trains to estimate network connectivity.
Neural Decoding with Kernel-Based Metric Learning
TLDR
This work poses the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure, and shows that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve nonlinear dimensionality reduction methods for exploratory neural analysis.

References

Showing 1–10 of 65 references
Improved Similarity Measures for Small Sets of Spike Trains
TLDR
Some existing similarity measures can be improved by taking into account the variance of the measure across spike trains from the same set; without sample-bias compensation, the similarity of real neurons to spiking-neuron models with low stochasticity will be overrated.
A Reproducing Kernel Hilbert Space Framework for Spike Train Signal Processing
TLDR
This letter presents a general framework based on reproducing kernel Hilbert spaces (RKHS) to mathematically describe and manipulate spike trains to allow spike train signal processing from basic principles while incorporating their statistical description as point processes.
Kernel Methods on Spike Train Space for Neuroscience: A Tutorial
TLDR
This tutorial illustrates why kernel methods can change the way spike trains are analyzed and processed and provides a detailed overview of the current state of the art and future challenges.
Linking non-binned spike train kernels to several existing spike train metrics
Spikernels: Predicting Arm Movements by Embedding Population Spike Rate Patterns in Inner-Product Spaces
TLDR
The merits of the modeling approach are demonstrated by comparing the Spikernel to various standard kernels in the task of predicting hand movement velocities from cortical recordings and all of the kernels outperform the standard scalar product used in linear regression.
Metric-space analysis of spike trains: theory, algorithms and application
TLDR
The mathematical basis of a new approach to the analysis of temporal coding is the construction of several families of novel distances (metrics) between neuronal impulse trains that formalize physiologically based hypotheses for those aspects of the firing pattern that might be stimulus dependent and make essential use of the point-process nature of neural discharges.
Measurement of variability dynamics in cortical spike trains
A novel family of non-parametric cumulative based divergences for point processes
TLDR
This paper extends the traditional Kolmogorov-Smirnov and Cramer-von-Mises tests to the space of spike trains via stratification, and shows that these statistics can be consistently estimated from data without any free parameter.
An adaptive decoder from spike trains to micro-stimulation using kernel least-mean-squares (KLMS)
  • Lin Li, I. Park, J. Príncipe · 2011 IEEE International Workshop on Machine Learning for Signal Processing, 2011
TLDR
This paper proposes a nonlinear adaptive decoder for somatosensory micro-stimulation based on the kernel least mean square (KLMS) algorithm applied directly on the space of spike trains, and transforms the vector of spike times into a function in reproducing kernel Hilbert space (RKHS).