Corpus ID: 5298478

Linear dynamical neural population models through nonlinear embeddings

@inproceedings{Gao2016LinearDN,
  title={Linear dynamical neural population models through nonlinear embeddings},
  author={Yuanjun Gao and Evan Archer and Liam Paninski and John P. Cunningham},
  booktitle={NIPS},
  year={2016}
}
A body of recent work in modeling neural activity focuses on recovering low-dimensional latent features that capture the statistical structure of large-scale neural populations. Most such approaches have focused on linear generative models, where inference is computationally tractable. Here, we propose fLDS, a general class of nonlinear generative models that permits the firing rate of each neuron to vary as an arbitrary smooth function of a latent, linear dynamical state. This extra… 
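To make the model class concrete, the following is a minimal generative sketch in NumPy, assuming a latent linear-Gaussian state pushed through an arbitrary smooth positive map f into per-neuron Poisson rates. The function names and the toy exponential-linear f are illustrative only; in the paper, f is parameterized by a neural network and learned jointly with the dynamics via variational inference.

```python
import numpy as np

def sample_flds(A, Q, Q0, f, T, rng):
    """Draw one trial from an fLDS-style generative model:
    z_1 ~ N(0, Q0),  z_t = A z_{t-1} + N(0, Q),  x_t ~ Poisson(f(z_t))."""
    p = A.shape[0]
    z = np.linalg.cholesky(Q0) @ rng.standard_normal(p)
    Lq = np.linalg.cholesky(Q)
    zs, xs = [], []
    for _ in range(T):
        zs.append(z)
        xs.append(rng.poisson(f(z)))           # per-neuron spike counts
        z = A @ z + Lq @ rng.standard_normal(p)
    return np.array(zs), np.array(xs)

# Toy smooth embedding from a 2-D latent to 10 neurons (hypothetical;
# the paper learns f as a neural network):
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 10))
f = lambda z: np.exp(z @ W - 1.0)              # smooth, strictly positive rates
zs, xs = sample_flds(A=0.95 * np.eye(2), Q=0.05 * np.eye(2),
                     Q0=np.eye(2), f=f, T=200, rng=rng)
```

Note that choosing f as the exponential of an affine map recovers the standard Poisson linear dynamical system, which fLDS contains as a special case.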

Citations

Neural Dynamics Discovery via Gaussian Process Recurrent Neural Networks
TLDR
This paper proposes a novel latent dynamic model that captures nonlinear, non-Markovian, long short-term time-dependent dynamics via recurrent neural networks and tackles complex nonlinear embeddings via a non-parametric Gaussian process.
Bayesian Inference in High-Dimensional Time-Series with the Orthogonal Stochastic Linear Mixing Model
TLDR
A new regression framework, the orthogonal stochastic linear mixing model (OSLMM), is proposed that introduces an orthogonal constraint among the mixing coefficients to reduce the computational burden of inference while retaining the capability to handle complex output dependence.
Neural field models for latent state inference: Application to large-scale neuronal recordings
TLDR
It is shown that classical neural field approaches can yield latent state-space equations, and inference is demonstrated for a neural field model of excitatory spatiotemporal waves that emerge in the developing retina.
Characterizing the nonlinear structure of shared variability in cortical neuron populations using latent variable models
TLDR
It is demonstrated how nonlinear latent variable models can be used to describe population variability, and it is suggested that a range of methods is necessary to study different brain regions under different experimental conditions.
Structured Inference Networks for Nonlinear State Space Models
TLDR
A unified algorithm is introduced to efficiently learn a broad class of linear and non-linear state space models, including variants where the emission and transition distributions are modeled by deep neural networks.
Recurrent Switching Dynamical Systems Models for Multiple Interacting Neural Populations
TLDR
Recurrent switching linear dynamical systems models are proposed for multiple interacting populations, in which each high-dimensional neural population is represented by its own set of latent variables that evolve dynamically in time; a discrete set of dynamical states allows the nature of the interactions between populations to change over time.
Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE
The ability to record activity from hundreds of neurons simultaneously in the brain has placed an increasing demand on developing appropriate statistical techniques to analyze such data.
Scalable Bayesian GPFA with automatic relevance determination and discrete noise models
TLDR
A fully Bayesian yet scalable version of Gaussian process factor analysis (bGPFA) is developed, which models neural data as arising from a set of inferred latent processes with a prior that encourages smoothness over time, and a novel variational inference strategy is introduced that scales near-linearly in time.
Probing variability in a cognitive map using manifold inference from neural dynamics
TLDR
A conceptual model is introduced that explains variability in terms of underlying, population-level structure in single-trial neural activity; the results suggest that trial-to-trial variability in the hippocampus is structured and may reflect the operation of internal cognitive processes.
Mesoscopic modeling of hidden spiking neurons
TLDR
It is shown, on synthetic spike trains, that a few observed neurons are sufficient for neuLVM to perform model inversion of large SNNs, in the sense that it can recover connectivity parameters, infer single-trial latent population activity, reproduce ongoing metastable dynamics, and generalize when subjected to perturbations mimicking photo-stimulation.
...

References

Showing 1-10 of 30 references
Robust learning of low-dimensional dynamics from large neural ensembles
TLDR
This work shows on model data that the parameters of latent linear dynamical systems can be recovered, and that the true latent subspace can still be recovered even if the dynamics are not stationary; it also demonstrates an extension of nuclear norm minimization that can separate sparse local connections from global latent dynamics.
Variational Latent Gaussian Process for Recovering Single-Trial Dynamics from Population Spike Trains
TLDR
The variational latent Gaussian process (vLGP) is proposed, a practical and efficient inference method that combines a generative model with a history-dependent point process observation, together with a smoothness prior on the latent trajectories, to reveal hidden neural dynamics from large-scale neural recordings.
Low-dimensional models of neural population activity in sensory cortical circuits
TLDR
A statistical model of neural population activity is introduced that integrates a nonlinear receptive field model with a latent dynamical model of ongoing cortical activity, capturing temporal dynamics and correlations due to shared stimulus drive as well as common noise.
Empirical models of spiking in neural populations
TLDR
This work argues that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling.
Black Box Variational Inference for State Space Models
TLDR
A structured Gaussian variational approximate posterior is proposed that carries the same intuition as the standard Kalman filter-smoother but permits the same inference approach to approximate the posterior of much more general, nonlinear latent variable generative models.
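For intuition about the Kalman-smoother structure: under linear dynamics, the precision (inverse covariance) of the stacked latent path is block tridiagonal, and a Gaussian variational posterior that keeps this sparsity can be sampled efficiently. A rough NumPy illustration follows, with dense algebra for brevity; function names are hypothetical, and a practical implementation would exploit the block-tridiagonal Cholesky factor for cost linear in T.

```python
import numpy as np

def lds_path_precision(A, Q, Q0, T):
    """Precision of the stacked latent path z_{1:T} under
    z_1 ~ N(0, Q0), z_t = A z_{t-1} + N(0, Q): block tridiagonal."""
    p = A.shape[0]
    Qi, Q0i = np.linalg.inv(Q), np.linalg.inv(Q0)
    J = np.zeros((T * p, T * p))
    for t in range(T):
        i = slice(t * p, (t + 1) * p)
        J[i, i] += Q0i if t == 0 else Qi
        if t < T - 1:                    # coupling to the next time step
            j = slice((t + 1) * p, (t + 2) * p)
            J[i, i] += A.T @ Qi @ A
            J[i, j] = -A.T @ Qi
            J[j, i] = -Qi @ A
    return J

def sample_from_precision(mu, J, rng):
    """If J = L L^T (Cholesky), then mu + L^{-T} eps ~ N(mu, J^{-1})."""
    L = np.linalg.cholesky(J)
    eps = rng.standard_normal(mu.shape)
    return mu + np.linalg.solve(L.T, eps)
```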
High-dimensional neural spike train analysis with generalized count linear dynamical systems
TLDR
The generalized count linear dynamical system is developed, which relaxes the Poisson assumption by using a more general exponential family for count data and can be tractably learned by extending recent advances in variational inference techniques.
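As a sketch of the idea, the generalized count family augments the Poisson log-pmf with a free function g of the count, recovering the Poisson distribution with rate exp(theta) when g is identically zero. The function name and the truncated support are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def generalized_count_pmf(theta, g, K):
    """p(k) proportional to exp(theta * k + g(k)) / k! on k = 0..K
    (support truncated at K for tractability). With g == 0 this is
    exactly Poisson with rate exp(theta)."""
    ks = np.arange(K + 1)
    log_p = theta * ks + g(ks) - gammaln(ks + 1)
    return np.exp(log_p - logsumexp(log_p))   # normalize in log space

# Example: a concave g yields an under-dispersed count distribution.
pmf = generalized_count_pmf(theta=1.0, g=lambda k: -0.1 * k**2, K=50)
```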
Clustered factor analysis of multineuronal spike data
TLDR
This work extends unstructured factor models by proposing a model that discovers subpopulations or groups of cells from the pool of recorded neurons, and shows that it uncovers meaningful clustering structure in the data.
Importance Weighted Autoencoders
TLDR
The importance weighted autoencoder (IWAE) is introduced: a generative model with the same architecture as the VAE but trained with a strictly tighter log-likelihood lower bound derived from importance weighting; empirically, IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
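The bound itself is compact: draw k samples z_i from q(z|x) and average the importance weights p(x, z_i)/q(z_i|x) inside the log. A minimal NumPy estimator follows (names illustrative); with k = 1 it reduces to the standard ELBO estimator, and the bound tightens monotonically in k.

```python
import numpy as np

def log_mean_exp(a):
    """Numerically stable log(mean(exp(a)))."""
    m = np.max(a)
    return m + np.log(np.mean(np.exp(a - m)))

def iwae_bound(log_p_xz, log_q_z):
    """k-sample importance-weighted bound, given per-sample log densities
    log_p_xz[i] = log p(x, z_i) and log_q_z[i] = log q(z_i | x)
    for z_i drawn i.i.d. from q(z | x)."""
    return log_mean_exp(log_p_xz - log_q_z)

# Example with k = 5 stand-in samples:
rng = np.random.default_rng(0)
bound = iwae_bound(log_p_xz=rng.normal(size=5), log_q_z=rng.normal(size=5))
```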
Auto-Encoding Variational Bayes
TLDR
A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
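The key device is the reparameterization trick: the sample z = mu + sigma * eps with eps ~ N(0, I) is a differentiable function of the variational parameters, so the ELBO can be optimized by stochastic gradient ascent. A minimal sketch of that step and the closed-form Gaussian KL term (names hypothetical):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps, eps ~ N(0, I): sampling becomes a
    deterministic, differentiable function of (mu, log_var)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def gaussian_kl(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# ELBO = E_q[ log p(x | z) ] - KL(q || p), estimated with one sample of z.
```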
Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity
TLDR
A novel method for extracting neural trajectories, Gaussian-process factor analysis (GPFA), is presented, which unifies the smoothing and dimensionality-reduction operations in a common probabilistic framework; it is shown how such methods can be a powerful tool for relating the spiking activity across a neural population to the subject's behavior on a single-trial basis.
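Generatively, GPFA places an independent GP prior over time on each latent dimension and maps the latent state to observations through a linear-Gaussian readout. A compact NumPy sketch of one draw from such a model follows; the function names and the squared-exponential kernel choice are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(ts, length_scale):
    """Squared-exponential covariance over the time points ts."""
    diff = ts[:, None] - ts[None, :]
    return np.exp(-0.5 * (diff / length_scale) ** 2)

def sample_gpfa(ts, C, d, R_diag, length_scale, rng):
    """One draw from a GPFA-style generative model: each latent dimension
    is an independent GP over time; observations are a linear-Gaussian
    readout, y_t = C z_t + d + N(0, diag(R_diag))."""
    T, (n, p) = len(ts), C.shape
    K = rbf_kernel(ts, length_scale) + 1e-6 * np.eye(T)  # jitter for stability
    L = np.linalg.cholesky(K)
    Z = L @ rng.standard_normal((T, p))                  # T x p latent paths
    Y = Z @ C.T + d + rng.standard_normal((T, n)) * np.sqrt(R_diag)
    return Z, Y
```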
...