Corpus ID: 46346135

Extracting low-dimensional dynamics from multiple large-scale neural population recordings by learning to predict correlations

Marcel Nonnenmacher, Srinivas C. Turaga, Jakob H. Macke
A powerful approach for understanding neural population dynamics is to extract low-dimensional trajectories from population recordings using dimensionality reduction methods. Current approaches for dimensionality reduction on neural data are limited to single population recordings, and cannot identify dynamics embedded across multiple measurements. We propose an approach for extracting low-dimensional dynamics from multiple, sequential recordings. Our algorithm scales to data comprising…
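The paper's premise can be illustrated with a toy simulation (this is not the paper's actual fitting algorithm, which learns parameters by predicting correlations; all parameter values and variable names below are invented for illustration). Two halves of a neural population are "recorded" in separate sessions, so their cross-correlations are never measured jointly, yet a shared low-dimensional latent dynamical system fully determines them:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, n = 20_000, 2, 20        # timesteps, latent dimension, neurons

# Latent linear dynamics: x_{t+1} = A x_t + process noise
A = 0.8 * np.array([[np.cos(0.1), -np.sin(0.1)],
                    [np.sin(0.1),  np.cos(0.1)]])
x = np.zeros((T, d))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + 0.3 * rng.standard_normal(d)

# Linear-Gaussian observations: y_t = C x_t + independent noise
C = rng.standard_normal((n, d))
y = x @ C.T + 0.5 * rng.standard_normal((T, n))

# Pretend the two halves of the population were recorded in separate
# sessions: their cross-covariances are never observed together, yet
# they are fully determined by the shared latents (rank <= d).
half = n // 2
cov = np.cov(y.T)                       # empirical full covariance
latent_cov = C @ np.cov(x.T) @ C.T      # low-rank, latent-driven part

off_emp = cov[:half, half:]             # "unobserved" cross-session block
off_lat = latent_cov[:half, half:]
err = np.linalg.norm(off_emp - off_lat) / np.linalg.norm(off_emp)
print(f"relative error of low-rank prediction: {err:.3f}")
```

Because the observation noise is independent across neurons, the cross-session covariance block is carried entirely by the low-dimensional latents, which is what makes stitching sequential recordings into one model feasible.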


Inferring single-trial neural population dynamics using sequential auto-encoders

LFADS, a deep learning method for analyzing neural population activity, can extract neural dynamics from single-trial recordings, stitch separate datasets into a single model, and infer perturbations to these dynamics, for example from behavioral choices.

Comparing high-dimensional neural recordings by aligning their low-dimensional latent representations

Algorithms are presented that map two datasets into a shared space where they can be directly compared; the authors argue that such alignment is key for comparing high-dimensional neural activity across times, subsets of neurons, and individuals.

Deep inference of latent dynamics with spatio-temporal super-resolution using selective backpropagation through time

It is demonstrated that spatio-temporal super-resolution can be obtained in neuronal time series by exploiting relationships among neurons embedded in latent low-dimensional population dynamics.

Hierarchical recurrent state space models reveal discrete and continuous dynamics of neural activity in C. elegans

This work develops state space models that decompose neural time-series into segments with simple, linear dynamics and incorporates these models into a hierarchical framework that combines partial recordings from many worms to learn shared structure, while still allowing for individual variability.

Probing the Relationship Between Latent Linear Dynamical Systems and Low-Rank Recurrent Neural Network Models

This work examines the precise relationship between latent LDS models and linear low-rank RNNs, and shows that a partially observed RNN is better represented by an LDS model than by an RNN consisting of only observed units.

Predicting synchronous firing of large neural populations from sequential recordings

This work shows that the activity of a full population of retinal ganglion cells can be inferred from sequential recordings, using a novel method based on copula distributions and maximum entropy modeling, and that the method generalizes to predict population responses to different stimuli and even to different experiments.

Long-term stability of cortical population dynamics underlying consistent behavior

It is reported that latent dynamics in the neural manifold across three cortical areas are stable throughout years of consistent behavior, and the authors posit that these dynamics are fundamental building blocks of learned behavior.

Unsupervised methods for large-scale, cell-resolution neural data analysis

This work investigates how to best transform two-photon calcium imaging recordings into an easier-to-analyse matrix containing time courses of individual neurons, and describes an interpretable non-linear dynamical model of neural population activity.

Neural Latents Benchmark '21: Evaluating latent variable models of neural population activity

This work introduces a benchmark suite for latent variable modeling of neural population activity, identifies unsupervised evaluation as a common framework for evaluating models across datasets, and applies several baselines that demonstrate the benchmark's diversity.

Discrete Sequential Information Coding: Heteroclinic Cognitive Dynamics

A methodology to build simple kinetic equations that can be the mathematical skeleton of a dynamical theory of consciousness is discussed, based on winnerless competition low-frequency dynamics, which leads to a large variety of coding regimes that are invariant in time.



Robust learning of low-dimensional dynamics from large neural ensembles

This work shows on model data that the parameters of latent linear dynamical systems can be recovered, that the true latent subspace can still be recovered even when the dynamics are not stationary, and demonstrates an extension of nuclear-norm minimization that can separate sparse local connections from global latent dynamics.

Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity

A novel method for extracting neural trajectories, Gaussian-process factor analysis (GPFA), is presented, which unifies the smoothing and dimensionality-reduction operations in a common probabilistic framework and shows how such methods can be a powerful tool for relating spiking activity across a neural population to the subject's behavior on a single-trial basis.

Dimensionality reduction for large-scale neural recordings

This review examines three important motivations for population studies: single-trial hypotheses requiring statistical power, hypotheses about population response structure, and exploratory analyses of large data sets; it also offers practical advice about selecting methods and interpreting their outputs.

Spectral learning of linear dynamics from generalised-linear observations with application to neural population data

The extended subspace identification algorithm is consistent and accurately recovers the correct parameters on large simulated data sets with a single calculation, avoiding the costly iterative computation of approximate expectation-maximisation (EM).

Clustered factor analysis of multineuronal spike data

This work extends unstructured factor models by proposing a model that discovers subpopulations or groups of cells from the pool of recorded neurons, and shows that it uncovers meaningful clustering structure in the data.

Empirical models of spiking in neural populations

This work argues that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling.

Inferring neural population dynamics from multiple partial recordings of the same neural circuit

A statistical method is presented for "stitching" together sequentially imaged sets of neurons into one model, by phrasing the problem as fitting a latent dynamical system with missing observations; this substantially expands the population sizes for which population dynamics can be characterized, beyond the number of simultaneously imaged neurons.

Automated long-term recording and analysis of neural activity in behaving animals

An automated platform records neural activity and behavior in rodents continuously (24/7) over months, and a fully automated spike-sorting algorithm allows single units to be tracked over weeks of recording.

Efficient "Shotgun" Inference of Neural Connectivity from Highly Sub-sampled Activity Data

Using a generalized linear model for a spiking recurrent neural network, a scalable approximate expected log-likelihood-based Bayesian method is developed to perform network inference given this type of data, in which only a small fraction of the network is observed in each time bin.

Smoothing of, and Parameter Estimation from, Noisy Biophysical Recordings

Biophysically detailed models of single cells are difficult to fit to real data. Recent advances in imaging techniques allow simultaneous access to various intracellular variables, and these data can…