Marius Pachitariu

Biological tissue is often composed of cells with similar morphologies replicated throughout large volumes, and many biological applications rely on the accurate identification of these cells and their locations from image data. Here we develop a generative model that captures the regularities present in images composed of repeating elements of a few …
Neural language models (LMs) based on recurrent neural networks (RNN) are some of the most successful word and character-level LMs. Why do they work so well, in particular better than linear neural LMs? Possible explanations are that RNNs have an implicitly better regularization or that RNNs have a higher capacity for storing patterns due to their …
We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables. Trained on sequences of images, the model learns to represent different movement directions in different variables. We use an online approximate inference scheme that can be mapped to the dynamics of networks of neurons. …
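A binary-gated Gaussian latent is a Gaussian-valued variable switched on or off by a binary gate (a spike-and-slab-style representation). As a minimal sampling sketch of that latent structure only, with hypothetical sizes and gate probability not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: T frames, K latent variables.
T, K = 5, 8

# Each latent is a Gaussian magnitude multiplied by an independent binary gate,
# so most latents are exactly zero at any time (a sparse representation).
gates = rng.random((T, K)) < 0.3          # binary gates, on with probability 0.3
values = rng.standard_normal((T, K))      # Gaussian magnitudes ("slab")
z = gates * values                        # gated latents: zero wherever the gate is off
```

The gating is what lets different variables specialize: a latent contributes to the image model only when its gate is active.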
We determined how learning modifies neural representations in primary visual cortex (V1) during acquisition of a visually guided behavioral task. We imaged the activity of the same layer 2/3 neuronal populations as mice learned to discriminate two visual patterns while running through a virtual corridor, where one pattern was rewarded. Improvements in …
Cortical networks exhibit intrinsic dynamics that drive coordinated, large-scale fluctuations across neuronal populations and create noise correlations that impact sensory coding. To investigate the network-level mechanisms that underlie these dynamics, we developed novel computational techniques to fit a deterministic spiking network model …
Population neural recordings with long-range temporal structure are often best understood in terms of a common underlying low-dimensional dynamical process. Advances in recording technology provide access to an ever-larger fraction of the population, but the standard computational approaches available to identify the collective dynamics scale poorly with …
We introduce the Recurrent Generalized Linear Model (R-GLM), an extension of GLMs based on a compact representation of the spiking history through a linear recurrent neural network. R-GLMs match the predictive likelihood of Linear Dynamical Systems (LDS) with linear-Gaussian observations. We also address a disadvantage of GLMs, including the R-GLM, that …
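The abstract describes the R-GLM's key structural idea: a linear recurrent state compresses the spike history, and a GLM readout maps that state to a firing rate. A minimal generative sketch of that structure, with all parameter values, dimensions, and variable names hypothetical rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: T time bins, H hidden units summarizing the history.
T, H = 200, 4

# Hypothetical parameters: a stable linear recurrence compresses past spikes
# into a low-dimensional state h_t; an exponential-nonlinearity GLM readout
# turns that state into a Poisson firing rate.
A = 0.9 * np.eye(H) + 0.01 * rng.standard_normal((H, H))  # recurrent weights
b = 0.1 * rng.standard_normal(H)                          # spike-input weights
w = 0.1 * rng.standard_normal(H)                          # readout weights
bias = -2.0                                               # baseline log-rate

h = np.zeros(H)                 # recurrent summary of the spike history
rates = np.zeros(T)
spikes = np.zeros(T, dtype=int)
for t in range(T):
    rates[t] = np.exp(bias + w @ h)    # conditional intensity from the state
    spikes[t] = rng.poisson(rates[t])  # sample this bin's spike count
    h = A @ h + b * spikes[t]          # linear recurrent update with the new spike
```

Compared with a standard GLM, whose history filter must span a fixed window of past bins, the recurrent state carries history of unbounded length in a fixed number of parameters.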
The primary mode of information transmission in neural networks is unknown: is it a rate code or a timing code? Assuming that presynaptic spike trains are stochastic and a rate code is used, probabilistic models of spiking can reveal properties of the neural computation performed at the level of single neurons. Here we show that depending on the …