
State space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational…

In this paper, we present a model of distributed parameter estimation in networks, where agents have access to partially informative measurements over time. Each agent faces a local identification problem, in the sense that it cannot consistently estimate the parameter in isolation. We prove that, despite local identification problems, if agents update…

Consider the n-dimensional vector y = Xβ + ε, where β ∈ R^p has only k nonzero entries and ε ∈ R^n is Gaussian noise. This can be viewed as a linear system with sparsity constraints corrupted by noise, where the objective is to estimate the sparsity pattern of β given the observation vector y and the measurement matrix X. First, we derive a nonasymptotic upper bound…

Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian…
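To make the rate-map estimation problem concrete, here is a minimal sketch using a simple kernel smoother as a stand-in for the Gaussian-process methods the abstract describes (this is not the paper's estimator; the grid size, field shape, and bandwidth are all invented):

```python
import numpy as np

# Simulate spike counts on a 2D spatial grid with one place field
# (hypothetical numbers), then smooth with a Nadaraya-Watson kernel average.
rng = np.random.default_rng(2)
G = 20
xs, ys = np.meshgrid(np.arange(G), np.arange(G), indexing="ij")
true_rate = 5.0 * np.exp(-((xs - 12) ** 2 + (ys - 7) ** 2) / 20.0) + 0.5
counts = rng.poisson(true_rate)       # one unit-time visit per bin, for simplicity

# RBF-weighted average over grid locations; bandwidth 2 bins is illustrative.
coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 2.0 ** 2))
est = (W @ counts.ravel()) / W.sum(axis=1)
est = est.reshape(G, G)               # smoothed firing-rate map
```

A Gaussian-process estimator would additionally supply posterior uncertainty and principled bandwidth selection, which is the point of the methods reviewed above.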

There has recently been a great deal of interest in inferring network connectivity from the spike trains in populations of neurons. One class of useful models that can be fit easily to spiking data is based on generalized linear point process models from statistics. Once the parameters for these models are fit, the analyst is left with a nonlinear spiking…
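A hedged sketch of the kind of generalized linear point-process model described above, for a single neuron (the filters, baseline rate, and bin width are invented for illustration): the conditional intensity is an exponential function of a stimulus filter output plus a spike-history term, and spikes are drawn bin by bin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete-time Poisson GLM: rate_t = exp(baseline + stimulus drive + history drive).
T, dt = 2000, 0.001                        # bins, bin width in seconds
stim = rng.standard_normal(T)
k = np.exp(-np.arange(20) / 5.0)           # stimulus filter (hypothetical)
h = -2.0 * np.exp(-np.arange(30) / 10.0)   # inhibitory spike-history filter
b = np.log(20.0)                           # baseline log-rate (~20 Hz)

spikes = np.zeros(T)
stim_drive = np.convolve(stim, k)[:T]
for t in range(T):
    # Dot recent spikes (most recent first) with the history filter.
    hist_drive = spikes[max(0, t - len(h)):t][::-1] @ h[:min(t, len(h))]
    rate = np.exp(b + stim_drive[t] + hist_drive)  # conditional intensity (Hz)
    spikes[t] = rng.poisson(rate * dt)

print(int(spikes.sum()), "spikes in", T * dt, "s")
```

Fitting such a model to data is a concave maximum-likelihood problem, which is why this model class is easy to fit; the simulation above is the generative direction.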

Many fundamental questions in theoretical neuroscience involve optimal decoding and the computation of Shannon information rates in populations of spiking neurons. In this paper, we apply methods from the asymptotic theory of statistical inference to obtain a clearer analytical understanding of these quantities. We find that for large neural populations…
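A numerical sketch of the asymptotic-inference viewpoint (all population parameters invented): for independent Poisson neurons with tuning curves f_i, the Fisher information J(θ) = Σ_i f_i'(θ)² / f_i(θ) governs decoding precision, and for a large population the maximum-likelihood decoder's variance should approach 1/J.

```python
import numpy as np

rng = np.random.default_rng(5)
N, sig, rmax, base = 200, 0.1, 10.0, 1.0    # hypothetical population parameters
centers = np.linspace(0.0, 1.0, N)          # preferred stimuli
theta0 = 0.5                                # true stimulus

bump = rmax * np.exp(-(theta0 - centers) ** 2 / (2 * sig ** 2))
f = bump + base                             # mean spike counts at theta0
fprime = bump * (centers - theta0) / sig ** 2
J = float(np.sum(fprime ** 2 / f))          # Fisher information at theta0

# Monte Carlo: variance of a grid-based ML decoder, compared against 1/J.
grid = np.linspace(0.3, 0.7, 801)
F = rmax * np.exp(-(grid[:, None] - centers) ** 2 / (2 * sig ** 2)) + base
logF, Fsum = np.log(F), F.sum(axis=1)
ests = np.empty(400)
for i in range(400):
    n = rng.poisson(f)                      # one population response
    ests[i] = grid[np.argmax(logF @ n - Fsum)]
var = ests.var()
print(var * J)                              # should be near 1 for large N
```

This is the Cramér–Rao / asymptotic-efficiency picture that the paper's analysis builds on; the paper itself goes further, to information rates.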

Imagine the vector y = Xβ + ε, where β ∈ R^m has only k nonzero entries and ε ∈ R^n is Gaussian noise. This can be viewed as a linear system with sparsity constraints corrupted by noise. We find a non-asymptotic upper bound on the error probability of exact recovery of the sparsity pattern given any generic measurement matrix X. By drawing X from a…

We analyze a model of social learning in which agents desire to identify an unknown state of the world using both their private observations and information they obtain when communicating with agents in their social neighborhood. Every agent holds a belief that represents her opinion on how likely it is for each of several possible states to be the true…
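A hedged sketch of a social-learning update of the kind described (the network, signal structure, and weights below are invented): each agent performs a Bayesian update on her own private signal, then linearly averages the result with her neighbors' current beliefs. Note that agent 0's signal is uninformative, so she can only learn through the network.

```python
import numpy as np

rng = np.random.default_rng(3)
true_state = 1                             # two possible states, 1 is true
n_agents = 4
# P(signal = 1 | state) per agent; agent 0 cannot distinguish the states alone.
lik1 = np.array([[0.5, 0.5], [0.3, 0.7], [0.7, 0.3], [0.4, 0.6]])
A = np.full((n_agents, n_agents), 0.25)    # row-stochastic averaging weights
beliefs = np.full((n_agents, 2), 0.5)      # uniform priors over the two states

for _ in range(300):
    # Each agent draws a private binary signal given the true state.
    s = (rng.random(n_agents) < lik1[np.arange(n_agents), true_state]).astype(int)
    like = np.where(s[:, None] == 1, lik1, 1 - lik1)   # P(signal | each state)
    bayes = beliefs * like
    bayes /= bayes.sum(axis=1, keepdims=True)          # own-signal Bayes update
    # Combine: own weight on the Bayes update, rest on neighbors' beliefs.
    diag = np.diag(A)[:, None]
    beliefs = diag * bayes + (A - np.diag(np.diag(A))) @ beliefs

print(beliefs[:, true_state])              # all agents' beliefs in the true state
```

Under connectivity and collective identifiability, updates of this form drive every agent's belief in the true state toward one, even for agents whose own signals are uninformative.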

This paper focuses on a model of opinion formation over networks with continuously flowing new information and studies the relationship between the network and information structures and agents' ability to reach agreement. At each time period, agents receive private signals in addition to observing the beliefs held by their neighbors in a network. Each…

Consider the n-dimensional vector y = Xβ + ε, where β ∈ R^p has only k nonzero entries and ε ∈ R^n is Gaussian noise. This can be viewed as a linear system with sparsity constraints, corrupted by noise. We find a non-asymptotic upper bound on the probability that the optimal decoder for β declares a wrong sparsity pattern, given any generic perturbation…
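The observation model shared by these sparsity abstracts can be simulated directly. The sketch below pairs it with orthogonal matching pursuit as the sparsity-pattern decoder; the dimensions, signal strength, and the choice of OMP (rather than the optimal decoder the papers analyze) are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse linear model y = X @ beta + eps, with beta k-sparse (hypothetical sizes).
n, p, k = 60, 100, 3
X = rng.standard_normal((n, p)) / np.sqrt(n)   # generic measurement matrix
beta = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta[support] = 5.0                            # strong nonzero entries
y = X @ beta + 0.1 * rng.standard_normal(n)    # Gaussian noise

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the current residual, then refit by least squares on the chosen set.
S, r = [], y.copy()
for _ in range(k):
    S.append(int(np.argmax(np.abs(X.T @ r))))
    coef, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
    r = y - X[:, S] @ coef

print(sorted(S), sorted(support.tolist()))     # estimated vs. true support
```

The papers' question is exactly when a decoder like this (or the optimal one) recovers the true support with high probability, as a function of n, p, k, and the noise level.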