
- Liam Paninski, Yashar Ahmadian, +5 authors Wei Wu
- Journal of Computational Neuroscience
- 2010

State space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational…

- Kamiar Rahnama Rad, Liam Paninski
- Network
- 2010

Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian…

In this paper, we present a model of distributed parameter estimation in networks, where agents have access to partially informative measurements over time. Each agent faces a local identification problem, in the sense that it cannot consistently estimate the parameter in isolation. We prove that, despite local identification problems, if agents update…

- Kamiar Rahnama Rad
- IEEE Transactions on Information Theory
- 2011

Consider the *n*-dimensional vector *y* = *X*β + ε, where β ∈ ℝ^*p* has only *k* nonzero entries and ε ∈ ℝ^*n* is Gaussian noise. This can be viewed as a linear system with sparsity constraints corrupted by noise, where the objective is to estimate the sparsity pattern of β…

- Taro Toyoizumi, Kamiar Rahnama Rad, Liam Paninski
- Neural Computation
- 2009

There has recently been a great deal of interest in inferring network connectivity from the spike trains in populations of neurons. One class of useful models that can be fit easily to spiking data is based on generalized linear point process models from statistics. Once the parameters for these models are fit, the analyst is left with a nonlinear spiking…

- Kamiar Rahnama Rad, Liam Paninski
- NIPS
- 2011

Many fundamental questions in theoretical neuroscience involve optimal decoding and the computation of Shannon information rates in populations of spiking neurons. In this paper, we apply methods from the asymptotic theory of statistical inference to obtain a clearer analytical understanding of these quantities. We find that for large neural populations…

- Pooya Molavi, Ali Jadbabaie, Kamiar Rahnama Rad, Alireza Tahbaz-Salehi
- IEEE Journal of Selected Topics in Signal…
- 2013

This paper focuses on a model of opinion formation over networks with continuously flowing new information and studies the relationship between the network and information structures and agents' ability to reach agreement. At each time period, agents receive private signals in addition to observing the beliefs held by their neighbors in a network. Each…

We analyze a model of social learning in which agents desire to identify an unknown state of the world using both their private observations and information they obtain when communicating with agents in their social neighborhood. Every agent holds a belief that represents her opinion on how likely it is for each of several possible states to be the true…

- Ioannis Kontoyiannis, Kamiar Rahnama Rad, Savvas Gitzenis
- 2010 IEEE Information Theory Workshop on…
- 2010

A new method is presented for the optimal or near-optimal quantization of memoryless Gaussian data. The basic construction of the codebook is motivated by related ideas in the statistical framework of sparse recovery in linear regression. Similarly, the encoding is performed by a convex-hull iterative algorithm. Preliminary theoretical results establish the…

- Kamiar Rahnama Rad
- 2009 43rd Annual Conference on Information…
- 2009

Imagine the vector *y* = *X*β + ε, where β ∈ ℝ^*m* has only *k* nonzero entries and ε ∈ ℝ^*n* is Gaussian noise. This can be viewed as a linear system with sparsity constraints corrupted by noise. We find a non-asymptotic upper bound on the error probability of exact recovery of the sparsity…