Given i.i.d. observations of a random vector X ∈ R^p, we study the problem of estimating both its covariance matrix Σ* and its inverse covariance or concentration matrix Θ* = (Σ*)^{−1}. When X is multivariate Gaussian, the non-zero structure of Θ* is specified by the graph of an associated Gaussian Markov random field, and a popular estimator for such …
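The graph connection mentioned above can be made concrete with a small simulation; everything below (the chain graph, the 0.4 partial correlations, the sample size) is a hypothetical illustration rather than the paper's setup.

import numpy as np

p = 5
# Hypothetical sparse precision matrix Theta* for a chain graph 1-2-3-4-5.
Theta_star = np.eye(p)
for i in range(p - 1):
    Theta_star[i, i + 1] = Theta_star[i + 1, i] = 0.4
Sigma_star = np.linalg.inv(Theta_star)

# i.i.d. samples X_1, ..., X_n ~ N(0, Sigma*).
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(p), Sigma_star, size=1000)

# For a Gaussian MRF, the edge set is exactly the support of the off-diagonal
# entries of Theta*: pairs with a zero entry are conditionally independent.
edges = [(i, j) for i in range(p) for j in range(i + 1, p) if Theta_star[i, j] != 0]
print("true edges:", edges)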
Consider the standard linear regression model Y = Xβ* + w, where Y ∈ R^n is an observation vector, X ∈ R^{n×d} is a design matrix, β* ∈ R^d is the unknown regression vector, and w ∼ N(0, σ^2 I) is additive Gaussian noise. This paper studies the minimax rates of convergence for estimation of β* for ℓp-losses and in the ℓ2-prediction loss, assuming that β* …
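For reference, the two loss families referred to above have the standard textbook definitions (up to normalization conventions; these formulas are not quoted from the truncated abstract):

\[
\bigl\|\widehat{\beta}-\beta^*\bigr\|_p = \Bigl(\sum_{j=1}^{d}\bigl|\widehat{\beta}_j-\beta^*_j\bigr|^p\Bigr)^{1/p},
\qquad
\text{(prediction loss)}\quad \frac{1}{\sqrt{n}}\,\bigl\|X(\widehat{\beta}-\beta^*)\bigr\|_2 .
\]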
Methods based on ℓ1-relaxation, such as basis pursuit and the Lasso, are very popular for sparse regression in high dimensions. The conditions for success of these methods are now well understood: (1) exact recovery in the noiseless setting is possible if and only if the design matrix X satisfies the restricted nullspace property, and (2) the squared ℓ …
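A minimal Lasso sketch of the sparse-regression setting discussed above; the design, sparsity level, noise level, and penalty weight are illustrative assumptions rather than the paper's choices.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, s = 100, 500, 5                        # high-dimensional regime: d >> n
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[:s] = 1.0                          # s-sparse regression vector
y = X @ beta_star + 0.5 * rng.standard_normal(n)

# Penalty on the familiar sigma * sqrt(log(d) / n) scale from this literature.
lam = 0.5 * np.sqrt(np.log(d) / n)
fit = Lasso(alpha=lam).fit(X, y)
print("estimated support:", np.flatnonzero(fit.coef_))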
Sparse additive models are families of d-variate functions that have the additive decomposition f* = Σ_{j∈S} f*_j, where S is an unknown subset of cardinality s ≪ d. We consider the case where each component function f*_j lies in a reproducing kernel Hilbert space, and analyze a simple kernel-based convex program for estimating the unknown function f*. …
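A simplified, backfitting-style illustration of the additive structure; this is a heuristic sketch, not the penalized kernel convex program analyzed in the paper, and the kernel parameters and screening threshold are assumptions.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2   # additive signal supported on S = {0, 1}

residual = y.copy()
selected = []
for j in range(d):
    # One univariate kernel smoother per coordinate, fitted to the current residual.
    fj = KernelRidge(kernel="rbf", alpha=0.1, gamma=2.0).fit(X[:, [j]], residual)
    pred = fj.predict(X[:, [j]])
    if np.sqrt(np.mean(pred ** 2)) > 0.1:    # crude sparsity screen (assumed threshold)
        selected.append(j)
        residual = residual - pred
print("selected components:", selected)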
Consider the high-dimensional linear regression model y = Xβ* + w, where y ∈ R^n is an observation vector, X ∈ R^{n×d} is a design matrix with d > n, β* ∈ R^d is an unknown regression vector, and w ∼ …
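The d > n regime is exactly where ordinary least squares breaks down, which is what motivates the structural assumptions studied in this line of work; a small numerical illustration (all sizes arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200
X = rng.standard_normal((n, d))
print("rank of X:", np.linalg.matrix_rank(X))    # at most n = 50 < d, so X^T X is singular

beta_star = np.zeros(d)
beta_star[:3] = 1.0
y = X @ beta_star
# Every vector in beta_star + null(X) fits y exactly, so the data alone cannot
# identify beta*; sparsity plus regularization (e.g. the Lasso) restores identifiability.
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimum-norm interpolating solution
print("interpolates the data:", np.allclose(X @ beta_ls, y))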
Given i.i.d. observations of a random vector X ∈ R^p, we study the problem of estimating both its covariance matrix Σ* and its inverse covariance or concentration matrix Θ* = (Σ*)^{−1}. We estimate Θ* by minimizing an ℓ1-penalized log-determinant Bregman divergence; in the multivariate Gaussian case, this approach corresponds to ℓ1-penalized maximum likelihood …
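The ℓ1-penalized log-determinant program is available in scikit-learn as GraphicalLasso; the sparse chain-graph precision below mirrors the earlier simulation, and the penalty level alpha is an illustrative choice, not one taken from the paper.

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p = 5
Theta_star = np.eye(p)                        # sparse chain-graph precision, as before
for i in range(p - 1):
    Theta_star[i, i + 1] = Theta_star[i + 1, i] = 0.4
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta_star), size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)     # alpha is the l1 penalty weight
Theta_hat = model.precision_                  # estimate of Theta* = (Sigma*)^{-1}
print(np.round(Theta_hat, 2))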
Given n observations of a p-dimensional random vector, the covariance matrix and its inverse (precision matrix) are needed in a wide range of applications. The sample covariance (e.g., its eigenstructure) can misbehave when p is comparable to the sample size n, and regularization is often used to mitigate the problem. In this paper, we propose an ℓ1-penalized …
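One way to see the misbehavior, and a common regularized fix: the comparison below uses Ledoit-Wolf shrinkage (not the ℓ1-penalized estimator this abstract proposes) simply because it ships with scikit-learn; the dimensions are illustrative.

import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n, p = 120, 100                               # p comparable to n
X = rng.standard_normal((n, p))               # true covariance is the identity

S = np.cov(X, rowvar=False)                   # sample covariance
lw = LedoitWolf().fit(X)                      # shrinkage-regularized estimate

# The sample eigenvalues spread far from 1, while the regularized ones stay close.
print("sample cov eigen-range: ", np.round(np.linalg.eigvalsh(S)[[0, -1]], 2))
print("Ledoit-Wolf eigen-range:", np.round(np.linalg.eigvalsh(lw.covariance_)[[0, -1]], 2))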
This paper considers fundamental limits for solving sparse inverse problems in the presence of Poisson noise with physical constraints. Such problems arise in a variety of applications, including photon-limited imaging systems based on compressed sensing (CS). Most prior theoretical results in CS and related inverse problems apply to idealized settings …
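A sketch of the photon-limited observation model: Poisson counts of a nonnegative sparse signal observed through a physically realizable (nonnegative) sensing matrix. The mask construction and all sizes below are assumptions for illustration, not the paper's setup.

import numpy as np

rng = np.random.default_rng(0)
n, d, s = 64, 256, 4
f_star = np.zeros(d)
f_star[rng.choice(d, size=s, replace=False)] = 25.0   # nonnegative sparse photon flux

# Physical constraint: sensing matrix entries must be nonnegative (e.g. a random mask),
# unlike the zero-mean designs common in idealized compressed-sensing analyses.
A = rng.binomial(1, 0.5, size=(n, d)) / n
y = rng.poisson(A @ f_star)                           # Poisson-noise measurements
print("mean photon count per measurement:", y.mean())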
We consider the problem of estimating the graph structure associated with a Gaussian Markov random field (GMRF) from i.i.d. samples. We study the performance of the ℓ1-regularized maximum likelihood estimator in the high-dimensional setting, where the number of nodes in the graph p, the number of edges in the graph s, and the maximum …
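Graph selection then reduces to reading off the support of the estimated precision matrix; the helper below continues the illustrative GraphicalLasso fit sketched earlier (its output Theta_hat), and the tolerance is an assumption.

def estimated_edges(Theta_hat, tol=1e-4):
    """Node pairs whose off-diagonal precision entries exceed tol in magnitude."""
    p = Theta_hat.shape[0]
    return [(i, j) for i in range(p) for j in range(i + 1, p)
            if abs(Theta_hat[i, j]) > tol]

# Compare estimated_edges(Theta_hat) with the true edge set to assess graph recovery.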
Consider the standard linear regression model Y = Xβ* + w, where Y ∈ R^n is an observation vector, X ∈ R^{n×d} is a design matrix, β* ∈ R^d is the unknown regression vector, and w ∼ N(0, σ^2 I) is additive Gaussian noise. This paper studies the minimax rates of convergence for estimation of β* for ℓp-losses and in the ℓ2-prediction loss, assuming that β* …
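As a point of reference for such rates, the benchmark in the exactly sparse (ℓ0) case, stated informally, up to constants, and under suitable conditions on the design, is standard in this literature rather than quoted from the truncated abstract:

\[
\inf_{\widehat{\beta}}\;\sup_{\|\beta^*\|_0 \le s}\; \mathbb{E}\,\bigl\|\widehat{\beta}-\beta^*\bigr\|_2^2 \;\asymp\; \frac{\sigma^2\, s\,\log(d/s)}{n},
\]

so consistent ℓ2 estimation requires s log(d/s)/n → 0.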