Iain Johnstone

We attempt to recover an n-dimensional vector observed in white noise, where n is large and the vector is known to be sparse, but the degree of sparsity is unknown. We consider three different ways of defining sparsity of a vector: using the fraction of nonzero terms; imposing power-law decay bounds on the ordered entries; and controlling the l_p norm for small p. …
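
For intuition, here is a minimal numerical sketch of the recovery setting, assuming soft thresholding at the universal threshold sqrt(2 log n) as one standard estimator; the signal, sparsity level, and threshold choice are illustrative assumptions, not the adaptive procedure of the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
k = 50  # number of nonzero entries; known here only to build the simulation
theta = np.zeros(n)
theta[:k] = 5.0  # illustrative nonzero values

y = theta + rng.standard_normal(n)  # observation in white noise, sigma = 1

# Soft thresholding at the universal threshold sqrt(2 log n)
t = np.sqrt(2 * np.log(n))
theta_hat = np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

print("squared error of y itself:    ", np.sum((y - theta) ** 2))
print("squared error after threshold:", np.sum((theta_hat - theta) ** 2))

Because most coordinates of theta are exactly zero, thresholding removes nearly all of the noise there, while the raw data y pays a full unit of variance in every coordinate.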
Consider estimating the mean vector θ from data N_n(θ, σ²I) with l_q norm loss, q ≥ 1, when θ is known to lie in an n-dimensional l_p ball, p ∈ (0, ∞). For large n, the ratio of minimax linear risk to minimax risk can be arbitrarily large if p < q. Obvious exceptions aside, the limiting ratio equals 1 only if p = q = 2. Our arguments are mostly indirect, involving a …
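
The gap between linear and nonlinear estimation can be seen in a small Monte Carlo experiment. The sketch below takes q = 2, puts θ at a "spiky" extreme point of an l_p ball with p < 2, and compares an oracle scalar shrinker c·y, used here as a simple stand-in for the full minimax linear family, against soft thresholding; all parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
n, sigma = 10_000, 1.0
C = 10.0                       # l_p radius: sum |theta_i|^p <= C^p, with p = 1/2

# Extreme point of the l_p ball: a single spike of height C.
theta = np.zeros(n)
theta[0] = C

def risk(estimator, reps=200):
    # Monte Carlo estimate of E ||estimate - theta||^2
    total = 0.0
    for _ in range(reps):
        y = theta + sigma * rng.standard_normal(n)
        total += np.sum((estimator(y) - theta) ** 2)
    return total / reps

# Oracle scalar linear shrinker c*y, minimizing risk over constants c.
c = np.sum(theta**2) / (np.sum(theta**2) + n * sigma**2)
linear = lambda y: c * y

# Soft thresholding at sqrt(2 log n), a simple nonlinear competitor.
t = np.sqrt(2 * np.log(n))
soft = lambda y: np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

print("linear risk:", risk(linear), " threshold risk:", risk(soft))

A linear rule must shrink every coordinate by the same factor, so it either leaves noise in the many zero coordinates or crushes the spike; the nonlinear threshold does neither, and the risk ratio grows as the configuration becomes more extreme.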
Suppose we have observations y_i = s_i + z_i, i = 1, …, n, where (s_i) is signal and (z_i) is i.i.d. Gaussian white noise. Suppose we have available a library L of orthogonal bases, such as the wavelet packet bases or the cosine packet bases of Coifman and Meyer. We wish to select, adaptively based on the noisy data (y_i), a basis in which best to recover the signal. …
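
As a toy version of basis selection, the sketch below uses a two-element "library" (the standard spike basis and the orthonormal DCT basis), scores each basis by the ideal-risk quantity Σ min(θ_i², σ²) with the noisy coefficients standing in for the unknown θ_i, and then thresholds in the winning basis. The two-basis library, the plug-in risk estimate, and the signal are assumptions of this sketch, not the paper's construction.

import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
n, sigma = 1024, 1.0
grid = np.arange(n) / n
s = 4 * np.cos(2 * np.pi * 8 * grid)        # smooth signal: sparse in the DCT basis
y = s + sigma * rng.standard_normal(n)

# Toy library of two orthonormal bases: (forward transform, inverse transform).
bases = {
    "spike": (lambda x: x, lambda x: x),
    "dct": (lambda x: dct(x, norm="ortho"), lambda x: idct(x, norm="ortho")),
}

def ideal_risk_proxy(coefs):
    # sum_i min(theta_i^2, sigma^2), with noisy coefficients replacing theta_i
    return np.sum(np.minimum(coefs**2, sigma**2))

# Select the basis with the smallest estimated ideal risk ...
name, (fwd, inv) = min(bases.items(), key=lambda kv: ideal_risk_proxy(kv[1][0](y)))
print("selected basis:", name)

# ... and denoise there by soft thresholding at sigma * sqrt(2 log n).
lam = sigma * np.sqrt(2 * np.log(n))
coefs = fwd(y)
s_hat = inv(np.sign(coefs) * np.maximum(np.abs(coefs) - lam, 0.0))
print("error before:", np.sum((y - s) ** 2), " after:", np.sum((s_hat - s) ** 2))

The score rewards a basis in which the signal's energy is concentrated in a few large coefficients, which is exactly when coordinatewise thresholding is effective.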
Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to approximate message passing (AMP) algorithms …
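
For concreteness, here is a minimal sketch of AMP with the scalar soft-thresholding denoiser on a Gaussian sensing matrix; the tuning constant alpha, the dimensions, and the noiseless measurement model are assumptions of this sketch, and the generalized denoisers the formula covers are not shown.

import numpy as np

rng = np.random.default_rng(3)
N, n = 500, 250              # signal length N, number of measurements n
k = 25                       # number of nonzero entries

x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = 3 * rng.standard_normal(k)
A = rng.standard_normal((n, N)) / np.sqrt(n)   # normalized Gaussian sensing matrix
y = A @ x0

def soft(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# AMP iteration with soft thresholding; alpha is an assumed tuning constant.
x, z = np.zeros(N), y.copy()
alpha = 1.5
for _ in range(30):
    tau = np.sqrt(np.mean(z**2))                  # empirical noise-level estimate
    x_new = soft(x + A.T @ z, alpha * tau)        # denoise the effective observation
    onsager = (np.count_nonzero(x_new) / n) * z   # Onsager correction term
    z = y - A @ x_new + onsager
    x = x_new

print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))

The Onsager term is what distinguishes AMP from plain iterative thresholding: it keeps the effective observation x + A.T @ z behaving like the true signal plus Gaussian noise across iterations, which is the property the undersampling formula exploits.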