Consider an m×N matrix Φ with the Restricted Isometry Property of order k and level δ; that is, the norm of any k-sparse vector in R^N is preserved to within a multiplicative factor of 1±δ under application of Φ. We show that by randomizing the column signs of such a matrix Φ, the resulting map with high probability embeds any fixed set of p = O(e^k) points in R^N …
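The column-sign randomization above is just right-multiplication of Φ by a random diagonal ±1 matrix. A minimal NumPy sketch of the construction, using a Gaussian matrix as a stand-in for an RIP matrix (an assumption for illustration; the result applies to any RIP matrix, e.g. a subsampled Fourier matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, p = 512, 1024, 10

# Gaussian matrix as a stand-in for an RIP matrix (assumption: the theorem
# applies equally to structured RIP matrices such as subsampled Fourier).
Phi = rng.standard_normal((m, N)) / np.sqrt(m)

# Randomize column signs: Phi -> Phi D with D = diag(signs), signs in {-1, +1}.
signs = rng.choice([-1.0, 1.0], size=N)
PhiD = Phi * signs  # same as Phi @ np.diag(signs), but cheaper

# Embed a fixed set of p points and measure worst pairwise-distance distortion.
X = rng.standard_normal((N, p))
Y = PhiD @ X
worst = 0.0
for i in range(p):
    for j in range(i + 1, p):
        d_orig = np.linalg.norm(X[:, i] - X[:, j])
        d_emb = np.linalg.norm(Y[:, i] - Y[:, j])
        worst = max(worst, abs(d_emb / d_orig - 1.0))
print(f"worst pairwise distortion: {worst:.3f}")
```

The sign randomization is what turns a fixed RIP matrix into a Johnson–Lindenstrauss-type embedding valid for an arbitrary fixed point set, not just sparse vectors.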
We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements fulfill …
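A minimal sketch of this style of iteratively reweighted least squares, under simplifying assumptions (Gaussian measurements of a vectorized matrix, a hand-picked ε schedule, and an exact rank-1 ground truth; the paper's algorithm and analysis are more refined). Each step solves a weighted least-norm problem whose weight W = (XX^T + εI)^{-1/2} makes tr(W X X^T) a smoothed surrogate for the nuclear norm:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, m = 8, 8, 30          # 64 unknowns, only 30 measurements
X_true = np.outer(rng.standard_normal(n1), rng.standard_normal(n2))  # rank 1
A = rng.standard_normal((m, n1 * n2))
y = A @ X_true.ravel()

X = np.zeros((n1, n2))
eps = 1.0
for _ in range(30):
    # Weight W = (X X^T + eps I)^{-1/2}; tr(W X X^T) approximates the
    # nuclear norm of X as eps -> 0. We need K^{-1} with K = kron(W, I).
    w, V = np.linalg.eigh(X @ X.T + eps * np.eye(n1))
    Kinv = np.kron(V @ np.diag(np.sqrt(w)) @ V.T, np.eye(n2))
    # Weighted least-norm step: minimize x^T K x subject to A x = y,
    # with closed form x = K^{-1} A^T (A K^{-1} A^T)^{-1} y.
    x = Kinv @ A.T @ np.linalg.solve(A @ Kinv @ A.T, y)
    X = x.reshape(n1, n2)
    eps = max(eps / 10, 1e-9)   # assumed schedule; the paper adapts eps

print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```

Each iterate interpolates the measurements exactly; the reweighting progressively concentrates the solution's spectrum, which is what promotes low rank.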
This article presents near-optimal guarantees for accurate and robust image recovery from under-sampled noisy measurements using total variation minimization, and our results may be the first of this kind. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of …
We obtain an improved finite-sample guarantee on the linear convergence of stochastic gradient descent for smooth and strongly convex objectives, improving from a quadratic dependence on the conditioning (L/μ) (where L is a bound on the smoothness and μ on the strong convexity) to a linear dependence on L/μ. Furthermore, we show how reweighting the sampling …
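One concrete instance of reweighted sampling is the randomized Kaczmarz method for a consistent least-squares problem, where sampling rows with probability proportional to their squared norms (rather than uniformly) yields linear convergence. A small sketch under assumed parameters (Gaussian data, a noiseless system so the optimum has zero residual):

```python
import numpy as np

rng = np.random.default_rng(2)
n_rows, n_cols = 100, 20
A = rng.standard_normal((n_rows, n_cols))
x_true = rng.standard_normal(n_cols)
b = A @ x_true  # consistent system: zero residual at the optimum

# Reweighted sampling: pick row i with probability ~ ||a_i||^2.
# (For Gaussian rows all norms are comparable; the weighting matters
# most for badly scaled data.)
row_norms_sq = np.sum(A * A, axis=1)
probs = row_norms_sq / row_norms_sq.sum()

x = np.zeros(n_cols)
for _ in range(2000):
    i = rng.choice(n_rows, p=probs)
    a = A[i]
    # Kaczmarz update = SGD step on f_i(x) = (a.x - b_i)^2
    # with step size 1 / ||a_i||^2.
    x += (b[i] - a @ x) / row_norms_sq[i] * a
print("distance to optimum:", np.linalg.norm(x - x_true))
```

With uniform sampling the rate degrades with the worst row; the norm-proportional weights replace that worst-case dependence with an average one.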
In many signal processing applications, one wishes to acquire images that are sparse in transform domains such as spatial finite differences or wavelets using frequency domain samples. For such applications, overwhelming empirical evidence suggests that superior image reconstruction can be obtained through variable density sampling strategies that …
We consider the problem of recovering polynomials that are sparse with respect to the basis of Legendre polynomials from a small number of random samples. In particular, we show that a Legendre s-sparse polynomial of maximal degree N can be recovered from m ≳ s log(N) random samples that are chosen independently according to the Chebyshev probability measure …
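The sampling scheme is easy to sketch: draw points from the Chebyshev measure via x = cos(πu) with u uniform, build the Legendre sampling matrix, and precondition its rows by (1 − x²)^{1/4} so that the system is well suited to sparse recovery. Below, a hypothetical demo with assumed sizes (N = 64, s = 3, m = 48) that uses orthogonal matching pursuit as an illustrative sparse solver; the paper's guarantees are stated for ℓ1 minimization, so OMP is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)
N, s, m = 64, 3, 48   # max degree N-1, sparsity s, number of samples

# Chebyshev-measure samples on [-1, 1]: x = cos(pi * u), u uniform.
x = np.cos(np.pi * rng.random(m))

# Legendre sampling matrix, rows preconditioned by (1 - x^2)^{1/4}.
Phi = np.polynomial.legendre.legvander(x, N - 1)
Phi *= ((1.0 - x ** 2) ** 0.25)[:, None]

# Random s-sparse Legendre coefficient vector and its preconditioned samples.
c_true = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
c_true[support] = rng.standard_normal(s)
y = Phi @ c_true

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit on the selected columns.
col_norms = np.linalg.norm(Phi, axis=0)
sel, r = [], y.copy()
for _ in range(s):
    j = int(np.argmax(np.abs(Phi.T @ r) / col_norms))
    sel.append(j)
    coef, *_ = np.linalg.lstsq(Phi[:, sel], y, rcond=None)
    r = y - Phi[:, sel] @ coef
c_hat = np.zeros(N)
c_hat[sel] = coef
print("support found:", sorted(sel), "true:", sorted(support.tolist()))
```

The (1 − x²)^{1/4} weight compensates for the growth of Legendre polynomials near ±1, which is what makes Chebyshev-distributed samples the right choice here.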
Compressed sensing (CS) decoding algorithms can efficiently recover an N-dimensional real-valued vector x to within a factor of its best k-term approximation by taking m = O(k log(N/k)) measurements y = Φx. If the sparsity or approximate sparsity level of x were known, then this theoretical guarantee would imply quality assurance of the resulting …
Free-discontinuity problems describe situations where the solution of interest is defined by a function and a lower dimensional set consisting of the discontinuities of the function. Hence, the derivative of the solution is assumed to be a ‘small’ function almost everywhere except on sets where it concentrates as a singular measure. This is the case, for …
Using tools from semiclassical analysis, we give weighted L∞ estimates for eigenfunctions of strictly convex surfaces of revolution. These estimates give rise to new sampling techniques and provide improved bounds on the number of samples necessary for recovering sparse eigenfunction expansions on surfaces of revolution. On the sphere, our estimates imply …