
- Roman Vershynin
- ArXiv
- 2010

This is a tutorial on some basic non-asymptotic methods and concepts in random matrix theory. The reader will learn several tools for the analysis of the extreme singular values of random matrices with independent rows or columns. Many of these methods sprang from the development of geometric functional analysis since the 1970s. They have…

- Deanna Needell, Roman Vershynin
- Foundations of Computational Mathematics
- 2009

This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements: L<sub>1</sub>-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has the advantages of both approaches: the speed and transparency of OMP and…
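The greedy side of this comparison can be made concrete with a few lines of code. Below is a minimal sketch of plain Orthogonal Matching Pursuit, not the regularized ROMP variant the paper introduces; the function and variable names are illustrative, and it assumes a measurement matrix `A` with reasonably incoherent columns.

```python
import numpy as np

def omp(A, y, s):
    """Plain Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares on the
    support found so far. (ROMP adds a regularized selection step.)"""
    m, n = A.shape
    support = []
    residual = y.copy()
    x = np.zeros(n)
    for _ in range(s):
        # index of the column best correlated with the current residual
        i = int(np.argmax(np.abs(A.T @ residual)))
        if i not in support:
            support.append(i)
        # least-squares re-fit of the coefficients on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x
```

When the columns of `A` are orthonormal, each greedy step provably selects a true support index, so an s-sparse vector is recovered exactly after s iterations.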

The Kaczmarz method for solving linear systems of equations is an iterative algorithm that has found many applications ranging from computed tomography to digital signal processing. Despite the popularity of this method, useful theoretical estimates for its rate of convergence are still scarce. We introduce a randomized version of the Kaczmarz method for…
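The randomized iteration described above admits a compact sketch: at each step, pick a row at random and project the current iterate onto the hyperplane that row defines. This is a minimal version assuming rows are sampled with probability proportional to their squared norm (the weighting used in the randomized variant); names and defaults are illustrative.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=1000, seed=0):
    """Randomized Kaczmarz iteration for a consistent system Ax = b:
    sample row i with probability ||a_i||^2 / ||A||_F^2, then project
    the iterate onto the hyperplane <a_i, x> = b_i."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.sum(A**2, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        # orthogonal projection onto {x : <a, x> = b_i}
        x = x + (b[i] - a @ x) / row_norms_sq[i] * a
    return x
```

For a consistent, well-conditioned system the expected error contracts geometrically, so a few thousand cheap row projections already reach machine precision on small examples.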

This paper improves upon best-known guarantees for exact reconstruction of a sparse signal f from a small universal sample of Fourier measurements. The method for reconstruction that has recently gained momentum in the sparse approximation theory is to relax this highly nonconvex problem to a convex problem and then solve it as a linear program. We show…

- Yaniv Plan, Roman Vershynin
- IEEE Transactions on Information Theory
- 2013

This paper develops theoretical results regarding noisy 1-bit compressed sensing and sparse binomial regression. We demonstrate that a single convex program gives an accurate estimate of the signal, or coefficient vector, for both of these models. We show that an s-sparse signal in R<sup>n</sup> can be accurately estimated from m = O(s log(n/s)) single-bit measurements…

- Yurii Lyubarskii, Roman Vershynin
- IEEE Transactions on Information Theory
- 2010

Given a frame in C<sup>n</sup> which satisfies a form of the uncertainty principle (as introduced by Candès and Tao), it is shown how to quickly convert the frame representation of every vector into a more robust Kashin's representation, whose coefficients all have the smallest possible dynamic range O(1/√<i>n</i>). The information tends to…

- Emmanuel J. Candès, Mark Rudelson, Terence Tao, Roman Vershynin
- 46th Annual IEEE Symposium on Foundations of…
- 2005

Suppose we wish to transmit a vector f ∈ R<sup>n</sup> reliably. A frequently discussed approach consists in encoding f with an m by n coding matrix A. Assume now that a fraction of the entries of Af are corrupted in a completely arbitrary fashion by an error e. We do not know which entries are affected nor do we know how they are affected. Is it…

- Yaniv Plan, Roman Vershynin
- ArXiv
- 2011

We give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x ∈ R<sup>n</sup> from the signs of O(s log<sup>2</sup>(n/s)) random linear measurements of x. The recovery is achieved by a simple linear program. This result extends to approximately sparse vectors x. Our…
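To make the one-bit measurement model concrete, here is a small simulation. It uses a simple normalized backprojection estimate, x_hat ∝ A<sup>T</sup>y, as a baseline rather than the paper's linear program (which would require an LP solver); all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s, m = 20, 3, 5000

# s-sparse signal, normalized to unit norm
x = np.zeros(n)
x[:s] = rng.standard_normal(s)
x /= np.linalg.norm(x)

# one-bit measurements: only the signs of random Gaussian projections
A = rng.standard_normal((m, n))
y = np.sign(A @ x)

# backprojection baseline: average of sign-weighted measurement rows
x_hat = (A.T @ y) / m
x_hat /= np.linalg.norm(x_hat)

# the estimate aligns with the true direction; the magnitude of x is
# unrecoverable, since the signs are invariant under positive scaling
correlation = float(x @ x_hat)
```

Even this crude averaging recovers the direction of x well once m is large, which illustrates why only directional (not magnitude) guarantees are possible in the one-bit setting.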

The classical random matrix theory is mostly focused on asymptotic spectral properties of random matrices as their dimensions grow to infinity. At the same time, many recent applications from convex geometry to functional analysis to information theory operate with random matrices in fixed dimensions. This survey addresses the non-asymptotic theory of…

We prove two basic conjectures on the distribution of the smallest singular value of random n×n matrices with independent entries. Under minimal moment assumptions, we show that the smallest singular value is of order n<sup>−1/2</sup>, which is optimal for Gaussian matrices. Moreover, we give an optimal estimate on the tail probability. This comes as a consequence of…