# Recovery of Binary Sparse Signals With Biased Measurement Matrices

    @article{Flinth2019RecoveryOB,
      title={Recovery of Binary Sparse Signals With Biased Measurement Matrices},
      author={Axel Flinth and Sandra Keiper},
      journal={IEEE Transactions on Information Theory},
      year={2019},
      volume={65},
      pages={8084-8094}
    }

This paper treats the recovery of sparse, binary signals through box-constrained basis pursuit using biased measurement matrices. Using a probabilistic model, we provide conditions under which the recovery of both sparse and saturated binary signals is very likely. We also show that, under the same conditions, the solution of the box-constrained basis pursuit program can be found using box-constrained least squares.
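As a rough illustration of the box-constrained least-squares approach described above (not the authors' code; the dimensions, the Bernoulli 0/1 measurement model, and the final rounding step are illustrative choices), a minimal sketch in Python with SciPy:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n, m, s = 100, 60, 5  # signal length, number of measurements, sparsity

# Biased measurement matrix: i.i.d. 0/1 entries (nonzero mean)
A = rng.integers(0, 2, size=(m, n)).astype(float)

# Sparse binary ground-truth signal
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = 1.0
y = A @ x

# Box-constrained least squares: minimize ||Az - y||_2 subject to 0 <= z <= 1
res = lsq_linear(A, y, bounds=(0.0, 1.0))
x_hat = np.round(res.x)  # round to the binary alphabet {0, 1}
```

When the conditions of the paper hold, the true binary signal is the unique point in the box satisfying the measurements, so the box-constrained least-squares minimizer coincides with it and the rounding step is merely a numerical clean-up.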

## 7 Citations

### Recovery of Binary Sparse Signals from Structured Biased Measurements

- Computer Science, ArXiv
- 2020

The reconstruction of binary sparse signals from partial random circulant measurements through the least-squares algorithm is shown to be as good as reconstruction via the commonly used basis pursuit program.
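The structured setting of this follow-up work can be sketched in the same way (again an illustrative toy, not the cited paper's code; the generating vector, subsampling pattern, and dimensions are assumptions): a partial random circulant matrix with biased 0/1 entries, followed by box-constrained least squares.

```python
import numpy as np
from scipy.linalg import circulant
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
n, m, s = 128, 80, 4

# Partial random circulant matrix: circulant from a biased 0/1 generating
# vector, then keep a random subset of m rows
c = rng.integers(0, 2, size=n).astype(float)
rows = rng.choice(n, m, replace=False)
A = circulant(c)[rows]

# Sparse binary ground-truth signal and its measurements
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = 1.0
y = A @ x

# Least-squares reconstruction over the box [0, 1]^n, rounded to {0, 1}
res = lsq_linear(A, y, bounds=(0.0, 1.0))
x_hat = np.round(res.x)
```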

### Sparse Non-Negative Recovery from Biased Subgaussian Measurements using NNLS

- Computer Science, ArXiv
- 2019

It is shown that the NSP holds with high probability for biased subgaussian matrices and that its quality is independent of the bias.
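The NNLS recovery scheme this entry refers to can be sketched as follows (a hypothetical toy setup, not the cited paper's experiments; the shift `mu`, the dimensions, and the support distribution are assumptions). The point is that plain nonnegative least squares, with no explicit sparsity penalty, recovers a sparse nonnegative vector when the measurement matrix is biased:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n, m, s = 100, 50, 5

# Biased subgaussian matrix: i.i.d. Gaussian entries shifted by mu > 0
mu = 1.0
A = mu + rng.standard_normal((m, n))

# Sparse nonnegative ground-truth signal
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.uniform(0.5, 2.0, s)
y = A @ x

# Nonnegative least squares: minimize ||Az - y||_2 subject to z >= 0
x_hat, rnorm = nnls(A, y)
```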

### Sparse Non-Negative Recovery from Shifted Symmetric Subgaussian Measurements using NNLS

- Computer Science, 2019 IEEE International Symposium on Information Theory (ISIT)
- 2019

It is shown that the NSP holds with high probability for shifted symmetric subgaussian matrices and that its quality is independent of the bias; a debiased version of Mendelson’s small ball method is also established.

### Sparse Recovery With Integrality Constraints

- Computer Science, Discret. Appl. Math.
- 2020

### Approximation of signals and functions in high dimensions with low dimensional structure: finite-valued sparse signals and generalized ridge functions

- Computer Science, Mathematics
- 2020

In this thesis, we consider the following class of high dimensional functions f : X → ℝ, f(x) = g(dist(x, L)), where L is a linear subspace of a usually high dimensional space X = ℝ^N and g : Ω ⊂ ℝ →…

### Message Passing Based Joint Channel and User Activity Estimation for Uplink Grant-Free Massive MIMO Systems With Low-Precision ADCs

- Computer Science, IEEE Signal Processing Letters
- 2020

A new algorithm leveraging hybrid generalized approximate message passing (HyGAMP) is developed, comprising a GAMP part (channel estimation) and a loopy belief propagation (LBP) part (user activity detection), in which the strong correlation among elements in each row of the channel matrix can be decoupled.

### Quantitative Group Testing and the rank of random matrices

- Mathematics, Computer Science, ArXiv
- 2020

Theoretical analysis and simulations show that the modified algorithms solve the QGT problem for values of $m$ smaller than those required by the original algorithms.

## References

Showing 1–10 of 15 references

### PROMP: A sparse recovery approach to lattice-valued signals

- Computer Science, Applied and Computational Harmonic Analysis
- 2018

### Error correction via linear programming

- Computer Science, 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05)
- 2005

Suppose we wish to transmit a vector f ∈ ℝⁿ reliably. A frequently discussed approach consists in encoding f with an m by n coding matrix A. Assume now that a fraction of the entries of Af are…

### A simple performance analysis of ℓ1 optimization in compressed sensing

- Computer Science, 2009 IEEE International Conference on Acoustics, Speech and Signal Processing
- 2009

A novel, very simple technique is introduced for proving that if the number of measurements is proportional to the length of the signal, then there is a sparsity level, itself proportional to the signal length, at which the success of ℓ1 optimization is guaranteed.
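The ℓ1 optimization (basis pursuit) analyzed in this reference can be sketched as a linear program via the standard split z = u − v with u, v ≥ 0 (an illustrative toy, not the cited paper's setup; the Gaussian measurement model and dimensions are assumptions):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m, s = 80, 40, 4

A = rng.standard_normal((m, n))   # Gaussian measurement matrix
x = np.zeros(n)                   # sparse ground-truth signal
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x

# Basis pursuit: min ||z||_1 s.t. Az = y, rewritten as the LP
#   min 1^T (u, v)  s.t.  A u - A v = y,  u >= 0, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
z_hat = res.x[:n] - res.x[n:]
```

In the regime the reference describes (measurements proportional to the signal length, sparsity below the guaranteed threshold), the LP solution coincides with the true sparse vector.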

### An Introduction to Compressed Sensing

- Computer Science
- 2019

This book aims to provide an in-depth initiation to the field of compressed sensing; specific topics include material on graph theory and the design of binary measurement matrices, matrix recovery and completion, and optimization algorithms.

### Robust Nonnegative Sparse Recovery and the Nullspace Property of 0/1 Measurements

- Computer Science, IEEE Transactions on Information Theory
- 2018

This work deduces uniform and robust compressed sensing guarantees for nonnegative least squares by establishing the robust nullspace property for random 0/1-matrices—a novel result in its own right.

### Spatial Compressive Sensing for MIMO Radar

- Computer Science, IEEE Transactions on Signal Processing
- 2014

The coherence and isotropy concepts are used to establish uniform and non-uniform recovery guarantees within the proposed spatial compressive sensing framework. It is shown that non-uniform recovery is guaranteed if the product of the number of transmit and receive elements, MN, scales with K(log G)^2; this product is proportional to the array aperture and determines the angle resolution.

### Counting faces of randomly-projected polytopes when the projection radically lowers dimension

- Computer Science
- 2006

This paper develops asymptotic methods to count faces of random high-dimensional polytopes; a seemingly dry and unpromising pursuit that has surprising implications in statistics, probability, information theory, and signal processing with potential impacts in practical subjects like medical imaging and digital communications.

### Living on the edge: phase transitions in convex programs with random data

- Computer Science
- 2013

This paper provides the first rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems and introduces a summary parameter, called the statistical dimension, that canonically extends the dimension of a linear subspace to the class of convex cones.