# Sparse Approximate Solutions to Linear Systems

@article{Natarajan1995SparseAS, title={Sparse Approximate Solutions to Linear Systems}, author={Balas K. Natarajan}, journal={SIAM J. Comput.}, year={1995}, volume={24}, pages={227-234} }

The following problem is considered: given a matrix $A$ in ${\bf R}^{m \times n}$ ($m$ rows, $n$ columns), a vector $b$ in ${\bf R}^m$, and $\epsilon > 0$, compute a vector $x$ satisfying $\| Ax - b \|_2 \leq \epsilon$, if such a vector exists, with the fewest non-zero entries over all such vectors. It is shown that the problem is NP-hard, but that the well-known greedy heuristic is good in that it computes a solution with at most $\left\lceil 18 \mbox{ Opt}({\bf…
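The greedy heuristic analyzed here is, in modern terms, an orthogonal-matching-pursuit-style procedure: repeatedly pick the column most correlated with the current residual, then re-fit by least squares on the chosen support. A minimal sketch assuming a NumPy environment (the function name and demo dimensions are illustrative, not from the paper):

```python
import numpy as np

def greedy_sparse_approx(A, b, eps):
    """Greedy heuristic for  min ||x||_0  s.t.  ||Ax - b||_2 <= eps.
    Illustrative sketch, not Natarajan's exact formulation."""
    m, n = A.shape
    support = []
    x = np.zeros(n)
    residual = b.astype(float).copy()
    while np.linalg.norm(residual) > eps and len(support) < n:
        scores = np.abs(A.T @ residual)       # correlation with residual
        scores[support] = -np.inf             # never re-pick a chosen column
        support.append(int(np.argmax(scores)))
        # re-fit on the chosen support via least squares
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = b - A @ x
    return x

# small demo: b is an exact combination of two columns of A
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x = greedy_sparse_approx(A, b, eps=1e-8)
```

Because the support is re-fit by least squares at every step, the residual is always orthogonal to the columns selected so far, which is what distinguishes this variant from plain matching pursuit.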

## 2,654 Citations

### A Perturbation Inequality for the Schatten-$p$ Quasi-Norm and Its Applications to Low-Rank Matrix Recovery

- Mathematics, ArXiv
- 2012

A perturbation inequality for the so-called Schatten $p$-quasi-norm is obtained, which confirms the validity of a number of previously conjectured conditions for the recovery of low-rank matrices via the popular Schatten $p$-norm heuristic.

### Sparse representation of vectors in lattices and semigroups

- Computer Science, Mathematics, Math. Program.
- 2022

The authors' bounds can be seen as functions naturally generalizing the rank of a matrix over R, to other subdomains such as Z, where these new rank-like functions are all NP-hard to compute in general, but polynomial-time computable for fixed number of variables.

### Recovery of sparsest signals via $\ell^q$-minimization

- Mathematics, Computer Science
- 2010

It is proved that every $s$-sparse vector in $\mathbf{R}^n$ can be exactly recovered from the measurement vector $z$ via some $\ell^q$-minimization with $0 < q \le 1$.

### Sparse Convex Optimization via Adaptively Regularized Hard Thresholding

- Computer Science, ICML
- 2020

A new Adaptively Regularized Hard Thresholding (ARHT) algorithm that makes significant progress on this problem by bringing the bound down to $\gamma=O(\kappa)$, which has been shown to be tight for a general class of algorithms including LASSO, OMP, and IHT.
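For context, iterative hard thresholding (IHT), one of the baseline algorithms named above, alternates a gradient step on the least-squares loss with a projection onto $s$-sparse vectors. A minimal sketch, assuming NumPy (the function name and demo dimensions are hypothetical, for illustration only):

```python
import numpy as np

def iht(A, b, s, iters=500):
    """Iterative hard thresholding sketch: gradient step on
    0.5 * ||Ax - b||^2, then keep only the s largest-magnitude entries."""
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(n)
    for _ in range(iters):
        x = x + step * (A.T @ (b - A @ x))   # gradient step
        small = np.argsort(np.abs(x))[:-s]   # all but the s largest entries
        x[small] = 0.0                       # hard-threshold to s-sparse
    return x

# demo: recover a 3-sparse signal from a well-conditioned system
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[1, 8, 15]] = [2.0, -1.0, 0.5]
b = A @ x_true
x = iht(A, b, s=3)
```

The projection step is exactly the "hard thresholding" the name refers to; the ARHT paper's contribution is an adaptive regularization on top of this basic loop.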

### Recovery of Sparse Representations by Polytope Faces Pursuit

- Computer Science, ICA
- 2006

The proposed algorithm, which is based on the geometry of the polar polytope, is called Polytope Faces Pursuit and produces good results on examples that are known to be hard for MP, and it is faster than the interior point method for BP on the experiments presented.

### Sparse Regression via Range Counting

- Computer Science, Mathematics, SWAT
- 2020

This work describes an $O(n^{k-1} \log^{d-k+2} n)$-time randomized $(1+\varepsilon)$-approximation algorithm for the sparse regression problem, and provides a simple $O_\delta(n^{k-1+\delta})$-time deterministic exact algorithm, for any $\delta > 0$.

### Complexity of unconstrained $L_2$-$L_p$ minimization

- Mathematics, Computer Science, Math. Program.
- 2014

Theoretical results show that the minimizers of the $L_2$-$L_p$ minimization problem have various attractive features due to the concavity and non-Lipschitzian property of the regularization function.

### On the Power of Preconditioning in Sparse Linear Regression

- Computer Science, 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS)
- 2022

The preconditioned Lasso can solve a large class of sparse linear regression problems nearly optimally: it succeeds whenever the dependency structure of the covariates, in the sense of the Markov property, has low treewidth - even if $\Sigma$ is highly ill-conditioned.

### Deterministic Sparse Column Based Matrix Reconstruction via Greedy Approximation of SVD

- Computer Science, ISAAC
- 2008

This is the first deterministic algorithm with performance guarantees on the number of columns and a $(1 + \epsilon)$ approximation ratio in Frobenius norm, and it is obtained by combining the well-known sparse approximation problem from information theory with an existence result on the possibility of sparse approximation.

## References


### Approximation algorithms for combinatorial problems

- Computer Science, Mathematics, STOC
- 1973

For the problem of finding the maximum clique in a graph, no algorithm has been found for which the ratio does not grow at least as fast as $O(n^\epsilon)$, where $n$ is the problem size and $\epsilon > 0$ depends on the algorithm.

### Rank degeneracy and least squares problems

- Mathematics
- 1976

This paper is concerned with least squares problems when the least squares matrix A is near a matrix that is not of full rank. A definition of numerical rank is given. It is shown that under certain…

### Computers and Intractability: A Guide to the Theory of NP-Completeness

- Computer Science
- 1978

It is proved here that the number of rules in any irredundant Horn knowledge base involving $n$ propositional variables is at most $n - 1$ times the minimum possible number of rules.

### Computational geometry: an introduction

- Physics
- 1985

This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry.

### Occam's razor for functions

- Computer Science, COLT '93
- 1993

It is shown that the existence of an Occam approximation is sufficient to guarantee the probably approximate learnability of classes of functions on the reals, even in the presence of arbitrarily large but random additive noise.

### Information Theory and Reliable Communication

- Computer Science
- 1968

This chapter discusses Coding for Discrete Sources, Techniques for Coding and Decoding, and Source Coding with a Fidelity Criterion.

### Interpolation of scattered data: Distance matrices and conditionally positive definite functions

- Mathematics
- 1986

Among other things, we prove that multiquadric surface interpolation is always solvable, thereby settling a conjecture of R. Franke.

### Theory and applications of the multi-quadric biharmonic method

- Comput. Math. Appl.
- 1990

### The null space problem I. Complexity

- SIAM J. Alg. Disc. Meth.
- 1986