# Sparse Approximate Solutions to Linear Systems

@article{Natarajan1995SparseAS,
title={Sparse Approximate Solutions to Linear Systems},
author={Balas K. Natarajan},
journal={SIAM J. Comput.},
year={1995},
volume={24},
pages={227-234}
}
• B. Natarajan
• Published 1 April 1995
• Computer Science
• SIAM J. Comput.
The following problem is considered: given a matrix $A$ in ${\bf R}^{m \times n}$ ($m$ rows and $n$ columns), a vector $b$ in ${\bf R}^m$, and $\epsilon > 0$, compute a vector $x$ satisfying $\| Ax - b \|_2 \leq \epsilon$ if such exists, such that $x$ has the fewest number of non-zero entries over all such vectors. It is shown that the problem is NP-hard, but that the well-known greedy heuristic is good in that it computes a solution with at most $\left\lceil 18 \mbox{ Opt} ({\bf…

## 2,654 Citations

• Mathematics
ArXiv
• 2012
A perturbation inequality for the so-called Schatten $p$-quasi-norm is obtained, which allows the validity of a number of previously conjectured conditions for the recovery of low-rank matrices via the popular Schatten $p$-norm heuristic to be confirmed.
• Computer Science, Mathematics
Math. Program.
• 2022
The authors' bounds can be seen as functions naturally generalizing the rank of a matrix over ${\bf R}$ to other subdomains such as ${\bf Z}$, where these new rank-like functions are all NP-hard to compute in general, but polynomial-time computable for a fixed number of variables.
• Qiyu Sun
• Mathematics, Computer Science
• 2010
It is proved that every $s$-sparse vector in ${\bf R}^n$ can be exactly recovered from the measurement vector $z$ via some $\ell^q$-minimization with $0 < q \le 1$.
• Computer Science
ICML
• 2020
A new Adaptively Regularized Hard Thresholding (ARHT) algorithm that makes significant progress on this problem by bringing the bound down to $\gamma=O(\kappa)$, which has been shown to be tight for a general class of algorithms including LASSO, OMP, and IHT.
The proposed algorithm, which is based on the geometry of the polar polytope, is called Polytope Faces Pursuit and produces good results on examples that are known to be hard for MP, and it is faster than the interior point method for BP on the experiments presented.
• Computer Science, Mathematics
SWAT
• 2020
This work describes an $O(n^{k-1} \log^{d-k+2} n)$-time randomized $(1+\varepsilon)$-approximation algorithm for the sparse regression problem, and provides a simple $O_\delta(n^{k-1+\delta})$-time deterministic exact algorithm, for any $\delta > 0$.
• Mathematics, Computer Science
Math. Program.
• 2014
Theoretical results show that the minimizers of the $L_q$-$L_p$ minimization problem have various attractive features due to the concavity and non-Lipschitzian property of the regularization function.
• Computer Science
2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS)
• 2022
The preconditioned Lasso can solve a large class of sparse linear regression problems nearly optimally: it succeeds whenever the dependency structure of the covariates, in the sense of the Markov property, has low treewidth, even if the covariance matrix $\Sigma$ is highly ill-conditioned.
• Computer Science
ISAAC
• 2008
This is the first deterministic algorithm with performance guarantees on the number of columns and a $(1 + \epsilon)$ approximation ratio in Frobenius norm; it is obtained by combining the well-known sparse approximation problem from information theory with an existence result on the possibility of sparse approximation.
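The greedy heuristic analyzed in the abstract (repeatedly select the column best correlated with the current residual, refit, and stop once the residual drops below $\epsilon$) can be sketched in NumPy as follows. This is an illustrative orthogonal-matching-pursuit-style sketch under my own naming, not Natarajan's exact procedure or code:

```python
import numpy as np

def greedy_sparse_approx(A, b, eps):
    """Greedy heuristic for: min ||x||_0 subject to ||Ax - b||_2 <= eps.

    At each step, pick the column of A most correlated with the current
    residual, then least-squares refit on the chosen support
    (an orthogonal-matching-pursuit-style sketch of the heuristic).
    """
    m, n = A.shape
    support = []                      # indices of non-zero entries chosen so far
    x = np.zeros(n)
    residual = b.astype(float)
    col_norms = np.linalg.norm(A, axis=0)
    while np.linalg.norm(residual) > eps and len(support) < n:
        # correlation of each (normalized) column with the residual
        scores = np.abs(A.T @ residual) / np.where(col_norms > 0, col_norms, 1.0)
        scores[support] = -np.inf     # never re-select a chosen column
        support.append(int(np.argmax(scores)))
        # least-squares refit restricted to the current support
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = b - A @ x
    return x
```

On well-conditioned instances the loop typically stops with far fewer than $n$ columns; the paper's contribution is bounding how many columns this greedy strategy needs relative to the optimum.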

## References


For the problem of finding the maximum clique in a graph, no algorithm has been found for which the ratio does not grow at least as fast as $O(n^\epsilon)$, where $n$ is the problem size and $\epsilon > 0$ depends on the algorithm.
• Mathematics
• 1976
This paper is concerned with least squares problems when the least squares matrix A is near a matrix that is not of full rank. A definition of numerical rank is given. It is shown that under certain
• Computer Science
• 1978
It is proved here that the number of rules in any irredundant Horn knowledge base involving $n$ propositional variables is at most $n - 1$ times the minimum possible number of rules.
• Physics
• 1985
This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry.
It is shown that the existence of an Occam approximation is sufficient to guarantee the probably approximate learnability of classes of functions on the reals even in the presence of arbitrarily large but random additive noise.
This chapter discusses Coding for Discrete Sources, Techniques for Coding and Decoding, and Source Coding with a Fidelity Criterion.
Among other things, we prove that multiquadric surface interpolation is always solvable, thereby settling a conjecture of R. Franke.

### Theory and applications of the multi-quadric biharmonic method

• Comput. Math. Appl
• 1990

### The null space problem I. Complexity

• SIAM J. Alg. Disc. Meth
• 1986
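The 1976 least-squares reference above introduces a definition of numerical rank for matrices that are near a rank-deficient matrix. In practice this is commonly operationalized by counting singular values above a tolerance; a minimal sketch under that convention follows (the function name and default tolerance are my choices, not necessarily the paper's exact definition):

```python
import numpy as np

def numerical_rank(A, tol=None):
    """Count the singular values of A that exceed a tolerance.

    With tol=None, use a common default: max(m, n) * machine-eps * sigma_max.
    This is one standard convention for 'numerical rank', not necessarily
    the exact definition used in the 1976 paper.
    """
    A = np.asarray(A, dtype=float)
    s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
    if tol is None:
        tol = max(A.shape) * np.finfo(float).eps * (s[0] if s.size else 0.0)
    return int(np.sum(s > tol))
```

Unlike the exact rank, this count is stable under small perturbations of $A$, which is precisely the property that matters for nearly rank-deficient least-squares problems.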