Sparse Approximate Solutions to Linear Systems

@article{Natarajan1995SparseAS,
  title={Sparse Approximate Solutions to Linear Systems},
  author={Balas K. Natarajan},
  journal={SIAM J. Comput.},
  year={1995},
  volume={24},
  pages={227-234}
}
  • B. Natarajan
  • Published 1 April 1995
  • Computer Science
  • SIAM J. Comput.
The following problem is considered: given a matrix $A \in {\bf R}^{m \times n}$ ($m$ rows and $n$ columns), a vector $b \in {\bf R}^m$, and $\epsilon > 0$, compute a vector $x$ satisfying $\| Ax - b \|_2 \leq \epsilon$, if such exists, with the fewest non-zero entries over all such vectors. It is shown that the problem is NP-hard, but that the well-known greedy heuristic is good in that it computes a solution with at most $\left\lceil 18 \mbox{ Opt} ({\bf…
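
For intuition, here is a minimal numpy sketch of a greedy heuristic in the spirit of the one the paper analyzes (pick the column best aligned with the current residual, then re-fit by least squares on the selected columns); the function name and details are illustrative, not the paper's exact procedure:

```python
import numpy as np

def greedy_sparse_solve(A, b, eps):
    """Greedy heuristic for: minimize ||x||_0 subject to ||Ax - b||_2 <= eps.

    At each step, select the column of A most correlated with the current
    residual, then re-fit by least squares on the selected columns
    (an OMP-style reading of the greedy scheme; details illustrative).
    """
    m, n = A.shape
    An = A / np.linalg.norm(A, axis=0)   # column-normalized A, as in the paper's analysis
    support = []
    x = np.zeros(n)
    residual = b.astype(float).copy()
    while np.linalg.norm(residual) > eps and len(support) < n:
        j = int(np.argmax(np.abs(An.T @ residual)))  # best-aligned column
        if j in support:
            break                                    # no further progress possible
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
        x[:] = 0.0
        x[support] = coef
    return x
```

The paper's guarantee bounds how far the support size of such a greedy solution can exceed the optimum.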

A Perturbation Inequality for the Schatten-$p$ Quasi-Norm and Its Applications to Low-Rank Matrix Recovery

A perturbation inequality for the so-called Schatten $p$-quasi-norm is obtained, which confirms the validity of a number of previously conjectured conditions for the recovery of low-rank matrices via the popular Schatten $p$-norm heuristic.
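
For reference, the Schatten-$p$ quasi-norm of a matrix is the $\ell^p$ quasi-norm of its singular values, $\|X\|_{S_p} = (\sum_i \sigma_i(X)^p)^{1/p}$ with $0 < p \le 1$; a small numpy sketch (function name illustrative):

```python
import numpy as np

def schatten_p(X, p):
    """Schatten-p quasi-norm: (sum_i sigma_i(X)^p)^(1/p), for 0 < p <= 1.

    For p < 1 the triangle inequality fails (only a quasi-norm), which is
    why perturbation inequalities for it require a separate proof.
    """
    sigma = np.linalg.svd(X, compute_uv=False)  # singular values of X
    return float(np.sum(sigma ** p) ** (1.0 / p))
```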

Sparse representation of vectors in lattices and semigroups

The authors' bounds can be seen as rank-like functions that naturally generalize the rank of a matrix over ${\bf R}$ to other subdomains such as ${\bf Z}$; these new functions are all NP-hard to compute in general, but polynomial-time computable for a fixed number of variables.

Recovery of sparsest signals via $\ell^q$-minimization

  • Qiyu Sun
  • Mathematics, Computer Science
  • 2010
It is proved that every $s$-sparse vector in ${\bf R}^n$ can be exactly recovered from the measurement vector $z$ via some $\ell^q$-minimization with $0 < q \le 1$.
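
In practice, $\ell^q$-minimization with $q < 1$ is non-convex; a standard heuristic for attempting it is iteratively reweighted least squares (IRLS), which is not part of the cited paper itself. A sketch, with all names and parameters illustrative:

```python
import numpy as np

def irls_lq(A, z, q=0.5, iters=50, damping=1e-8):
    """Heuristic lq-minimization (0 < q <= 1): minimize ||x||_q^q s.t. Ax = z
    via iteratively reweighted least squares (a standard approach,
    not the cited paper's contribution).
    """
    x = np.linalg.lstsq(A, z, rcond=None)[0]     # minimum-l2-norm start
    for _ in range(iters):
        w = (x**2 + damping) ** (1 - q / 2)      # larger weight = freer coordinate
        Aw = A * w                               # A @ diag(w)
        x = w * (A.T @ np.linalg.solve(Aw @ A.T, z))
    return x
```

Decreasing the damping term across iterations, as is common in IRLS variants, sharpens the approximation to the true $\ell^q$ objective.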

Sparse Convex Optimization via Adaptively Regularized Hard Thresholding

A new Adaptively Regularized Hard Thresholding (ARHT) algorithm makes significant progress on this problem by bringing the bound down to $\gamma=O(\kappa)$, which has been shown to be tight for a general class of algorithms including LASSO, OMP, and IHT.
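
For context, plain Iterative Hard Thresholding (IHT), one of the baseline algorithms covered by the tightness result, can be sketched in a few lines of numpy; this is the classical method, not the paper's ARHT algorithm:

```python
import numpy as np

def iht(A, b, s, iters=200, step=None):
    """Iterative Hard Thresholding: approximately minimize ||Ax - b||_2
    subject to ||x||_0 <= s. Classical baseline, not the paper's ARHT.
    """
    n = A.shape[1]
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step: 1/||A||_2^2
    x = np.zeros(n)
    for _ in range(iters):
        x = x + step * (A.T @ (b - A @ x))       # gradient step on the LS loss
        small = np.argsort(np.abs(x))[: n - s]   # all but the s largest entries
        x[small] = 0.0                           # hard-threshold to sparsity s
    return x
```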

Recovery of Sparse Representations by Polytope Faces Pursuit

The proposed algorithm, which is based on the geometry of the polar polytope, is called Polytope Faces Pursuit and produces good results on examples that are known to be hard for MP, and it is faster than the interior point method for BP on the experiments presented.

Sparse Regression via Range Counting

This work describes an $O(n^{k-1} \log^{d-k+2} n)$-time randomized $(1+\varepsilon)$-approximation algorithm for the sparse regression problem, and provides a simple $O_\delta(n^{k-1+\delta})$-time deterministic exact algorithm, for any $\delta > 0$.

Complexity of unconstrained $L_2$-$L_p$ minimization

Theoretical results show that the minimizers of the $L_2$-$L_p$ minimization problem have various attractive features due to the concavity and non-Lipschitzian property of the regularization function.

On the Power of Preconditioning in Sparse Linear Regression

The preconditioned Lasso can solve a large class of sparse linear regression problems nearly optimally: it succeeds whenever the dependency structure of the covariates, in the sense of the Markov property, has low treewidth, even if $\Sigma$ is highly ill-conditioned.

Deterministic Sparse Column Based Matrix Reconstruction via Greedy Approximation of SVD

This is the first deterministic algorithm with performance guarantees on the number of columns and a $(1+\epsilon)$ approximation ratio in Frobenius norm; it is obtained by combining the well-known sparse approximation problem from information theory with an existence result on the possibility of sparse approximation.
...

References

Approximation algorithms for combinatorial problems

For the problem of finding the maximum clique in a graph, no algorithm has been found for which the ratio does not grow at least as fast as $O(n^\epsilon)$, where $n$ is the problem size and $\epsilon > 0$ depends on the algorithm.

Rank degeneracy and least squares problems

This paper is concerned with least squares problems when the least squares matrix $A$ is near a matrix that is not of full rank. A definition of numerical rank is given. It is shown that under certain…

Computers and Intractability: A Guide to the Theory of NP-Completeness

It is proved here that the number of rules in any irredundant Horn knowledge base involving $n$ propositional variables is at most $n-1$ times the minimum possible number of rules.

Sparse approximate multiquadric interpolation

Computational geometry: an introduction

This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry.

Occam's razor for functions

It is shown that the existence of an Occam approximation is sufficient to guarantee the probably approximate learnability of classes of functions on the reals even in the presence of arbitrarily large but random additive noise.

Information Theory and Reliable Communication

This chapter discusses Coding for Discrete Sources, Techniques for Coding and Decoding, and Source Coding with a Fidelity Criterion.

Interpolation of scattered data: Distance matrices and conditionally positive definite functions

Among other things, we prove that multiquadric surface interpolation is always solvable, thereby settling a conjecture of R. Franke.

Theory and applications of the multi-quadric biharmonic method

  • Comput. Math. Appl.
  • 1990

The null space problem I. Complexity

  • SIAM J. Alg. Disc. Meth.
  • 1986