
Average-case hardness of RIP certification

@article{Wang2016AveragecaseHO,
  title={Average-case hardness of RIP certification},
  author={Tengyao Wang and Quentin Berthet and Yaniv Plan},
  journal={ArXiv},
  year={2016},
  volume={abs/1605.09646}
}
The restricted isometry property (RIP) for design matrices gives guarantees for optimal recovery in sparse linear models. It is of high interest in compressed sensing and statistical learning. This property is particularly important for computationally efficient recovery methods. As a consequence, even though it is in general NP-hard to check that RIP holds, there have been substantial efforts to find tractable proxies for it. These would allow the construction of RIP matrices and the… 
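As a reminder of the object at the center of this bibliography (the standard definition, not quoted from the paper): a matrix X ∈ ℝ^{n×p} satisfies the RIP of order s with constant θ ∈ (0, 1) if

\[
(1-\theta)\,\|u\|_2^2 \;\le\; \|Xu\|_2^2 \;\le\; (1+\theta)\,\|u\|_2^2
\qquad \text{for all } s\text{-sparse } u \in \mathbb{R}^p,
\]

i.e. X acts as a near-isometry on every s-sparse vector; the certification problem is to decide, given X, whether this holds.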

Citations

RIP constants for deterministic compressed sensing matrices-beyond Gershgorin

TLDR
Two novel approaches are proposed for improving RIP constant estimates based on the Gershgorin circle theorem for a specific deterministic construction built on Paley tight frames, obtaining an improvement over the Gershgorin bound by a multiplicative constant.
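For context, the baseline these approaches improve on is the classical coherence bound (a standard argument, not specific to this paper): if X has unit-norm columns with coherence μ = max_{i≠j} |⟨x_i, x_j⟩|, then every s×s Gram submatrix X_S^⊤X_S has unit diagonal and off-diagonal row sums at most (s−1)μ, so the Gershgorin circle theorem confines its eigenvalues to [1 − (s−1)μ, 1 + (s−1)μ], giving

\[
\delta_s \le (s-1)\,\mu .
\]

The cited paper sharpens estimates of this type by a multiplicative constant for Paley tight frames.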

Average-Case Lower Bounds for Learning Sparse Mixtures, Robust Estimation and Semirandom Adversaries

This paper develops several average-case reduction techniques to show new hardness results for three central high-dimensional statistics problems, implying statistical-computational gaps for these problems.

Approximately Certifying the Restricted Isometry Property is Hard

  • J. Weed
  • Computer Science
    IEEE Transactions on Information Theory
  • 2018
TLDR
This work shows that it is NP-hard to approximate the range of parameters for which a matrix possesses the RIP with accuracy better than some constant, and is the first work to prove such a claim without any additional assumptions.

Distributional Hardness Against Preconditioned Lasso via Erasure-Robust Designs

TLDR
This work proves a stronger lower bound, resolving whether the broad class of preconditioned Lasso programs can succeed on polylogarithmically sparse signals with a sublinear number of samples (they provably cannot), and shows that standard sparse random designs are with high probability robust to adversarial measurement erasures.

Optimal Average-Case Reductions to Sparse PCA: From Weak Assumptions to Strong Hardness

TLDR
A reduction from planted clique (PC) is given that yields the first full characterization of the computational barrier in the spiked covariance model, providing tight lower bounds at all sparsities for sparse PCA; this is the first instance of a suboptimal hardness assumption implying optimal lower bounds for another problem in unsupervised learning.

Reducibility and Computational Lower Bounds for Problems with Planted Sparse Structure

TLDR
This work introduces several new techniques to give a web of average-case reductions showing strong computational lower bounds based on the planted clique conjecture using natural problems as intermediates, including tight lower bounds for Planted Independent Set, Planted Dense Subgraph, Sparse Spiked Wigner, and Sparse PCA.

On the well-spread property and its relation to linear regression

TLDR
It is shown that it is possible to efficiently certify whether a given n-by-d Gaussian matrix is well-spread if the number of observations is quadratic in the ambient dimension, and the average-case time complexity of certifying well-spreadness of random matrices is investigated.

Reducibility and Statistical-Computational Gaps from Secret Leakage

TLDR
This work gives the first evidence that an expanded set of hardness assumptions, such as for secret leakage planted clique, may be a key first step towards a more complete theory of reductions among statistical problems.

On the Equivalence of Sparse Statistical Problems

TLDR
This paper shows how to efficiently transform a black-box solver for SLR into an algorithm for SPCA that achieves state-of-the-art performance, matching guarantees for testing and for support recovery under the single spiked covariance model as obtained by the current best polynomial-time algorithms.

Spectral methods and computational trade-offs in high-dimensional statistical inference

TLDR
It is shown through reduction from a well-known hard problem in computational complexity theory that the difference in consistency regimes is unavoidable for any randomised polynomial-time estimator, revealing subtle statistical and computational trade-offs in this problem.

References

Showing 1–10 of 51 references

The Computational Complexity of the Restricted Isometry Property, the Nullspace Property, and Related Concepts in Compressed Sensing

TLDR
A previously conjectured hardness is confirmed by showing that, for a given matrix A and positive integer k, computing the best constants for which the RIP or NSP hold is, in general, NP-hard.

Hidden Cliques and the Certification of the Restricted Isometry Property

TLDR
It is shown in this paper that restricted isometry parameters cannot be approximated in polynomial time within any constant factor under the assumption that the hidden clique problem is hard.

On verifiable sufficient conditions for sparse signal recovery via ℓ1 minimization

TLDR
It is demonstrated that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse ℓ1-recovery and to efficiently computable upper bounds on those s for which a given sensing matrix is s-good.

Computing performance guarantees for compressed sensing

  • Kiryung Lee, Y. Bresler
  • Computer Science
    2008 IEEE International Conference on Acoustics, Speech and Signal Processing
  • 2008
TLDR
This work relaxes the problem into a convex program via ℓ1 and semidefinite relaxations, and provides tools for the selection of good CS matrices with verified and quantitatively favorable performance.

A Simple Proof of the Restricted Isometry Property for Random Matrices

Abstract We give a simple technique for verifying the Restricted Isometry Property (as introduced by Candès and Tao) for random matrices that underlies Compressed Sensing. Our approach has two main…
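Because certification is hard while random matrices satisfy RIP with high probability, a common sanity check is to lower-bound the restricted isometry constant empirically by sampling random sparse directions; the sampled value can only under-shoot the true constant, which is exactly the gap the certification problem asks to close. A minimal sketch (my own illustration, assuming the normalized Gaussian design of this reference, not code from any of these papers):

    import numpy as np

    def empirical_rip_lower_bound(X, s, trials=2000, seed=0):
        """Lower-bound the order-s RIP constant delta_s of X by sampling.

        Draws random s-sparse unit vectors u and records the largest
        observed deviation of ||X u||_2^2 from 1. The true delta_s is a
        supremum over all s-sparse u, so this estimate never exceeds it.
        """
        rng = np.random.default_rng(seed)
        p = X.shape[1]
        worst = 0.0
        for _ in range(trials):
            support = rng.choice(p, size=s, replace=False)
            u = np.zeros(p)
            u[support] = rng.standard_normal(s)
            u /= np.linalg.norm(u)
            worst = max(worst, abs(np.linalg.norm(X @ u) ** 2 - 1.0))
        return worst

    # Gaussian design with N(0, 1/n) entries, as in the "simple proof"
    # setting: RIP holds with high probability once n ~ s * log(p / s).
    n, p, s = 200, 1000, 5
    X = np.random.default_rng(1).standard_normal((n, p)) / np.sqrt(n)
    print(empirical_rip_lower_bound(X, s))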

Lower bounds on the performance of polynomial-time algorithms for sparse linear regression

TLDR
This work shows that when the design matrix is ill-conditioned, the minimax prediction loss achievable by polynomial-time algorithms can be substantially greater than that of an optimal algorithm.

Testing the nullspace property using semidefinite programming

TLDR
This work uses semidefinite relaxation techniques to test the nullspace property on A and shows on some numerical examples that these relaxation bounds can prove perfect recovery of sparse solutions with relatively high cardinality.
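For reference, the property being tested (standard definition, stated here in common notation): A satisfies the nullspace property of order s if

\[
\|h_S\|_1 < \|h_{S^c}\|_1
\quad \text{for every } h \in \ker(A)\setminus\{0\} \text{ and every index set } S \text{ with } |S| \le s,
\]

which is equivalent to exact recovery of all s-sparse signals by ℓ1 minimization; like RIP, it is hard to check exactly, which is why the semidefinite relaxation above yields only sufficient certificates.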

Certifying the Restricted Isometry Property is Hard

TLDR
It is demonstrated that testing whether a matrix satisfies RIP is NP-hard, which means it is impossible to efficiently test for RIP provided P ≠ NP.

Statistical Algorithms and a Lower Bound for Detecting Planted Cliques

TLDR
The main application is a nearly optimal lower bound on the complexity of any statistical query algorithm for detecting planted bipartite clique distributions when the planted clique has size O(n^{1/2−δ}) for any constant δ > 0.
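The underlying planted clique conjecture, on which several reductions in this list rest, can be stated as follows (standard formulation, paraphrased): for any constant δ > 0, no polynomial-time algorithm can reliably distinguish an Erdős–Rényi graph G(n, 1/2) from one with a planted clique of size k = O(n^{1/2−δ}), even though cliques of size k ≥ (2 + ε) log₂ n are already detectable information-theoretically.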

Computational Barriers in Minimax Submatrix Detection

TLDR
The minimax detection of a small submatrix of elevated mean in a large matrix contaminated by additive Gaussian noise is studied and it is shown that the hardness of attaining the minimax estimation rate can crucially depend on the loss function.
...