# Lower Bounds for Compressed Sensing with Generative Models

@article{Kamath2019LowerBF, title={Lower Bounds for Compressed Sensing with Generative Models}, author={Akshay Kamath and Sushrut Karmalkar and Eric Price}, journal={ArXiv}, year={2019}, volume={abs/1912.02938} }

The goal of compressed sensing is to learn a structured signal $x$ from a limited number of noisy linear measurements $y \approx Ax$. In traditional compressed sensing, "structure" is represented by sparsity in some known basis. Inspired by the success of deep learning in modeling images, recent work starting with [BJPD17] has instead considered structure to come from a generative model $G: \mathbb{R}^k \to \mathbb{R}^n$. We present two results establishing the difficulty of this latter…
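The recovery approach this line of work analyzes can be sketched in a few lines: minimize $\|AG(z)-y\|_2^2$ over the latent code $z$ by gradient descent, as in Bora et al. The toy generator below is *linear* — a deliberate simplification so the sketch is convex and easy to check; the papers discussed here use deep (e.g. ReLU) networks. All dimensions and variable names are illustrative.

```python
import numpy as np

# Toy setting: recover x = G(z*) from m << n noiseless linear measurements
# y = A x, where structure comes from a generative model G: R^k -> R^n.
rng = np.random.default_rng(0)
k, n, m = 5, 100, 40            # latent dim, signal dim, number of measurements
W = rng.normal(size=(n, k))     # stand-in "generator": G(z) = W z

A = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian measurement matrix
z_star = rng.normal(size=k)
y = A @ (W @ z_star)                        # observed measurements

# Recover by gradient descent on f(z) = ||A G(z) - y||^2, the strategy
# analyzed (for general differentiable G) by Bora et al.
z = np.zeros(k)
for _ in range(500):
    grad = W.T @ (A.T @ (A @ (W @ z) - y))
    z -= 0.005 * grad

rel_err = np.linalg.norm(W @ z - W @ z_star) / np.linalg.norm(W @ z_star)
print(rel_err)   # near zero: the signal is recovered from m < n measurements
```

With a deep nonconvex $G$ the same objective is optimized heuristically; the lower bounds in this paper concern how many measurements any such scheme needs.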

## 16 Citations

### Robust compressed sensing using generative models

- Computer Science, NeurIPS
- 2020

This paper proposes an algorithm inspired by the Median-of-Means (MOM) that guarantees recovery for heavy-tailed data, even in the presence of outliers, and results show the novel MOM-based algorithm enjoys the same sample complexity guarantees as ERM under sub-Gaussian assumptions.
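The Median-of-Means idea behind this algorithm is simple to state: split the samples into blocks, average each block, and take the median of the block means, so a few gross outliers or heavy tails corrupt only a minority of blocks. A generic sketch of the estimator (not the paper's full recovery algorithm; names and parameters are illustrative):

```python
import numpy as np

def median_of_means(samples, num_blocks):
    """Median-of-Means mean estimate: split samples into blocks, average
    each block, and return the median of the block means. Robust to heavy
    tails and to a small number of corrupted samples."""
    samples = np.asarray(samples, dtype=float)
    blocks = np.array_split(samples, num_blocks)
    block_means = np.array([b.mean() for b in blocks])
    return np.median(block_means)

rng = np.random.default_rng(1)
data = rng.standard_t(df=2, size=10_000)   # heavy-tailed samples, true mean 0
data[:20] = 1e6                             # a handful of gross outliers

print(data.mean())                          # the plain mean is destroyed
print(median_of_means(data, num_blocks=100))  # MoM stays near 0
```

The outliers land in at most a few of the 100 blocks, so the median of the block means is unaffected, whereas the empirical mean is pulled to roughly 2000.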

### Robust compressed sensing of generative models

- Computer Science, ArXiv
- 2020

This paper proposes an algorithm inspired by the Median-of-Means (MOM) that guarantees recovery for heavy-tailed data, even in the presence of outliers, and shows the predicted robustness.

### Sample Complexity Lower Bounds for Compressive Sensing with Generative Models

- Computer Science, 2020 International Conference on Signal Processing and Communications (SPCOM)
- 2020

This paper establishes corresponding algorithm-independent lower bounds on the sample complexity using tools from minimax statistical analysis, and establishes that the scaling laws derived in (Bora et al., 2017) are optimal or near-optimal in the absence of further assumptions.

### Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors

- Computer Science, Mathematics, ICML
- 2020

It is demonstrated that the Binary $\epsilon$-Stable Embedding property, which characterizes the robustness of the reconstruction to measurement errors and noise, also holds for 1-bit compressive sensing with Lipschitz continuous generative models with sufficiently many Gaussian measurements.
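In the 1-bit setting only the signs $y = \mathrm{sign}(Ax)$ are observed, so magnitude information is lost and one can hope to recover only the direction of $x$. A minimal illustration, using the classical linear baseline estimator $\hat{x} \propto A^\top y$ rather than the cited paper's generative-prior method (all names and sizes are illustrative):

```python
import numpy as np

# 1-bit compressive sensing toy: observe only y = sign(A x).
rng = np.random.default_rng(2)
n, m = 50, 20_000
x = rng.normal(size=n)
x /= np.linalg.norm(x)          # ground truth on the unit sphere

A = rng.normal(size=(m, n))
y = np.sign(A @ x)              # sign-only (1-bit) measurements

x_hat = A.T @ y / m             # simple linear estimate of the direction
x_hat /= np.linalg.norm(x_hat)
print(x @ x_hat)                # correlation with the truth, close to 1
```

With Gaussian $A$, $\mathbb{E}[a\,\mathrm{sign}(a^\top x)] \propto x$, which is why the averaged sign measurements align with the true direction as $m$ grows.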

### Robust One-Bit Recovery via ReLU Generative Networks: Near-Optimal Statistical Rate and Global Landscape Analysis

- Computer Science, Mathematics, ICML
- 2020

It is proved that the ERM estimator in this new framework achieves a statistical rate of $m=\widetilde{\mathcal{O}}(kn \log d /\varepsilon^2)$, recovering any $G(x_0)$ uniformly up to an error $\varepsilon$ (up to logarithmic factors) when the network is shallow.

### Information-Theoretic Lower Bounds for Compressive Sensing With Generative Models

- Computer Science, IEEE Journal on Selected Areas in Information Theory
- 2020

Using tools from minimax statistical analysis, it is established that the scaling laws for the sample complexity derived in (Bora et al., 2017) are optimal or near-optimal in the absence of further assumptions.

### The Generalized Lasso with Nonlinear Observations and Generative Priors

- Computer Science, Mathematics, NeurIPS
- 2020

This paper provides a non-uniform recovery guarantee, and shows that this result can be extended to the uniform recovery guarantee under the assumption of a so-called local embedding property, which is satisfied by the 1-bit and censored Tobit models.

### Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors

- Computer Science, NeurIPS
- 2021

This paper provides recovery guarantees with near-optimal sample complexity for phase retrieval with generative priors, and proposes a practical spectral initialization method motivated by recent advances in deep generative models.

### Optimal Sample Complexities for Compressed Sensing with Approximate Generative Priors

- Computer Science
- 2021

This work implements the conditional resampling estimator for deep generative priors using Langevin dynamics, and empirically finds that it produces accurate estimates with more diversity than MAP.

### Instance-Optimal Compressed Sensing via Posterior Sampling

- Computer Science, ICML
- 2021

The posterior sampling estimator for deep generative priors is implemented using Langevin dynamics, and it is empirically found that it produces accurate estimates with more diversity than MAP.

## References


### Compressed Sensing using Generative Models

- Computer Science, ICML
- 2017

This work shows how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all, and proves that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an l2/l2 recovery guarantee.

### Information-Theoretic Lower Bounds for Compressive Sensing With Generative Models

- Computer Science, IEEE Journal on Selected Areas in Information Theory
- 2020

Using tools from minimax statistical analysis, it is established that the scaling laws for the sample complexity derived in (Bora et al., 2017) are optimal or near-optimal in the absence of further assumptions.

### Constructing Small-Bias Sets from Algebraic-Geometric Codes

- Computer Science, Mathematics, 2009 50th Annual IEEE Symposium on Foundations of Computer Science
- 2009

An explicit construction of an $\epsilon$-biased set over $k$ bits whose size nearly matches the lower bound is given, yielding binary error-correcting codes that beat the Gilbert–Varshamov bound.

### Auto-Encoding Variational Bayes

- Computer Science, ICLR
- 2014

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.

### Generative Adversarial Nets

- Computer Science, NIPS
- 2014

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…

### Lower bounds for sparse recovery

- Computer Science, SODA '10
- 2010

The bound holds even for the more general version of the problem, where the measurement matrix A is a random variable and the recovery algorithm is required to work for any fixed x with constant probability (over the choice of A); the bound is tight.

### Stable signal recovery from incomplete and inaccurate measurements

- Computer Science, Mathematics
- 2005

It is shown that it is possible to recover x0 accurately based on the data y from incomplete and contaminated observations.

### On data structures and asymmetric communication complexity

- Computer Science, STOC '95
- 1995

This paper considers two-party communication complexity in the "asymmetric case", where the input sizes of the two players differ significantly; it derives two generally applicable methods of proving lower bounds and obtains several applications.