Corpus ID: 9272289

Compressed Sensing using Generative Models

@inproceedings{Bora2017CompressedSU,
  title={Compressed Sensing using Generative Models},
  author={Ashish Bora and Ajil Jalal and Eric Price and Alexandros G. Dimakis},
  booktitle={ICML},
  year={2017}
}
The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model G… 
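The recovery procedure behind this approach minimizes the measurement error ||A G(z) - y||^2 over the latent variable z by gradient descent. Below is a minimal PyTorch sketch of that idea; the stand-in generator, the dimensions, and the optimizer settings are illustrative assumptions rather than the paper's setup (the paper uses trained VAE/GAN generators and several random restarts).

import torch

torch.manual_seed(0)
k, n, m = 20, 100, 30                     # latent dim, signal dim, measurements

# Stand-in generator: an untrained two-layer ReLU net. In practice G would
# be a pretrained VAE decoder or GAN generator.
G = torch.nn.Sequential(
    torch.nn.Linear(k, 64), torch.nn.ReLU(), torch.nn.Linear(64, n)
)

A = torch.randn(m, n) / m ** 0.5          # random Gaussian measurement matrix
x_star = G(torch.randn(k)).detach()       # ground truth lying in range(G)
y = A @ x_star + 0.01 * torch.randn(m)    # noisy linear measurements

z = torch.randn(k, requires_grad=True)    # latent estimate to optimize
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.sum((A @ G(z) - y) ** 2) # measurement misfit
    loss.backward()
    opt.step()

x_hat = G(z).detach()
print("relative error:", (torch.norm(x_hat - x_star) / torch.norm(x_star)).item())

Since the objective is non-convex in z, a practical run would repeat the descent from several random initializations and keep the estimate with the best measurement fit.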
Citations

Compressed sensing and generative models
  • Eric Price
  • Computer Science
    Optical Engineering + Applications
  • 2019
TLDR
This paper describes how to incorporate the measurement process into generative adversarial network (GAN) training: even if the noisy data does not uniquely identify the non-noisy signal, the distribution of noisy data may still uniquely identify the distribution of non-noisy signals.
On the Power of Compressed Sensing with Generative Models
TLDR
It is shown that generative models generalize sparsity as a representation of structure by constructing a ReLU-based neural network with 2 hidden layers and O(n) activations per layer whose range is precisely the set of all k-sparse vectors.
Robust compressed sensing using generative models
TLDR
This paper proposes an algorithm inspired by the Median-of-Means (MOM) that guarantees recovery for heavy-tailed data, even in the presence of outliers, and results show the novel MOM-based algorithm enjoys the same sample complexity guarantees as ERM under sub-Gaussian assumptions.
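A schematic sketch of the median-of-means idea is given below, assuming the per-measurement squared residuals are split into equal-size batches; the function name and batch count are illustrative, and a faithful implementation would follow the paper's MOM construction in detail.

import torch

def mom_loss(residuals: torch.Tensor, num_batches: int) -> torch.Tensor:
    # Median-of-Means: average the squared residuals within each batch and
    # return the median of the batch means. Heavy-tailed noise or outliers
    # corrupt only a minority of batches, so the median ignores them.
    # Assumes the number of measurements is divisible by num_batches;
    # batches are typically reshuffled between iterations.
    batches = residuals.reshape(num_batches, -1)
    return batches.mean(dim=1).median()

# Illustrative use inside the gradient-descent recovery loop sketched above:
#   res = (A @ G(z) - y) ** 2
#   loss = mom_loss(res, num_batches=10)
#   loss.backward(); opt.step()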
Modeling Sparse Deviations for Compressed Sensing using Generative Models
TLDR
Sparse-Gen is proposed, a framework that allows for sparse deviations from the support set, thereby achieving the best of both worlds: using a domain-specific prior while allowing reconstruction over the full space of signals.
Deep Compressed Sensing
TLDR
Borrowing insights from the CS perspective, a novel way of improving generative adversarial networks (GANs) using gradient information from the discriminator is developed, and it is shown that GANs can be viewed as a special case in this family of models.
Task-Aware Compressed Sensing with Generative Adversarial Networks
TLDR
This paper uses Generative Adversarial Networks (GANs) to impose structure in compressed sensing problems, replacing the usual sparsity constraint, and proposes to train the GANs in a task-aware fashion, specifically for reconstruction tasks.
Sample Complexity Lower Bounds for Compressive Sensing with Generative Models
  • Zhaoqiang Liu, J. Scarlett
  • Computer Science
    2020 International Conference on Signal Processing and Communications (SPCOM)
  • 2020
TLDR
This paper establishes corresponding algorithm-independent lower bounds on the sample complexity using tools from minimax statistical analysis, and shows that the scaling laws derived in (Bora et al., 2017) are optimal or near-optimal in the absence of further assumptions.
Constant-Expansion Suffices for Compressed Sensing with Generative Priors
TLDR
A novel uniform concentration theorem is proved for random functions that may not be Lipschitz but satisfy a relaxed notion called "pseudo-Lipschitzness"; this yields improvements on all known results in the literature on compressed sensing with deep generative priors, including one-bit recovery, phase retrieval, low-rank matrix recovery, and more.
Optimal Sample Complexities for Compressed Sensing with Approximate Generative Priors
TLDR
This work implements the conditional resampling estimator for deep generative priors using Langevin dynamics, and empirically finds that it produces accurate estimates with more diversity than MAP.
…

References

Showing 1-10 of 47 references
Model-Based Compressive Sensing
TLDR
A model-based CS theory is introduced that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees, along with a new class of structured compressible signals and a new sufficient condition for robust structured compressible signal recovery that is the natural counterpart to the restricted isometry property of conventional CS.
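To make the idea concrete, here is a small sketch of model-based iterative hard thresholding, where the usual best-k-term projection is replaced by a projection onto a block-sparsity model; the function names, block model, and step size are illustrative assumptions, not the paper's exact algorithms.

import numpy as np

def project_block_sparse(x, block_size, k_blocks):
    # Model projection: keep the k_blocks blocks with the largest l2 norm
    # and zero out the rest. Assumes len(x) is divisible by block_size.
    blocks = x.reshape(-1, block_size)
    energy = np.sum(blocks ** 2, axis=1)
    keep = np.argsort(energy)[-k_blocks:]
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.ravel()

def model_based_iht(A, y, block_size, k_blocks, steps=200, step=0.5):
    # Iterative hard thresholding with the plain sparsity projection swapped
    # for the structured-model projection above.
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + step * A.T @ (y - A @ x)   # gradient step on the misfit
        x = project_block_sparse(x, block_size, k_blocks)
    return x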
Compressive sensing recovery of spike trains using a structured sparsity model
TLDR
A model-based approach is presented for the design and analysis of robust, efficient CS recovery algorithms that exploit signal models with structured sparsity, and an algorithm is developed that provably recovers any neuronal spike train from M measurements.
Compressed sensing and best k-term approximation
The typical paradigm for obtaining a compressed version of a discrete signal represented by a vector x ∈ ℝ^N is to choose an appropriate basis, compute the coefficients of x in this basis, and then…
Compressed Sensing and Dictionary Learning
TLDR
The problem of dictionary learning, its applications, and existing solutions are introduced, as well as recent results extending the theory to the case of sparsity in tight frames.
Bayesian Compressive Sensing
TLDR
The underlying theory, an associated algorithm, example results, and comparisons to other compressive-sensing inversion algorithms in the literature are presented.
Random Projections of Smooth Manifolds
We propose a new approach for nonadaptive dimensionality reduction of manifold-modeled data, demonstrating that a small number of random linear projections can preserve key information about…
Compressed sensing
  • D. Donoho
  • Mathematics
    IEEE Transactions on Information Theory
  • 2006
TLDR
It is possible to design n = O(N log m) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients can be extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
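As a concrete illustration, the sketch below performs Basis Pursuit by recasting min ||x||_1 subject to Ax = y as a linear program via the standard split x = u - v with u, v >= 0; the dimensions and the SciPy solver are illustrative choices, not part of the original paper.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                       # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                             # noiseless measurements

c = np.ones(2 * n)                         # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                  # equality constraint A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("max recovery error:", np.abs(x_hat - x_true).max())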
Inverting the Generator of a Generative Adversarial Network
TLDR
This paper introduces a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN, and demonstrates how the proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets.
Adversarial Feature Learning
TLDR
Bidirectional Generative Adversarial Networks are proposed as a means of learning the inverse mapping of GANs, and it is demonstrated that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
Precise Recovery of Latent Vectors from Generative Adversarial Networks
TLDR
A simple, gradient-based technique called stochastic clipping is introduced that precisely recovers latent-vector pre-images 100% of the time and appears to recover unique encodings for unseen images.
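A minimal sketch of the stochastic-clipping step, assuming a generator trained with a uniform prior on [-1, 1]^k: latent coordinates that drift outside the box are resampled uniformly rather than clamped to the boundary. The helper name and the surrounding loop are illustrative.

import torch

def stochastic_clip(z: torch.Tensor) -> torch.Tensor:
    # Re-draw out-of-range latent coordinates uniformly in [-1, 1] instead of
    # pinning them to the boundary, which avoids getting stuck at the box edges.
    out_of_range = z.abs() > 1.0
    resampled = torch.empty_like(z).uniform_(-1.0, 1.0)
    return torch.where(out_of_range, resampled, z)

# Illustrative inversion loop (G is a pretrained generator, x a target image):
#   z = torch.empty(k).uniform_(-1, 1).requires_grad_()
#   for _ in range(steps):
#       loss = torch.sum((G(z) - x) ** 2)
#       loss.backward()
#       with torch.no_grad():
#           z -= lr * z.grad
#           z.copy_(stochastic_clip(z))
#       z.grad.zero_()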
…