• Corpus ID: 155100232

Deep Compressed Sensing

  Yan Wu, Mihaela Rosca, Timothy P. Lillicrap
Compressed sensing (CS) provides an elegant framework for recovering sparse signals from compressed measurements. For example, CS can exploit the structure of natural images and recover an image from only a few random measurements. CS is flexible and data efficient, but its application has been restricted by the strong assumption of sparsity and costly reconstruction process. A recent approach that combines CS with neural network generators has removed the constraint of sparsity, but… 
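The recovery approach described above (matching compressed measurements by searching over a generator's latent space rather than relying on sparsity) can be illustrated with a toy linear generator. Everything below — the sizes, the linear stand-in for the generator, and the gradient-descent loop — is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generative prior: a fixed linear "generator"
# G(z) = W z, plus a random Gaussian measurement matrix F.
n, k, m = 64, 4, 12                       # signal dim, latent dim, measurements
W = rng.standard_normal((n, k))
F = rng.standard_normal((m, n)) / np.sqrt(m)

z_true = rng.standard_normal(k)
x_true = W @ z_true                       # ground-truth signal on G's range
y = F @ x_true                            # compressed measurements

# Recover by gradient descent on the measurement error ||F G(z) - y||^2,
# the latent-optimization step used with generative priors.
A = F @ W
lr = 1.0 / np.linalg.norm(A, ord=2) ** 2  # safe step size for this quadratic
z = np.zeros(k)
for _ in range(2000):
    z -= lr * (A.T @ (A @ z - y))         # gradient of 0.5 * ||A z - y||^2

rel = np.linalg.norm(W @ z - x_true) / np.linalg.norm(x_true)
print(rel)
```

With far fewer measurements than signal dimensions (12 vs. 64), the latent search still recovers the signal because the unknown lives in the generator's low-dimensional range.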


EnGe-CSNet: A Trainable Image Compressed Sensing Model Based on Variational Encoder and Generative Networks
This paper builds a trainable deep compressed sensing model, termed EnGe-CSNet, by combining convolutional generative adversarial networks with a variational autoencoder; the model outperforms competitive algorithms at high compression rates.
One-Bit Compressive Sensing: Can We Go Deep and Blind?
This work presents a novel data-driven and model-based methodology that achieves blind recovery, i.e., signal recovery without requiring knowledge of the sensing matrix.
Generative Model Adversarial Training for Deep Compressed Sensing
This work shows how to design a low-to-high-dimensional deep-learning-based generator suited to compressed sensing while remaining robust to universal adversarial perturbations in the latent domain.
Fully Learnable Model for Task-Driven Image Compressed Sensing
A fully learnable model for task-driven image compressed sensing (FLCS), based on deep convolutional generative adversarial networks and a variational autoencoder, is proposed; it significantly improves reconstructed image quality while decreasing running time.
Scalable Deep Compressive Sensing
A general framework named scalable deep compressive sensing (SDCS) is developed for the scalable sampling and reconstruction (SSR) of all existing end-to-end-trained models. Experimental results show that models with SDCS achieve SSR without changing their structure while maintaining good performance.
Deep Wavelet Architecture for Compressive sensing Recovery
This work proposes a deep wavelet-based compressive sensing framework with multi-resolution analysis that improves both reconstruction quality and run time, demonstrating strong results on test functions over previous approaches.
Model-Based Deep Learning for One-Bit Compressive Sensing
This work develops hybrid model-based deep learning architectures based on the deep unfolding methodology that have the ability to adaptively learn the proper quantization thresholds, paving the way for amplitude recovery in one-bit compressive sensing.
Learning Generative Prior with Latent Space Sparsity Constraints
This work derives sample complexity bounds within the SDLSS framework for the linear measurement model and, comparing linear and nonlinear sensing mechanisms on the Fashion-MNIST dataset, shows that the learned nonlinear version is superior to the linear one.
Provable Compressed Sensing with Generative Priors via Langevin Dynamics
This paper introduces the use of stochastic gradient Langevin dynamics (SGLD) for compressed sensing with a generative prior and proves the convergence of SGLD to the true signal.
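The SGLD recovery idea summarized here — gradient steps on the measurement loss with injected, gradually annealed Gaussian noise — can be sketched on a toy linear-generator setup. The sizes, step size, and annealing schedule below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: linear "generator" W, Gaussian measurement matrix F,
# measurements y of a signal on the generator's range.
n, k, m = 32, 3, 10
W = rng.standard_normal((n, k))
F = rng.standard_normal((m, n)) / np.sqrt(m)
z_true = rng.standard_normal(k)
y = F @ W @ z_true

A = F @ W
step = 0.5 / np.linalg.norm(A, ord=2) ** 2   # step below 1/L for the loss
z = np.zeros(k)
for t in range(2000):
    grad = A.T @ (A @ z - y)                 # gradient of 0.5 * ||A z - y||^2
    noise_scale = np.sqrt(2 * step) / (1 + t)  # annealed Langevin noise
    z = z - step * grad + noise_scale * rng.standard_normal(k)

rel = np.linalg.norm(W @ z - W @ z_true) / np.linalg.norm(W @ z_true)
print(rel)
```

Because the injected noise is annealed toward zero, late iterates concentrate near the minimizer of the measurement loss, which is the flavor of convergence result the paper proves for SGLD with a generative prior.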
Non-Iterative Recovery from Nonlinear Observations using Generative Models
This paper aims to estimate the direction of an underlying signal from its nonlinear observations following the semi-parametric single index model (SIM), and shows that the non-iterative method significantly outperforms a state-of-the-art iterative method in terms of both accuracy and efficiency.


Modeling Sparse Deviations for Compressed Sensing using Generative Models
Sparse-Gen, a framework that allows sparse deviations from the support set, is proposed, achieving the best of both worlds: a domain-specific prior combined with reconstruction over the full space of signals.
Compressed Sensing using Generative Models
This work shows how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all, and proves that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an l2/l2 recovery guarantee.
ConvCSNet: A Convolutional Compressive Sensing Framework Based on Deep Learning
Experimental results show that the proposed convolutional CS framework substantially outperforms previous state-of-the-art CS methods in terms of both PSNR and visual quality.
Learning Compressed Sensing
It is shown that the optimal projections are in general not the principal components nor the independent components of the data, but rather a seemingly novel set of projections that capture what is still uncertain about the signal, given the training set.
Deep ADMM-Net for Compressive Sensing MRI
Experiments on MRI image reconstruction under different k-space sampling ratios demonstrate that the proposed ADMM-Net significantly improves on the baseline ADMM algorithm, achieving high reconstruction accuracy at fast computational speed.
Compressed sensing
  • D. Donoho
  • Mathematics
    IEEE Transactions on Information Theory
  • 2006
It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit).
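The Basis Pursuit linear program mentioned here — minimize ||x||_1 subject to Ax = y — can be sketched with SciPy's LP solver via the standard split x = u − v with u, v ≥ 0. The problem sizes and sparsity level below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Sparse signal x_true with s nonzeros, observed through m < n
# random Gaussian measurements y = A x.
n, m, s = 40, 20, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

# Basis Pursuit as an LP: min sum(u) + sum(v)  s.t.  A(u - v) = y, u, v >= 0.
c = np.ones(2 * n)                 # objective equals ||u - v||_1 at optimum
A_eq = np.hstack([A, -A])          # equality constraint A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_hat - x_true))
```

With enough random measurements relative to the sparsity level, the l1 minimizer coincides with the sparse ground truth, which is the recovery phenomenon Donoho's result quantifies.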
Sparse MRI: The application of compressed sensing for rapid MR imaging
Practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference, demonstrating improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast-enhanced angiography.
Improved Techniques for Training GANs
This work focuses on two applications of GANs, semi-supervised learning and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Optimizing the Latent Space of Generative Networks
Generative Latent Optimization (GLO), a framework to train deep convolutional generators using simple reconstruction losses, and enjoys many of the desirable properties of GANs: synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors; all of this without the adversarial optimization scheme.
The GAN Landscape: Losses, Architectures, Regularization, and Normalization
This work reproduces the current state of the art of GANs from a practical perspective, discusses common pitfalls and reproducibility issues, and systematically explores the GAN landscape of losses, architectures, regularization, and normalization.