Learning to invert: Signal recovery via Deep Convolutional Networks

@article{Mousavi2017LearningTI,
  title={Learning to invert: Signal recovery via Deep Convolutional Networks},
  author={Ali Mousavi and Richard Baraniuk},
  journal={2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2017},
  pages={2272-2276}
}
  • Ali Mousavi, Richard Baraniuk
  • Published 14 January 2017
  • Computer Science
  • 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
The promise of compressive sensing (CS) has been offset by two significant challenges. […] When trained on a set of representative images, the network learns both a representation for the signals (addressing challenge one) and an inverse map approximating a greedy or convex recovery algorithm (addressing challenge two). Our experiments indicate that the DeepInverse network closely approximates the solution produced by state-of-the-art CS recovery algorithms yet is hundreds of times faster in run…
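
Below is a minimal sketch, in PyTorch-style Python, of the kind of learned inverse map the abstract describes: the measurements are lifted back to image dimensions with the adjoint of the measurement matrix and then refined by convolutional layers. The proxy step, layer widths, and kernel sizes are assumptions for illustration, not the paper's exact DeepInverse architecture.

```python
# Sketch of a DeepInverse-style recovery network (layer sizes are assumptions):
# lift the CS measurements y back to image size via the adjoint Phi^T y, then
# refine the proxy image with a small stack of convolutional layers.
import torch
import torch.nn as nn

class DeepInverseSketch(nn.Module):
    def __init__(self, phi: torch.Tensor, img_size: int):
        super().__init__()
        self.register_buffer("phi", phi)   # fixed m x n measurement matrix
        self.img_size = img_size           # n must equal img_size * img_size
        self.refine = nn.Sequential(
            nn.Conv2d(1, 64, 11, padding=5), nn.ReLU(),
            nn.Conv2d(64, 32, 11, padding=5), nn.ReLU(),
            nn.Conv2d(32, 1, 11, padding=5),
        )

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        proxy = y @ self.phi                                    # (batch, m) -> (batch, n), i.e. Phi^T y
        proxy = proxy.view(-1, 1, self.img_size, self.img_size)
        return self.refine(proxy)                               # learned inverse map

# Training would minimize, e.g., the MSE between the network output and the
# ground-truth image over a set of representative training images.
```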

Citations

Learned D-AMP: Principled Neural Network based Compressive Image Recovery

The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance; it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time.

Deep Learning Approach Based on Tensor-Train for Sparse Signal Recovery

The proposed TT-SDA network preserves the reconstruction performance of the conventional SDA network and outperforms traditional methods, especially at low measurement rates, while significantly reducing computational complexity and memory footprint, making it a time- and memory-efficient method for the compressive sensing problem.

JR2net: A Joint Non-Linear Representation and Recovery Network for Compressive Spectral Imaging

This work proposes a joint non-linear representation and recovery network (JR2net) that links the representation and recovery tasks into a single optimization problem, showing superiority over state-of-the-art methods.

Learning to Sense and Reconstruct A Class of Signals

Initial results show that the measurement matrix learned through the proposed technique provides higher peak signal-to-noise ratio (PSNR) levels than both randomly selected matrices and measurement matrices designed for an assumed sparsity basis for the dataset.

Learned D-AMP: A Principled CNN-based Compressive Image Recovery Algorithm

Novel neural network architectures that mimic the behavior of the denoising-based approximate message passing (D-AMP) and denoising-based vector approximate message passing algorithms are developed and outperform the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and runtime.

Convolutional Neural Networks for Noniterative Reconstruction of Compressively Sensed Images

This paper proposes a data-driven noniterative algorithm, ReconNet, which is a deep neural network learned end-to-end to map block-wise compressive measurements of the scene to the desired image blocks, and discusses how adding a fully connected layer to the existing ReconNet architecture allows for jointly learning the measurement matrix and the reconstruction algorithm in a single network.
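
As a rough illustration of the joint-learning idea described above, here is a sketch assuming a PyTorch implementation: a bias-free fully connected layer stands in for the block-wise measurement matrix and is trained end-to-end with the reconstruction layers. The block size, measurement count, and layer widths are assumptions, not ReconNet's published settings.

```python
# Sketch of jointly learning the measurement matrix and the reconstruction:
# the first linear layer plays the role of the measurement matrix.
import torch.nn as nn

block, m = 33, 272                            # assumed block size and measurement count
joint_net = nn.Sequential(
    nn.Flatten(),                             # (batch, 1, block, block) -> (batch, block*block)
    nn.Linear(block * block, m, bias=False),  # learned measurement matrix
    nn.Linear(m, block * block), nn.ReLU(),   # lift measurements back to block size
    nn.Unflatten(1, (1, block, block)),
    nn.Conv2d(1, 64, 11, padding=5), nn.ReLU(),
    nn.Conv2d(64, 1, 7, padding=3),           # reconstruction layers (assumed widths)
)
```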

Deep Coupled-Representation Learning for Sparse Linear Inverse Problems With Side Information

The first deep unfolding method with side information (SI) is introduced, where the SI comes from a different modality and is used to learn coupled representations of correlated signals from different modalities, enabling the recovery of multi-modal data at a low computational cost.

AMP-Inspired Deep Networks for Sparse Linear Inverse Problems

This paper proposes two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction.
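
To illustrate the layer-wise Onsager correction described above, here is a minimal PyTorch sketch of one unrolled AMP-style layer with a learned backward matrix and threshold scale; the initialization and the soft-thresholding denoiser are assumptions, not the exact architectures proposed in that paper.

```python
# Sketch of one unrolled AMP-style layer with learned parameters: a trainable
# backward matrix B and threshold scale lam, plus the Onsager-corrected
# residual update that decouples errors across layers.
import torch
import torch.nn as nn

class LearnedAMPLayer(nn.Module):
    def __init__(self, A: torch.Tensor):
        super().__init__()
        self.register_buffer("A", A)                 # fixed m x n measurement matrix
        self.B = nn.Parameter(A.t().clone())         # learned backward matrix, init A^T
        self.lam = nn.Parameter(torch.tensor(1.0))   # learned threshold scale

    def forward(self, x, z, y):
        m = self.A.shape[0]
        r = x + z @ self.B.t()                                        # pseudo-data
        tau = self.lam * z.norm(dim=1, keepdim=True) / m ** 0.5       # per-sample threshold
        x_new = torch.sign(r) * torch.clamp(r.abs() - tau, min=0.0)   # soft thresholding
        b = (x_new != 0).float().sum(dim=1, keepdim=True) / m         # Onsager scale
        z_new = y - x_new @ self.A.t() + b * z                        # corrected residual
        return x_new, z_new
```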

FompNet: Compressive sensing reconstruction with deep learning over wireless fading channels

Experimental results show that FompNet outperforms existing reconstruction approaches in terms of distortion and computational complexity under various channel conditions.

Data Driven Measurement Matrix Learning for Sparse Reconstruction

Results show that the proposed technique provides higher peak signal-to-noise ratio (PSNR) levels, and hence learns better measurement matrices, than matrices that are randomly selected or specifically designed to reduce average coherence for a known sparsity basis.
...

References

Showing 1-10 of 22 references

ReconNet: Non-Iterative Reconstruction of Images from Compressively Sensed Measurements

ReconNet: a novel convolutional neural network architecture that takes CS measurements of an image as input and outputs an intermediate reconstruction, which is then fed into an off-the-shelf denoiser to obtain the final reconstructed image.

From Denoising to Compressed Sensing

An extension of the approximate message passing (AMP) framework is developed, called denoising-based AMP (D-AMP), that can integrate a wide class of denoisers within its iterations; when used with a high-performance denoiser for natural images, D-AMP offers state-of-the-art CS recovery performance while operating tens of times faster than competing methods.

Learning to Sense Sparse Signals: Simultaneous Sensing Matrix and Sparsifying Dictionary Optimization

A framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix is introduced and it is shown that this joint optimization outperforms both the use of random sensing matrices and those matrices that are optimized independently of the learning of the dictionary.

CoSaMP: Iterative signal recovery from incomplete and inaccurate samples

This extended abstract describes a recent algorithm called CoSaMP that accomplishes the data recovery task and was the first known method to offer near-optimal guarantees on resource usage.
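
For reference, a minimal NumPy sketch of the CoSaMP greedy recovery loop (support identification, least squares on the merged support, pruning, residual update); stopping rules and the conditioning assumptions of the original algorithm are omitted.

```python
# Sketch of CoSaMP for y = A x with an s-sparse x.
import numpy as np

def cosamp(y, A, s, n_iters=20):
    n = A.shape[1]
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_iters):
        proxy = A.T @ r                                    # form the signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]         # indices of 2s largest entries
        T = np.union1d(omega, np.flatnonzero(x))           # merge with current support
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]  # least squares on merged support
        keep = np.argsort(np.abs(b))[-s:]                  # prune to the s largest entries
        x = np.zeros(n)
        x[keep] = b[keep]
        r = y - A @ x                                      # update the residual
        if np.linalg.norm(r) < 1e-9:
            break
    return x
```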

A Probabilistic Framework for Deep Learning

It is demonstrated that max-sum inference in the DRMM yields an algorithm that exactly reproduces the operations in deep convolutional neural networks (DCNs), providing a first principles derivation.

Message-passing algorithms for compressed sensing

A simple, costless modification to iterative thresholding, inspired by belief propagation in graphical models, is introduced, making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures.
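
The "costless modification" is the Onsager correction term added to the residual update. A minimal NumPy sketch of the resulting AMP iteration with a soft-thresholding denoiser, using a simple heuristic threshold schedule rather than the paper's tuned choice:

```python
# Sketch of the AMP iteration for y = A x + noise with soft thresholding.
import numpy as np

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(y, A, n_iters=30):
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iters):
        r = x + A.T @ z                              # pseudo-data
        tau = np.linalg.norm(z) / np.sqrt(m)         # threshold ~ residual energy (heuristic)
        x = soft(r, tau)                             # thresholding step
        onsager = z * np.count_nonzero(x) / m        # the "costless" correction term
        z = y - A @ x + onsager                      # corrected residual update
    return x
```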

Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
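
The "simple linear program" here is l1 minimization (basis pursuit). A minimal sketch of its standard LP reformulation using scipy.optimize.linprog, shown for illustration rather than as the solver used in the paper:

```python
# Sketch of basis pursuit: min ||x||_1 subject to A x = y, written as an LP
# by splitting x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(y, A):
    m, n = A.shape
    c = np.ones(2 * n)                        # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])                 # equality constraint A (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v                              # recovered signal x
```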

K-SVD: An Algorithm for Designing of Overcomplete Dictionaries for Sparse Representation

K-SVD is a novel algorithm for adapting dictionaries in order to achieve sparse signal representations: an iterative method that alternates between sparse coding of the examples based on the current dictionary and updating the dictionary atoms to better fit the data.
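
A minimal NumPy sketch of that alternation, using a simple orthogonal matching pursuit for the sparse-coding step and a rank-1 SVD for each atom update; the sparsity level and iteration counts are assumptions.

```python
# Sketch of the K-SVD alternation: sparse-code with the current dictionary,
# then update each atom via a rank-1 SVD of its residual.
import numpy as np

def omp(D, y, s):
    """Greedy s-sparse coding of y against dictionary D (columns are atoms)."""
    idx, r = [], y.copy()
    for _ in range(s):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef = np.linalg.lstsq(D[:, idx], y, rcond=None)[0]
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd(Y, n_atoms, s, n_iters=10):
    """Y: d x N training matrix. Returns dictionary D (d x n_atoms) and codes X."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iters):
        X = np.column_stack([omp(D, Y[:, i], s) for i in range(Y.shape[1])])
        for k in range(n_atoms):                     # update atoms one at a time
            users = np.flatnonzero(X[k])             # signals that use atom k
            if users.size == 0:
                continue
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                        # new atom: leading left singular vector
            X[k, users] = S[0] * Vt[0]               # matching coefficients
    return D, X
```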

Compressive sampling

Some of the key mathematical insights underlying this new sampling theory are provided, and some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science are explained.