Corpus ID: 218571148

Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation

@article{Heckel2020CompressiveSW,
  title={Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation},
  author={Reinhard Heckel and Mahdi Soltanolkotabi},
  journal={arXiv preprint arXiv:2005.03991},
  year={2020}
}
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration. They are capable of solving standard inverse problems such as denoising and compressive sensing with excellent results by simply fitting a neural network model to measurements from a single image or signal without the need for any additional training data. For some applications, this critically requires additional regularization in the form of early stopping the optimization. For…
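The recovery procedure the abstract describes, fitting an un-trained model to the measurements of a single signal by gradient descent, can be sketched in toy form. The snippet below is a minimal illustration, not the paper's method: it swaps the convolutional generator for a fixed random over-parameterized linear decoder `B` (an assumption made for brevity) and runs plain gradient descent on the measurement loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressive-sensing setup: m < n random measurements y = A @ x0
# of a smooth one-dimensional signal x0.
n, m = 64, 32
t = np.linspace(0.0, 1.0, n)
x0 = np.sin(2 * np.pi * t)                # smooth ground-truth signal
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x0                                # noiseless measurements

# "Un-trained" generator: a fixed random, over-parameterized linear decoder
# x = B @ w, a simplified stand-in for the paper's convolutional generator.
# Only the weights w are fit, and only to this one signal's measurements.
k = 128
B = rng.normal(size=(n, k)) / np.sqrt(k)
w = np.zeros(k)

# Plain gradient descent on the measurement loss ||A @ B @ w - y||^2.
lr = 0.05
for _ in range(5000):
    residual = A @ (B @ w) - y
    w -= lr * (B.T @ (A.T @ residual))    # gradient direction of the loss

x_hat = B @ w
print("measurement residual:", np.linalg.norm(A @ x_hat - y))
```

Because the problem is under-determined, gradient descent from `w = 0` drives the measurement residual to zero and lands on a minimum-norm fit; with the linear decoder above that implicit bias is toward small ℓ2 norm, whereas the paper's point is that a convolutional generator biases the fit toward smooth images, which is what makes additional regularization such as early stopping unnecessary for compressive sensing.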
Citations

Compressed Sensing for Photoacoustic Computed Tomography Using an Untrained Neural Network
Robust compressed sensing using generative models
Deep generative demixing: Recovering Lipschitz signals from noisy subgaussian mixtures (A. Berk; Computer Science, Mathematics; ArXiv, 2020)
Robust compressed sensing of generative models
Non-Convex Compressed Sensing with Training Data (G. Welper; Computer Science, Mathematics; ArXiv, 2021)
Interpolating Classifiers Make Few Mistakes

References

Showing 1-10 of 43 references
Regularizing linear inverse problems with convolutional neural networks
Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
Compressed Sensing with Deep Image Prior and Learned Regularization
Algorithmic Guarantees for Inverse Imaging with Untrained Network Priors
Deep Denoising: Rate-Optimal Recovery of Structured Signals with a Deep Prior
Deep Decoder: Concise Image Representations from Untrained Non-convolutional Networks
Deep Image Prior
Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising
Gradient Descent Provably Optimizes Over-parameterized Neural Networks