Corpus ID: 230649394

Generator Surgery for Compressed Sensing

Niklas Smedemark-Margulies, Jung Yeon Park, Max Daniels, Rose Yu, J.-W. van de Meent, Paul Hand
Image recovery from compressive measurements requires a signal prior for the images being reconstructed. Recent work has explored the use of deep generative models with low latent dimension as signal priors for such problems. However, their recovery performance is limited by high representation error. We introduce a method for achieving low representation error using generators as signal priors. Using a pre-trained generator, we remove one or more initial blocks at test time and optimize over… 
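The recovery procedure sketched in the abstract can be illustrated with a toy example. The code below is a deliberately simplified sketch, not the paper's implementation: it uses a tiny linear "generator" in NumPy with made-up dimensions, removes the first block, and then optimizes the intermediate representation `h` directly against the compressive measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pre-trained" generator: z -> block1 -> block2 -> signal.
# All shapes are illustrative, not taken from the paper.
d_z, d_h, d_x = 4, 32, 64                        # latent, intermediate, signal dims
W1 = rng.normal(size=(d_h, d_z))                 # block 1 (removed at test time)
W2 = rng.normal(size=(d_x, d_h)) / np.sqrt(d_h)  # block 2 (kept after surgery)

def tail(h):
    """The generator after surgery: only the remaining block."""
    return W2 @ h

# Compressed sensing: y = A @ x_true with m < d_x random Gaussian measurements.
m = 48
A = rng.normal(size=(m, d_x)) / np.sqrt(m)
x_true = tail(rng.normal(size=d_h))  # a signal the cut generator can represent
y = A @ x_true

# Recovery: minimize ||A tail(h) - y||^2 over the intermediate representation h.
M = A @ W2
h = np.zeros(d_h)
lr = 1.0 / np.linalg.norm(M, 2) ** 2  # step size from the spectral norm
for _ in range(3000):
    h -= lr * (M.T @ (M @ h - y))

rel_err = np.linalg.norm(tail(h) - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.2e}")
```

Because the cut generator here is linear, the objective is a least-squares problem and plain gradient descent provably converges; in the actual method the remaining blocks are nonlinear and the optimization is nonconvex.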

Regularized Training of Intermediate Layers for Generative Models for Inverse Problems
A new regularized GAN training algorithm is introduced, and it is demonstrated that the learned generative model yields lower reconstruction errors across a wide range of undersampling ratios when solving compressed sensing, inpainting, and super-resolution problems.
Optimizing Intermediate Representations of Generative Models for Phase Retrieval
A novel variation of intermediate layer optimization (ILO) is leveraged to extend the range of the generator while still producing images consistent with the training data, and new initialization schemes are introduced that further improve the quality of the reconstruction.
Intermediate Layer Optimization for Inverse Problems using Deep Generative Models
This work proposes Intermediate Layer Optimization (ILO), a novel optimization algorithm for solving inverse problems with deep generative models that outperforms state-of-the-art methods introduced in StyleGAN-2 and PULSE for a wide range of inverse problems.
Score-Guided Intermediate Layer Optimization: Fast Langevin Mixing for Inverse Problems
The framework, SGILO, extends prior work by replacing the sparsity regularization with a generative prior on the intermediate layer, training a score-based model in the latent space of a StyleGAN-2 and using it to solve inverse problems.
Image-Adaptive GAN based Reconstruction
This paper suggests mitigating the limited representation capabilities of generators by making them image-adaptive and enforcing compliance of the restoration with the observations via back-projections, and empirically demonstrates the advantages of the proposed approach for image super-resolution and compressed sensing.
Image Processing Using Multi-Code GAN Prior
A novel approach, called mGANprior, is proposed to incorporate well-trained GANs as an effective prior for a variety of image processing tasks: multiple latent codes generate multiple feature maps at an intermediate layer of the generator, which are composed with adaptive channel importance to recover the input image.
Compressed Sensing using Generative Models
This work shows how to achieve guarantees similar to standard compressed sensing without relying on sparsity at all, and proves that if the generator G is L-Lipschitz with a k-dimensional latent input, then roughly O(k log L) random Gaussian measurements suffice for an l2/l2 recovery guarantee.
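The CSGM baseline summarized above recovers a signal by searching the generator's latent space: minimize ||A G(z) - y||^2 over z. A minimal sketch under strong simplifying assumptions (a toy linear generator so the search is convex; real generators are nonlinear and this problem is nonconvex):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "generator" G(z) = W @ z from a small latent space to signals.
# Dimensions are illustrative only.
k, d_x = 4, 64
W = rng.normal(size=(d_x, k)) / np.sqrt(k)

m = 20                                          # m < d_x measurements
A = rng.normal(size=(m, d_x)) / np.sqrt(m)
z_true = rng.normal(size=k)
y = A @ (W @ z_true)

# CSGM: gradient descent on ||A G(z) - y||^2 over the latent code z.
M = A @ W                                       # m x k effective operator
z = np.zeros(k)
lr = 1.0 / np.linalg.norm(M, 2) ** 2
for _ in range(2000):
    z -= lr * (M.T @ (M @ z - y))

rel_err = np.linalg.norm(W @ z - W @ z_true) / np.linalg.norm(W @ z_true)
print(f"relative error with m={m} of d_x={d_x} measurements: {rel_err:.2e}")
```

With a nonlinear G the same objective is nonconvex, and the representation error discussed in the abstract arises because real images need not lie in the range of G.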
Reducing the Representation Error of GAN Image Priors Using the Deep Decoder
A method is proposed for reducing the representation error of GAN priors by modeling images as the linear combination of a GAN prior and a Deep Decoder, an underparameterized and, importantly, unlearned natural signal model similar to the Deep Image Prior.
Invertible generative models for inverse problems: mitigating representation error and dataset bias
It is demonstrated that invertible neural networks, which have zero representation error by design, can be effective natural signal priors at inverse problems such as denoising, compressive sensing, and inpainting.
Deep Image Prior
It is shown that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting.
Large Scale GAN Training for High Fidelity Natural Image Synthesis
It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
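The "truncation trick" mentioned above can be sketched independently of BigGAN: sample the generator's latent input from a truncated rather than full normal distribution, reducing input variance at the cost of variety. A minimal resampling implementation (the threshold and shapes are arbitrary, not BigGAN's settings):

```python
import numpy as np

def truncated_normal(shape, threshold, rng):
    """Draw standard-normal samples, resampling any entry with |z| > threshold."""
    z = rng.normal(size=shape)
    mask = np.abs(z) > threshold
    while mask.any():
        z[mask] = rng.normal(size=int(mask.sum()))
        mask = np.abs(z) > threshold
    return z

rng = np.random.default_rng(0)
z = truncated_normal((1000, 128), threshold=0.5, rng=rng)
# Truncation shrinks the input variance, which trades sample variety
# for fidelity when z is fed to a generator.
print(f"max |z| = {np.abs(z).max():.3f}, std = {z.std():.3f}")
```

Sampling a batch of latent codes this way and feeding them to any generator reproduces the knob described in the summary: a smaller threshold means less variance and less variety.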
Inverting the Generator of a Generative Adversarial Network
This paper introduces a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN, and demonstrates how the proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets.
Deep Decoder: Concise Image Representations from Untrained Non-convolutional Networks
This paper proposes an untrained simple image model, called the deep decoder: a deep neural network with a simple, convolution-free architecture that can generate natural images from fewer weight parameters than the output dimensionality.
Latent Convolutional Models
The new model provides a strong and universal image prior for a variety of image restoration tasks such as large-hole inpainting, super-resolution, and colorization, and outperforms competing approaches across these tasks.