Corpus ID: 208076625

Solving Inverse Problems by Joint Posterior Maximization with a VAE Prior

Mario González, Andrés Almansa, Mauricio Delbracio, Pablo Musé, Pauline Tan
In this paper we address the problem of solving ill-posed inverse problems in imaging where the prior is a neural generative model. Specifically, we consider the decoupled case where the prior is trained once and can be reused for many different log-concave degradation models without retraining. Whereas previous MAP-based approaches to this problem lead to highly non-convex optimization algorithms, our approach computes the joint (space-latent) MAP, which naturally leads to alternate optimization…
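The alternating structure of a joint (space-latent) MAP can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration: a linear map `W` stands in for the VAE decoder, dimensions and weights are arbitrary, and both subproblems happen to be solvable in closed form; the paper's actual decoder is a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 8, 5, 3                      # signal, measurement, latent dimensions
A = rng.standard_normal((m, n))        # linear degradation operator
W = rng.standard_normal((n, k))        # toy linear "decoder" standing in for the VAE
y = A @ (W @ rng.standard_normal(k)) + 0.01 * rng.standard_normal(m)
sigma2, gamma = 1e-4, 1e-1             # noise variance, decoder-coupling strength

def energy(x, z):
    # joint negative log-posterior: data fit + decoder coupling + Gaussian latent prior
    return (np.sum((A @ x - y) ** 2) / (2 * sigma2)
            + np.sum((x - W @ z) ** 2) / (2 * gamma)
            + 0.5 * np.sum(z ** 2))

x, z = np.zeros(n), np.zeros(k)
energies = [energy(x, z)]
for _ in range(10):
    # x-step: the energy is quadratic in x, so minimize it exactly
    x = np.linalg.solve(A.T @ A / sigma2 + np.eye(n) / gamma,
                        A.T @ y / sigma2 + W @ z / gamma)
    # z-step: ridge regression of x onto the decoder range
    z = np.linalg.solve(W.T @ W / gamma + np.eye(k), W.T @ x / gamma)
    energies.append(energy(x, z))
```

Because each half-step minimizes the joint energy exactly in one block of variables, the energy is monotonically nonincreasing, which is the appeal of the alternating scheme over direct non-convex MAP optimization.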
Regularization via deep generative models: an analysis point of view
This paper proposes a new way of regularizing an inverse problem in imaging by means of a deep generative neural network, where the estimation is performed on the latent vector, the solution being obtained afterwards via the decoder.
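Estimating in latent space and decoding afterwards can be sketched with a toy linear decoder (an assumption; the cited work uses a deep network, for which this step would be non-convex): gradient descent on the latent vector `z` against the measurement misfit, followed by a decode.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 10, 8, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)      # degradation operator
W, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal toy decoder D(z) = W z
y = A @ W @ rng.standard_normal(k)                # noiseless measurements of an in-range image

# estimation performed on the latent vector: min_z ||A D(z) - y||^2
J = A @ W                                         # Jacobian of z -> A D(z) (linear toy case)
eta = 1.0 / np.linalg.norm(J, 2) ** 2             # step size from the Lipschitz constant
z = np.zeros(k)
res0 = np.linalg.norm(J @ z - y)
for _ in range(10000):
    z -= eta * J.T @ (J @ z - y)                  # gradient descent in latent space
x_hat = W @ z                                     # the solution is obtained via the decoder
```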
Regularising Inverse Problems with Generative Machine Learning Models
It is shown that the success of solutions restricted to lie exactly in the range of the generator is highly dependent on the ability of the generative model, but that allowing small deviations from the range of the generator produces more consistent results.
Learning local regularization for variational image restoration
A framework is proposed to learn a local regularization model for general image restoration problems, using a fully convolutional neural network that sees the image through a receptive field of small patches and is trained with a Wasserstein generative adversarial network (WGAN) based energy.
Bayesian Imaging With Data-Driven Priors Encoded by Neural Networks: Theory, Methods, and Algorithms
This paper proposes a new methodology for performing Bayesian inference in imaging inverse problems where the prior knowledge is available in the form of training data, and derives a model misspecification test to automatically detect situations where the data-driven prior is unreliable.
Solving Bayesian Inverse Problems via Variational Autoencoders
This work introduces UQ-VAE: a flexible, adaptive, hybrid data/model-informed framework for training neural networks capable of rapid modelling of the posterior distribution representing the unknown parameter of interest, and includes an adjustable hyperparameter that allows selection of the notion of distance between the posterior model and the target distribution.
Unrolled Optimization with Deep Priors
This paper presents unrolled optimization with deep priors, a principled framework for infusing knowledge of the image formation into deep networks that solve inverse problems in imaging, inspired by classical iterative methods.
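The unrolling idea can be sketched as a fixed number of classical iterations, each treated as one network "layer". As an assumption for illustration, a fixed shrinkage stands in for the learned prior module; in the actual framework that module has trainable weights.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, K = 6, 4, 30
A = rng.standard_normal((m, n))                # known image-formation operator
y = A @ rng.standard_normal(n)                 # observed measurements
eta = 1.0 / np.linalg.norm(A, 2) ** 2          # step size <= 1/L for the data term

def prior_module(x):
    # stand-in for a learned proximal/denoising module; here a fixed shrinkage
    return 0.99 * x

x = np.zeros(n)
for _ in range(K):                             # each loop iteration = one unrolled "layer"
    x = prior_module(x - eta * A.T @ (A @ x - y))
```

The key design point is that the image-formation operator `A` appears explicitly inside every layer, rather than being learned from data.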
Solving Linear Inverse Problems Using Gan Priors: An Algorithm with Provable Guarantees
  • Viraj Shah, C. Hegde. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018
This work proposes a projected gradient descent (PGD) algorithm for effective use of GAN priors for linear inverse problems, and provides theoretical guarantees on the rate of convergence of this algorithm.
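A minimal PGD sketch, under strong simplifying assumptions: the "generator" range is a fixed linear subspace so the projection is exact and cheap, whereas for a real GAN the projection step is itself an inner optimization over the latent code.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 10, 8, 2
A = rng.standard_normal((m, n)) / np.sqrt(m)
W, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal basis of the "generator" range
x_true = W @ rng.standard_normal(k)
y = A @ x_true                                    # noiseless linear measurements

def project(x):
    # projection onto range(W); for a real GAN this requires solving min_z ||G(z) - x||
    return W @ (W.T @ x)

eta = 1.0 / np.linalg.eigvalsh(W.T @ A.T @ A @ W).max()  # step from restricted curvature
x = np.zeros(n)
residuals = [np.linalg.norm(A @ x - y)]
for _ in range(1000):
    x = project(x - eta * A.T @ (A @ x - y))      # gradient step, then projection
    residuals.append(np.linalg.norm(A @ x - y))
```

With this step size the data-fit residual is nonincreasing, mirroring the kind of convergence guarantee the paper establishes for the nonlinear case.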
Neumann Networks for Inverse Problems in Imaging
An end-to-end, data-driven method for solving inverse problems, inspired by the Neumann series and called a Neumann network, which outperforms traditional inverse problem solution methods, model-free deep learning approaches, and state-of-the-art unrolled iterative methods on standard datasets.
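The Neumann-series intuition can be checked numerically in a simplified setting: here the learned regularizer is replaced by a plain identity (Tikhonov) term, an assumption for illustration only, and the truncated series is compared against a direct solve.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 6, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ rng.standard_normal(n)
lam = 0.5
B = A.T @ A + lam * np.eye(n)        # learned regularizer replaced by identity (Tikhonov)
eta = 1.0 / np.linalg.norm(B, 2)     # scaling so the Neumann series converges

# truncated Neumann series for B^{-1}: eta * sum_j (I - eta*B)^j
x_hat = np.zeros(n)
term = eta * (A.T @ y)
for _ in range(500):
    x_hat += term
    term = (np.eye(n) - eta * B) @ term

x_ref = np.linalg.solve(B, A.T @ y)  # direct solve for comparison
```

A Neumann network replaces the identity term with a trained network and treats the number of series terms as the network depth.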
Learning Proximal Operators: Using Denoising Networks for Regularizing Inverse Imaging Problems
This paper studies the possibility of replacing the proximal operator of the regularization used in many convex energy minimization algorithms by a denoising neural network, and obtains state-of-the-art reconstruction results. Expand
Auto-Encoding Variational Bayes
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
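The core device of this algorithm is the reparameterization trick: sampling the latent variable as a deterministic function of the parameters plus parameter-free noise, so Monte Carlo estimates of the variational bound become differentiable with respect to the encoder outputs. A minimal numerical sketch (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
# reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1)
mu, sigma = 0.5, np.exp(-1.0)        # encoder outputs (illustrative values)
eps = rng.standard_normal(100_000)   # noise samples, independent of the parameters
z = mu + sigma * eps                 # distributed as N(mu, sigma^2)
```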
Learning Deep CNN Denoiser Prior for Image Restoration
Experimental results demonstrate that the learned set of denoisers not only achieves promising Gaussian denoising results but can also be used as a prior to deliver good performance for various low-level vision applications.
A Bayesian Hyperprior Approach for Joint Image Denoising and Interpolation, With an Application to HDR Imaging
This paper proposes the use of a hyperprior to model image patches in order to stabilize the estimation procedure, and provides an application to high dynamic range imaging from a single image taken with a modified sensor, which shows the effectiveness of the proposed scheme.
CNN-Based Projected Gradient Descent for Consistent CT Image Reconstruction
A relaxed version of PGD is proposed wherein gradient descent enforces measurement consistency while a CNN recursively projects the solution closer to the space of desired reconstruction images; it shows an improvement over total variation-based regularization, dictionary learning, and a state-of-the-art deep learning-based direct reconstruction technique.
Scene-Adapted Plug-and-Play Algorithm with Guaranteed Convergence: Applications to Data Fusion in Imaging
This paper proposes a PnP approach where a scene-adapted prior is plugged into ADMM (alternating direction method of multipliers), and proves convergence of the resulting algorithm.
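The plug-and-play ADMM template can be sketched in a few lines. As an assumption for illustration, the plugged-in denoiser here is plain soft-thresholding (the proximal operator of an L1 penalty), so the scheme reduces to classical ADMM for the lasso; the cited work plugs in a scene-adapted learned denoiser instead.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 20, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ (rng.standard_normal(n) * (rng.random(n) < 0.3))   # sparse ground truth
lam, rho = 0.1, 1.0

def denoiser(v, t):
    # soft-thresholding (prox of lam*||.||_1) stands in for the plugged-in denoiser
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
AtA, Aty = A.T @ A, A.T @ y
for _ in range(500):
    x = np.linalg.solve(AtA + rho * np.eye(n), Aty + rho * (z - u))  # data-fit step
    z = denoiser(x + u, lam / rho)                                   # "denoising" step
    u += x - z                                                       # dual update
```

The convergence question the paper settles is exactly what happens when `denoiser` is no longer the prox of any explicit regularizer.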
Image Restoration using Autoencoding Priors
This work builds on the key observation that the output of an optimal denoising autoencoder is a local mean of the true data density, and uses the magnitude of this mean shift vector as the negative log likelihood of the natural image prior for image restoration.
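The mean-shift observation can be verified in a case where the optimal denoising autoencoder is known in closed form: for Gaussian data under Gaussian noise, the optimal DAE is linear, and its residual r(x) - x is proportional to the gradient of the smoothed log-density, so iterating small steps along it climbs toward the data mean. All values below are illustrative.

```python
import numpy as np

# closed-form optimal DAE for data ~ N(mu, s2*I) corrupted by noise ~ N(0, sig2*I):
# r(x) = E[x0 | x], so the residual r(x) - x equals sig2 * grad log p_sigma(x)
mu = np.array([2.0, -1.0])
s2, sig2 = 1.0, 0.25

def dae(x):
    return mu + (s2 / (s2 + sig2)) * (x - mu)

x = np.array([10.0, 10.0])
for _ in range(100):
    x = x + 0.5 * (dae(x) - x)   # ascend the smoothed log-prior via the mean shift
```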