Stochastic Image Denoising by Sampling from the Posterior Distribution

Bahjat Kawar, Gregory Vaksman, Michael Elad. 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW).
Image denoising is a well-known and well-studied problem, commonly targeting a minimization of the mean squared error (MSE) between the outcome and the original image. Unfortunately, especially for severe noise levels, such Minimum MSE (MMSE) solutions may lead to blurry output images. In this work we propose a novel stochastic denoising approach that produces viable and high perceptual quality results, while maintaining a small MSE. Our method employs Langevin dynamics that relies on a… 
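The Langevin dynamics mentioned in the abstract can be sketched on a toy problem where the posterior is known in closed form. The setup below (a 1-D Gaussian prior with an additive-Gaussian observation, all names and constants hypothetical) is not the paper's method; it only illustrates the sampling iteration `x ← x + (η/2)·∇log p(x|y) + √η·z` that such approaches build on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D setup (hypothetical, for illustration only):
# prior x ~ N(0, 1), observation y = x + n with n ~ N(0, sigma^2).
sigma = 0.5
y = 1.2  # a fixed noisy observation

def posterior_score(x):
    """Gradient of log p(x | y) = log p(y | x) + log p(x) + const."""
    return -x + (y - x) / sigma**2

# Unadjusted Langevin dynamics: x <- x + (eta/2) * score(x) + sqrt(eta) * z
eta = 1e-3
n_chains, n_steps = 2000, 5000
x = rng.standard_normal(n_chains)  # many parallel chains
for _ in range(n_steps):
    x = x + 0.5 * eta * posterior_score(x) \
          + np.sqrt(eta) * rng.standard_normal(n_chains)

# Closed-form posterior for this toy model: N(y/(1+sigma^2), sigma^2/(1+sigma^2))
post_mean = y / (1 + sigma**2)
post_var = sigma**2 / (1 + sigma**2)
print(x.mean(), post_mean)  # empirical vs. analytic posterior mean
print(x.var(), post_var)    # empirical vs. analytic posterior variance
```

In image-denoising applications the analytic `posterior_score` is replaced by a learned (or denoiser-derived) score of the image distribution, which is what makes the sampled outputs sharp rather than MMSE-blurry.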


Posterior Sampling for Image Restoration using Explicit Patch Priors

This paper shows how to combine explicit priors on patches of natural images in order to sample from the posterior probability of a full image given a degraded image, and proves that the algorithm generates correct samples from the distribution.

Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie

Detailed convergence guarantees are established for two algorithms, PnP-ULA (Plug & Play Unadjusted Langevin Algorithm) and PnP-SGD (Plug & Play Stochastic Gradient Descent), for Monte Carlo sampling and MMSE inference, which approximately target a decision-theoretically optimal Bayesian model that is well-posed.

Image Denoising: The Deep Learning Revolution and Beyond - A Survey Paper -

A broad view of the history of the field of image denoising and closely related topics in image processing is provided, to give a better context to recent discoveries, and to the influence of the AI revolution in this domain.

SNIPS: Solving Noisy Inverse Problems Stochastically

A novel stochastic algorithm dubbed SNIPS, which draws samples from the posterior distribution of any linear inverse problem, where the observation is assumed to be contaminated by additive white Gaussian noise, is introduced.

MR Image Denoising and Super-Resolution Using Regularized Reverse Diffusion

This work proposes a new denoising method based on score-based reverse diffusion sampling, which overcomes all the aforementioned drawbacks and establishes state-of-the-art performance, while having desirable properties which prior MMSE denoisers did not have.

Towards A Most Probable Recovery in Optical Imaging

Light is a complex-valued field. The intensity and phase of the field are affected by imaged objects. However, imaging sensors measure only real-valued non-negative intensities. This results in a…

Denoising Diffusion Restoration Models

DDRM takes advantage of a pre-trained denoising diffusion generative model for solving any linear inverse problem, and outperforms the current leading unsupervised methods on the diverse ImageNet dataset in reconstruction quality, perceptual quality, and runtime.

Neural Volume Super-Resolution

This work proposes a neural super-resolution network that operates directly on the volumetric representation of the scene, and validates the proposed method’s capability of super-resolving multi-view consistent views both quantitatively and qualitatively on a diverse set of unseen 3D scenes, demonstrating a significant advantage over existing approaches.

Reasons for the Superiority of Stochastic Estimators over Deterministic Ones: Robustness, Consistency and Perceptual Quality

This paper proves that any restoration algorithm that attains perfect perceptual quality and whose outputs are consistent with the input must be a posterior sampler, and is thus required to be stochastic, and illustrates that while deterministic restoration algorithms may attain high perceptual quality, this can be achieved only by filling up the space of all possible source images using an extremely sensitive mapping.

Boomerang: Local sampling on image manifolds using diffusion models

This work introduces Boomerang, a local image manifold sampling approach using the dynamics of diffusion models, and provides a framework for constructing privacy-preserving datasets having controllable degrees of anonymity.



A Nonlocal Bayesian Image Denoising Algorithm

A simple patch-based Bayesian method is proposed, which on the one hand keeps most interesting features of former methods and on the other hand unites the transform thresholding method and a Markovian Bayesian estimation.

Stochastic image denoising based on Markov-chain Monte Carlo sampling

Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising

This paper investigates the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising, and uses residual learning and batch normalization to speed up the training process as well as boost the denoising performance.

Is Denoising Dead?

This work estimates a lower bound on the mean squared error of the denoised result and compares the performance of current state-of-the-art denoising methods with this bound, showing that despite the phenomenal recent progress in the quality of denoising algorithms, some room for improvement still remains for a wide class of general images at certain signal-to-noise levels.

Solving Linear Inverse Problems Using the Prior Implicit in a Denoiser

This work relies on a little-known statistical result due to Miyasawa (1961), who showed that the least-squares solution for removing additive Gaussian noise can be written directly in terms of the gradient of the log of the noisy signal density, to develop a stochastic coarse-to-fine gradient ascent procedure for drawing high-probability samples from the implicit prior embedded within a CNN.
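The Miyasawa (1961) result that this summary refers to (also known as Tweedie's formula) states that for additive Gaussian noise y = x + n with n ~ N(0, σ²), the MMSE estimate is E[x|y] = y + σ²·∇y log p(y). The sketch below checks the identity numerically on a toy Gaussian prior where both sides are analytic; the setup is hypothetical, whereas the paper itself applies the identity to the implicit prior of a trained CNN denoiser.

```python
import numpy as np

# Miyasawa / Tweedie identity for y = x + n, n ~ N(0, sigma^2):
#   E[x | y] = y + sigma^2 * d/dy log p(y)
# Toy check with prior x ~ N(0, 1), so the marginal is p(y) = N(0, 1 + sigma^2).
sigma = 0.5

def log_marginal(y):
    # log p(y) up to an additive constant (constant drops out of the gradient)
    return -0.5 * y**2 / (1 + sigma**2)

def score(y, h=1e-5):
    # central-difference estimate of d/dy log p(y)
    return (log_marginal(y + h) - log_marginal(y - h)) / (2 * h)

y = 1.2
tweedie = y + sigma**2 * score(y)
posterior_mean = y / (1 + sigma**2)  # known MMSE estimate for this Gaussian prior
print(tweedie, posterior_mean)       # the two should agree
```

Because the identity expresses the denoiser purely through the gradient of the log noisy-signal density, a trained denoiser can stand in for that gradient, which is what enables the paper's stochastic gradient-ascent sampling from the implicit prior.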

The Little Engine That Could: Regularization by Denoising (RED)

This paper provides an alternative, more powerful, and more flexible framework for achieving Regularization by Denoising (RED): using the denoising engine in defining the regularization…

Patch Complexity, Finite Pixel Correlations and Optimal Denoising

A law of diminishing return is presented, namely that with increasing patch size, rare patches not only require a much larger dataset, but also gain little from it, and this result suggests novel adaptive variable-sized patch schemes for denoising.

Towards theoretically-founded learning-based denoising

  • Wenda Zhou, S. Jalali
  • Computer Science
    2019 IEEE International Symposium on Information Theory (ISIT)
  • 2019
Both for memoryless sources and for structured first-order Markov sources, it is shown that, asymptotically, as σ² (the noise variance) converges to zero, E[Xⁿ|Yⁿ] converges to the information dimension of the source, and this limit is known to be optimal.

Denoising Diffusion Probabilistic Models

High quality image synthesis results are presented using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics, which naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding.

The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

A new dataset of human perceptual similarity judgments is introduced and it is found that deep features outperform all previous metrics by large margins on this dataset, and suggests that perceptual similarity is an emergent property shared across deep visual representations.