Diffusion Posterior Sampling for General Noisy Inverse Problems

@article{Chung2022DiffusionPS,
  title={Diffusion Posterior Sampling for General Noisy Inverse Problems},
  author={Hyungjin Chung and Jeongsol Kim and Michael T. McCann and Marc Louis Klasky and Jong Chul Ye},
  journal={arXiv preprint arXiv:2209.14687},
  year={2022}
}
Diffusion models have been recently studied as powerful generative inverse problem solvers, owing to their high quality reconstructions and the ease of combining existing iterative solvers. However, most works focus on solving simple linear inverse problems in noiseless settings, which significantly under-represents the complexity of real-world problems. In this work, we extend diffusion solvers to efficiently handle general noisy (non)linear inverse problems via approximation of the posterior… 
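The posterior-approximation step described in the abstract can be illustrated on a toy problem where the score of the noised marginal is analytic. Everything below is an illustrative assumption, not the paper's actual setup: a one-dimensional standard-Gaussian prior (so the score is simply -x_t), a scalar forward operator A, and a hand-picked guidance step size zeta.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy VP diffusion schedule; with a standard-Gaussian prior the noised
# marginals stay N(0, 1), so the score is analytic: s(x_t) = -x_t.
T = 200
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abar = np.cumprod(alphas)

# Assumed toy measurement model: y = A * x0 + sigma * n (scalar, linear).
A, sigma, x0_true = 0.8, 0.1, 1.5
y = A * x0_true + sigma * rng.standard_normal()

def dps_sample(zeta=0.1):
    """One DPS-style trajectory: ancestral DDPM step plus likelihood guidance."""
    x = rng.standard_normal()
    for t in range(T - 1, -1, -1):
        score = -x                               # analytic score of the marginal
        x0_hat = np.sqrt(abar[t]) * x            # Tweedie estimate of x0
        # Unconditional ancestral (DDPM) update
        mean = (x + betas[t] * score) / np.sqrt(alphas[t])
        x = mean + (np.sqrt(betas[t]) * rng.standard_normal() if t > 0 else 0.0)
        # Guidance: gradient step on the residual ||y - A * x0_hat(x_t)||^2
        x -= zeta * 2.0 * A * np.sqrt(abar[t]) * (A * x0_hat - y)
    return x
```

In the actual method, x0_hat comes from Tweedie's formula applied to a learned score network and the guidance gradient is obtained by backpropagating through it; this sketch only shows the shape of the per-step update.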

Diffusion Model Based Posterior Sampling for Noisy Linear Inverse Problems

An unsupervised, general-purpose sampling approach called diffusion model based posterior sampling (DMPS) reconstructs the unknown signal from noisy linear measurements, achieving highly competitive or better performance on various tasks while being 3 times faster than the leading competitor.

Parallel Diffusion Models of Operator and Image for Blind Inverse Problems

Diffusion model-based inverse problem solvers have demonstrated state-of-the-art performance in cases where the forward operator is known (i.e. non-blind). However, the applicability of the method to …

DOLCE: A Model-Based Probabilistic Diffusion Framework for Limited-Angle CT Reconstruction

DOLCE is presented, a new deep model-based framework for limited-angle CT (LACT) that uses a conditional diffusion model as an image prior and achieves SOTA performance on drastically different types of images.

Thompson Sampling with Diffusion Generative Prior

This work trains a diffusion model that learns the underlying task distribution, combines Thompson sampling with the learned prior to deal with new tasks at test time, and proposes a posterior sampling algorithm designed to carefully balance the learned prior against the noisy observations that come from the learner's interaction with the environment.

Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model

The approach reveals a promising new path toward solving IR tasks in a zero-shot manner, as data consistency is analytically guaranteed and the realness of the results is confirmed.
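The analytic data-consistency guarantee comes from a range/null-space decomposition: the range-space component of the restoration is pinned to the pseudo-inverse solution, and the generative model only fills in the null-space component. A minimal noiseless sketch, using an assumed random linear operator rather than the paper's degradation models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined toy degradation y = A x (noiseless), 4 measurements of 8 unknowns.
A = rng.standard_normal((4, 8))
x_true = rng.standard_normal(8)
y = A @ x_true

A_pinv = np.linalg.pinv(A)

def null_space_refine(x_bar):
    """Range/null-space decomposition: the range-space part is fixed to the
    pseudo-inverse solution A^+ y, while the null-space part (I - A^+ A) x_bar
    is free for the generative model to fill in."""
    return A_pinv @ y + (np.eye(8) - A_pinv @ A) @ x_bar

x_hat = null_space_refine(rng.standard_normal(8))
```

Whatever x_bar the diffusion model proposes, the output satisfies A x_hat = y exactly, which is the "analytically guaranteed" consistency.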

A Survey on Generative Diffusion Model

A diverse range of advanced techniques to speed up diffusion models is presented: training schedules, training-free sampling, mixed modeling, and score and diffusion unification.

SINE: SINgle Image Editing with Text-to-Image Diffusion Models

A novel model-based guidance built upon the classifier-free guidance is proposed so that the knowledge from the model trained on a single image can be distilled into the pre-trained diffusion model, enabling content creation even with one given image.

Sequential Neural Score Estimation: Likelihood-Free Inference with Conditional Score Based Diffusion Models

This work introduces Sequential Neural Posterior Score Estimation (SNPSE) and Sequential Neural Likelihood Score Estimation (SNLSE), two new score-based methods for Bayesian inference in simulator-based models that leverage conditional score-based diffusion models to generate samples from the posterior distribution of interest.

DiffTalk: Crafting Diffusion Models for Generalized Talking Head Synthesis

Instead of employing audio signals as the single driving factor, DiffTalk incorporates reference face images and landmarks as conditions for personality-aware generalized synthesis, efficiently synthesizing high-fidelity audio-driven talking-head videos for generalized novel identities.

Image Restoration with Mean-Reverting Stochastic Differential Equations

This paper presents a stochastic differential equation (SDE) approach to general-purpose image restoration that transforms a high-quality image into its degraded counterpart as a mean state with Gaussian noise, and proposes a maximum-likelihood objective to learn an optimal reverse trajectory, which stabilizes training and improves restoration results.
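The forward process described here is an Ornstein-Uhlenbeck-type SDE, dx = theta * (mu - x) dt + sigma dw, whose mean reverts from the clean image toward the degraded one mu. A minimal Euler-Maruyama sketch with illustrative parameters (not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_revert_forward(x0, mu, theta=2.0, sigma=0.5, T=5.0, n=1000):
    """Euler-Maruyama for dx = theta*(mu - x) dt + sigma dW: the state
    decays from the clean value x0 toward the degraded mean mu,
    accumulating Gaussian noise along the way."""
    dt = T / n
    x = np.full_like(mu, x0, dtype=float)
    for _ in range(n):
        x = x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x
```

At stationarity the state is distributed N(mu, sigma^2 / (2 * theta)), i.e. the degraded image plus Gaussian noise, which is the terminal state the learned reverse trajectory starts from.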

References

Showing 1-10 of 42 references.

Improving Diffusion Models for Inverse Problems using Manifold Constraints

This work proposes an additional correction term inspired by the manifold constraint, which can be used synergistically with the previous solvers to make the iterations close to the manifold, and boosts the performance by a surprisingly large margin.

Denoising Diffusion Restoration Models

DDRM takes advantage of a pre-trained denoising diffusion generative model for solving any linear inverse problem, and outperforms the current leading unsupervised methods on the diverse ImageNet dataset in reconstruction quality, perceptual quality, and runtime.

Come-Closer-Diffuse-Faster: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction

This work shows that starting from Gaussian noise is unnecessary, and proposes a new sampling strategy, dubbed Come-Closer-Diffuse-Faster (CCDF), which can achieve state-of-the-art reconstruction performance at significantly reduced sampling steps.
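The initialization amounts to forward-diffusing a cheap initial estimate to an intermediate time t0 and running the reverse chain only from there, so only t0 rather than T reverse steps are needed. A sketch of the forward jump (schedule and t0 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)
abar = np.cumprod(1.0 - betas)

def ccdf_init(x_init, t0):
    """Noise an initial estimate directly to time t0 < T; reverse sampling
    then starts at t0 instead of T, cutting the number of reverse steps."""
    eps = rng.standard_normal(np.shape(x_init))
    return np.sqrt(abar[t0]) * np.asarray(x_init) + np.sqrt(1.0 - abar[t0]) * eps
```

The better the initial estimate, the smaller t0 can be chosen, which is where the speedup comes from.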

SNIPS: Solving Noisy Inverse Problems Stochastically

A novel stochastic algorithm dubbed SNIPS is introduced, which draws samples from the posterior distribution of any linear inverse problem in which the observation is contaminated by additive white Gaussian noise.

DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps

This work proposes DPM-Solver, a fast dedicated high-order solver for diffusion ODEs with the convergence order guarantee, suitable for both discrete-time and continuous-time DPMs without any further training.

Pseudo Numerical Methods for Diffusion Models on Manifolds

A fresh perspective that DDPMs should be treated as solving differential equations on manifolds is provided and pseudo numerical methods for diffusion models (PNDMs) are proposed, finding that the pseudo linear multi-step method is the best in most situations.

Fast Image Deconvolution using Hyper-Laplacian Priors

This paper describes a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors, able to deconvolve a 1-megapixel image in about 3 seconds while achieving quality comparable to existing methods that take ~20 minutes.
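The closed-form building block behind such speed is frequency-domain deconvolution: in a half-quadratic splitting scheme the quadratic x-subproblem is solved exactly with FFTs, while the hyper-Laplacian prior enters through a cheap per-pixel shrinkage step (omitted here). A 1-D sketch of the FFT subproblem, with a plain L2 penalty standing in for the prior term:

```python
import numpy as np

def wiener_deconv(y, k, lam=1e-3):
    """Closed-form frequency-domain deconvolution with an L2 penalty:
    X = conj(K) Y / (|K|^2 + lam), the same kind of FFT solve used for
    the quadratic x-subproblem in half-quadratic splitting."""
    K = np.fft.fft(k, n=len(y))   # zero-padded kernel spectrum
    Y = np.fft.fft(y)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)
    return np.real(np.fft.ifft(X))
```

Because every step is an FFT or an elementwise operation, the cost per iteration is O(n log n), which is what makes megapixel-scale deconvolution fast.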

Denoising Diffusion Implicit Models

Denoising diffusion implicit models (DDIMs) are presented, a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs that can produce high quality samples faster and perform semantically meaningful image interpolation directly in the latent space.
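The deterministic (eta = 0) DDIM update can be sketched on a toy problem with a standard-Gaussian prior, where the noise prediction is analytic; the schedule and stride below are illustrative assumptions:

```python
import numpy as np

T = 200
betas = np.linspace(1e-4, 0.02, T)
abar = np.cumprod(1.0 - betas)

def ddim_sample(x_T, stride=20):
    """Deterministic DDIM (eta = 0). With a standard-Gaussian prior the
    noised marginals stay N(0, 1), so eps_hat = sqrt(1 - abar_t) * x_t is
    exact. Each step jumps to subsequence time s via the closed-form update
    x_s = sqrt(abar_s) * x0_hat + sqrt(1 - abar_s) * eps_hat."""
    ts = list(range(T - 1, -1, -stride))
    x = np.asarray(x_T, dtype=float)
    for t, s in zip(ts[:-1], ts[1:]):
        x0_hat = np.sqrt(abar[t]) * x          # Tweedie estimate (toy prior)
        eps_hat = np.sqrt(1.0 - abar[t]) * x   # analytic noise prediction
        x = np.sqrt(abar[s]) * x0_hat + np.sqrt(1.0 - abar[s]) * eps_hat
    return np.sqrt(abar[ts[-1]]) * x           # final Tweedie estimate
```

The stride skips timesteps of the original chain, which is how DDIM trades a few extra approximation errors for a large reduction in sampling steps; because the map is deterministic, the same x_T always yields the same sample, enabling latent-space interpolation.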

Denoising Diffusion Probabilistic Models

High quality image synthesis results are presented using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics, which naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding.

ILVR: Conditioning Method for Denoising Diffusion Probabilistic Models

This work proposes Iterative Latent Variable Refinement (ILVR), a method to guide the generative process in DDPM to generate high-quality images based on a given reference image, which allows adaptation of a single DDPM without any additional learning in various image generation tasks.
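The ILVR conditioning step replaces the low-frequency content of each intermediate sample with that of the noised reference image. A sketch with an idempotent block-mean filter standing in for the paper's downsample-upsample operator phi:

```python
import numpy as np

def phi(x, block=4):
    """Low-pass projection: replace each length-`block` segment by its mean
    (an idempotent stand-in for ILVR's downsample-upsample filter)."""
    m = x.reshape(-1, block).mean(axis=1, keepdims=True)
    return np.repeat(m, block, axis=1).reshape(-1)

def ilvr_refine(x_tm1, y_tm1):
    """ILVR conditioning step: keep the sample's high-frequency content but
    overwrite its low frequencies with those of the noised reference."""
    return x_tm1 - phi(x_tm1) + phi(y_tm1)
```

Because phi is a projection, the refined sample matches the reference exactly in the low-pass subspace while the diffusion model remains free everywhere else, so no additional training is needed.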