Solving 3D Inverse Problems using Pre-trained 2D Diffusion Models

Hyungjin Chung, Dohoon Ryu, Michael T. McCann, Marc Louis Klasky, and J. C. Ye
Diffusion models have emerged as the new state-of-the-art generative models, producing high-quality samples with intriguing properties such as mode coverage and high flexibility. They have also been shown to be effective inverse problem solvers, acting as the prior of the distribution, while the information of the forward model can be granted at the sampling stage. Nonetheless, as the generative process remains in the same high-dimensional (i.e. identical to data dimension) space, the models have not…
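As a toy illustration of how forward-model information can be granted at the sampling stage, a gradient step on the measurement misfit can be interleaved with the reverse-diffusion updates. The sketch below assumes a simple linear forward model `A`; the function name, step size, and update form are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def data_consistency_step(x, A, y, step_size=0.05):
    """One gradient step on the measurement misfit ||A x - y||^2.

    In diffusion-based inverse problem solvers, a step like this is
    interleaved with the (unconditional) denoising updates so that
    samples stay consistent with the observed measurements y.
    """
    residual = A @ x - y  # mismatch between prediction and measurements
    return x - step_size * (A.T @ residual)
```

After each such step the measurement residual shrinks, while the denoising updates keep the iterate plausible under the learned prior.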



Score-based diffusion models for accelerated MRI

Improving Diffusion Models for Inverse Problems using Manifold Constraints

This work proposes an additional correction term inspired by the manifold constraint, which can be used synergistically with previous solvers to keep the iterates close to the data manifold, and boosts performance by a surprisingly large margin.

A Diffusion Model Predicts 3D Shapes from 2D Microscopy Images

It is demonstrated that diffusion models can be applied to inverse problems in 3D, and that they learn to reconstruct 3D shapes with realistic morphological features from 2D microscopy images.

Solving Inverse Problems in Medical Imaging with Score-Based Generative Models

A score-based generative model trained on medical images to capture their prior distribution is introduced, along with a sampling method that reconstructs an image consistent with both the prior and the observed measurements.

DuDoRNet: Learning a Dual-Domain Recurrent Network for Fast MRI Reconstruction With Deep T1 Prior

  • Bo Zhou and S. K. Zhou
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020
A Dual-Domain Recurrent Network (DuDoRNet) with an embedded deep T1 prior simultaneously recovers k-space and images to accelerate MRI acquisition under a long imaging protocol, and is customized for dual-domain restoration from undersampled MRI data.

Brain Imaging Generation with Latent Diffusion Models

This study explores using Latent Diffusion Models to generate synthetic images from high-resolution 3D brain images, and found that the models created realistic data and that the conditioning variables could be used to control data generation effectively.

Image Prediction for Limited-angle Tomography via Deep Learning with Convolutional Neural Network

A data-driven learning-based method is proposed based on a deep convolutional neural network that provides a simple and efficient approach for improving image quality of the reconstruction results from limited projection data.

ILVR: Conditioning Method for Denoising Diffusion Probabilistic Models

This work proposes Iterative Latent Variable Refinement (ILVR), a method to guide the generative process in DDPM to generate high-quality images based on a given reference image, which allows a single DDPM to be adapted to various image generation tasks without any additional learning.
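ILVR's core refinement can be sketched in a few lines: at each denoising step, the low-frequency content of the current sample is replaced by that of the correspondingly noised reference image. The `low_pass` operator below (average-pool then nearest-neighbor upsample) stands in for the paper's linear down/up-sampling filter; the array shapes and scale factor are illustrative assumptions:

```python
import numpy as np

def low_pass(x, factor=4):
    """Crude low-pass filter: average-pool by `factor`, then upsample back.
    A stand-in for ILVR's linear down/up-sampling operator."""
    h, w = x.shape
    pooled = x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(pooled, factor, axis=0), factor, axis=1)

def ilvr_refine(x_t, y_t, factor=4):
    """One ILVR conditioning step: keep the high-frequency content of the
    current sample x_t, but take the low-frequency content from the noised
    reference y_t."""
    return x_t - low_pass(x_t, factor) + low_pass(y_t, factor)
```

Because the operator is linear and idempotent, the refined sample matches the reference exactly in the low-frequency band while the diffusion model remains free to fill in the high-frequency detail.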

3D Shape Generation and Completion through Point-Voxel Diffusion

Point-Voxel Diffusion is a unified, probabilistic formulation for unconditional shape generation and conditional, multi-modal shape completion that marries denoising diffusion models with the hybrid point-voxel representation of 3D shapes.

High-Resolution Image Synthesis with Latent Diffusion Models

These latent diffusion models achieve new state-of-the-art scores for image inpainting and class-conditional image synthesis, and highly competitive performance on various tasks, including unconditional image generation, text-to-image synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.