DiamondGAN: Unified Multi-Modal Generative Adversarial Networks for MRI Sequences Synthesis

@article{Li2019DiamondGANUM,
  title={DiamondGAN: Unified Multi-Modal Generative Adversarial Networks for MRI Sequences Synthesis},
  author={Hongwei Li and Johannes C. Paetzold and Anjany Kumar Sekuboyina and Florian Kofler and Jianguo Zhang and Jan Stefan Kirschke and Benedikt Wiestler and Bjoern H. Menze},
  journal={ArXiv},
  year={2019},
  volume={abs/1904.12894}
}
Synthesizing MR imaging sequences is highly relevant in clinical practice, as single sequences are often missing or are of poor quality (e.g. due to motion). Naturally, the idea arises that a target modality would benefit from multi-modal input, as proprietary information of individual modalities can be synergistic. However, existing methods fail to scale up to multiple non-aligned imaging modalities, facing common drawbacks of complex imaging sequences. We propose a novel, scalable and multi… 
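The truncated abstract describes the standard conditional-GAN recipe for cross-modality synthesis: stack the available source sequences as input channels and train a generator against a discriminator, with a pixel-wise term tying the output to the real target. A minimal PyTorch sketch of that objective (the single-layer networks are stand-ins, not the DiamondGAN architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Single-layer stand-ins; real models would be a U-Net-style generator
# and a PatchGAN-style discriminator.
generator = nn.Conv2d(2, 1, kernel_size=3, padding=1)      # 2 source sequences -> 1 target
discriminator = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # judges (sources, target) pairs

def generator_loss(sources, target, lambda_l1=100.0):
    """Conditional-GAN generator objective: adversarial realism plus an L1
    term tying the synthetic sequence to the ground-truth target."""
    fake = generator(sources)
    logits = discriminator(torch.cat([sources, fake], dim=1))
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lambda_l1 * F.l1_loss(fake, target)

sources = torch.randn(4, 2, 64, 64)  # e.g. T1 + FLAIR slices stacked as channels
target = torch.randn(4, 1, 64, 64)   # e.g. the missing T2 slice
loss = generator_loss(sources, target)
```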
Multimodal MRI Synthesis Using Unified Generative Adversarial Networks.
TLDR
The results demonstrate that the proposed method is able to accurately synthesize multimodal MR images from a single MR image, and that it shows high accuracy and robustness when any MRI modality available in the database is used as input.
Prior-Guided Image Reconstruction for Accelerated Multi-Contrast MRI via Generative Adversarial Networks
TLDR
A new approach for synergistic recovery of undersampled multi-contrast acquisitions based on conditional generative adversarial networks is proposed, which mitigates the limitations of pure learning-based reconstruction or synthesis by utilizing three priors: a shared high-frequency prior available in the source contrast to preserve high-spatial-frequency details, a low-frequency prior available in the undersampled target contrast to prevent feature leakage/loss, and a perceptual prior to improve recovery of high-level features.
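The low-frequency prior from the undersampled target contrast is typically enforced as a k-space data-consistency step: wherever a frequency was actually acquired, the measured value overwrites the network's estimate. A sketch of that step, assuming a Cartesian sampling mask (illustrative, not necessarily the paper's exact formulation):

```python
import torch

def data_consistency(recon, measured_k, mask):
    """Re-impose acquired k-space samples (the low-frequency prior) on a
    network reconstruction; unsampled frequencies keep the network's values."""
    k = torch.fft.fftshift(torch.fft.fft2(recon), dim=(-2, -1))
    k = mask * measured_k + (1 - mask) * k
    return torch.fft.ifft2(torch.fft.ifftshift(k, dim=(-2, -1))).real

recon = torch.randn(1, 1, 64, 64)   # network output (image domain)
mask = torch.zeros(64, 64)
mask[:, 24:40] = 1.0                # central (low-frequency) k-space lines
measured_k = mask * torch.fft.fftshift(
    torch.fft.fft2(torch.randn(1, 1, 64, 64)), dim=(-2, -1))
out = data_consistency(recon, measured_k, mask)
```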
One Model to Synthesize Them All: Multi-contrast Multi-scale Transformer for Missing Data Imputation
TLDR
A multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize those that are missing, and outperforms the state-of-the-art methods quantitatively and qualitatively.
METGAN: Generative Tumour Inpainting and Modality Synthesis in Light Sheet Microscopy
TLDR
This paper introduces a novel generative method which leverages real anatomical information to generate realistic image-label pairs of tumours, and trains segmentation networks on a dataset augmented with the synthetic data, substantially improving the segmentation over the baseline.
Micro-CT Synthesis and Inner Ear Super Resolution via Bayesian Generative Adversarial Networks
TLDR
This paper addresses the super-resolution problem in a real-world scenario using unpaired data and synthesizes linearly eight times higher resolved Micro-CT images of the temporal bone structure, in which the inner ear is embedded, providing structural information of the temporal bone.
DS3-Net: Difficulty-perceived Common-to-T1ce Semi-Supervised Multimodal MRI Synthesis Network
TLDR
A Difficulty-perceived common-to-T1ce Semi-Supervised multimodal MRI Synthesis network (DS3-Net), involving both paired and unpaired data together with dual-level knowledge distillation, that outperforms its supervised counterpart in each respect.
A Survey of Cross-Modality Brain Image Synthesis
TLDR
This paper provides an in-depth analysis of how cross-modality brain image synthesis can improve the performance of different downstream tasks and highlights several open challenges.
AutoSyncoder: An Adversarial AutoEncoder Framework for Multimodal MRI Synthesis
TLDR
This work proposes an efficient, multiresolution encoder-decoder network, trained like an autoencoder, that can predict missing inputs at the output; this can help avoid the acquisition of redundant information, thereby saving time.
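The "predict missing inputs" idea can be written as a masked autoencoder over modality channels: all sequences enter as channels, missing ones are zeroed, and the reconstruction loss still covers every channel. A toy sketch (the two-layer net stands in for the paper's multiresolution encoder-decoder):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for the multiresolution encoder-decoder: all four sequences
# enter as channels and all four are reconstructed at the output.
net = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 4, 3, padding=1))

x = torch.randn(2, 4, 64, 64)               # T1, T2, FLAIR, T1ce as channels
present = torch.tensor([1., 1., 0., 1.])    # pretend FLAIR was not acquired
recon = net(x * present.view(1, 4, 1, 1))   # missing channel zeroed at input
loss = F.l1_loss(recon, x)                  # loss still covers the missing channel
```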

References

Missing MRI Pulse Sequence Synthesis Using Multi-Modal Generative Adversarial Network
TLDR
A variant of the generative adversarial network capable of leveraging redundant information contained within multiple available sequences in order to generate one or more missing sequences for a patient scan is proposed and compared with competing unimodal and multi-modal methods.
Image Synthesis in Multi-Contrast MRI With Conditional Generative Adversarial Networks
TLDR
The proposed approach preserves intermediate-to-high frequency details via an adversarial loss, and it offers enhanced synthesis performance via pixel-wise and perceptual losses for registered multi-contrast images and a cycle-consistency loss for unregistered images.
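The perceptual loss mentioned here is commonly implemented as a distance between feature maps of a frozen pretrained classifier; a sketch using torchvision's VGG16 up to relu2_2 (the layer choice is illustrative, not necessarily the paper's exact setup):

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG16 features up to relu2_2 (weights=None keeps the sketch
# self-contained; in practice pretrained ImageNet weights are used).
feat = vgg16(weights=None).features[:9].eval()
for p in feat.parameters():
    p.requires_grad_(False)

def perceptual_loss(fake, real):
    """Compare images in feature space rather than pixel space; single-channel
    MR slices are repeated to 3 channels to match VGG's expected input."""
    return F.l1_loss(feat(fake.repeat(1, 3, 1, 1)), feat(real.repeat(1, 3, 1, 1)))

loss = perceptual_loss(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
```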
Generative Adversarial Training for MRA Image Synthesis Using Multi-Contrast MRI
TLDR
This paper presents a generative adversarial network (GAN) based technique to generate MRA from T1-weighted and T2-weighted MRI images, for the first time to the authors' knowledge, and designs a loss term dedicated to a faithful reproduction of vascularities.
Generative Adversarial Networks for Image-to-Image Translation on Multi-Contrast MR Images - A Comparison of CycleGAN and UNIT
TLDR
It is shown that the implemented GAN models can synthesize visually realistic MR images (incorrectly labeled as real by a human) and it is also shown that models producing more visually realistic synthetic images not necessarily have better quantitative error measurements, when compared to ground truth data.
Deep MR to CT Synthesis Using Unpaired Data
TLDR
This work proposes to train a generative adversarial network (GAN) with unpaired MR and CT images to synthesize CT images that closely approximate reference CT images, and was able to outperform a GAN model trained with paired MR and CT images.
Cross-Modality Synthesis from CT to PET using FCN and GAN Networks for Improved Automated Lesion Detection
StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation
TLDR
A unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network, which leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain.
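StarGAN's single-network trick is to condition the generator on a target-domain label that is tiled spatially and concatenated with the input image; a minimal sketch (the conv layer is a stand-in for the real generator):

```python
import torch
import torch.nn as nn

n_domains = 4  # e.g. T1, T2, FLAIR, T1ce treated as "domains"
G = nn.Conv2d(1 + n_domains, 1, 3, padding=1)  # stand-in for StarGAN's generator

def translate(x, domain_idx):
    """Concatenate a one-hot target-domain label, tiled over the spatial
    grid, with the input image; one generator then serves all domains."""
    b, _, h, w = x.shape
    label = torch.zeros(b, n_domains, h, w)
    label[:, domain_idx] = 1.0
    return G(torch.cat([x, label], dim=1))

out = translate(torch.randn(2, 1, 64, 64), domain_idx=2)  # translate to domain 2
```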
Learning from Simulated and Unsupervised Images through Adversarial Training
TLDR
This work develops a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors, and makes several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training.
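The key SimGAN modification for preserving annotations is a self-regularization term that keeps the refined image close to the synthetic input, alongside a local (patch-wise) adversarial loss; a sketch with stand-in networks:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

R = nn.Conv2d(1, 1, 3, padding=1)  # stand-in refiner network
D = nn.Conv2d(1, 1, 3, padding=1)  # stand-in local (per-patch) discriminator

def refiner_loss(synthetic, lam=0.1):
    """Adversarial realism plus self-regularization: the refined image must
    fool the discriminator while staying close to the annotated synthetic input."""
    refined = R(synthetic)
    logits = D(refined)  # one realism score per spatial patch
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lam * F.l1_loss(refined, synthetic)

loss = refiner_loss(torch.randn(2, 1, 64, 64))
```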
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
The architecture introduced in this paper learns a mapping function G : X → Y using an adversarial loss such that G(X) cannot be distinguished from Y, where X and Y are images belonging to two different domains.
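Unpaired training is made possible by the cycle-consistency constraint: composing the two mappings should approximately recover the input. A sketch with stand-in networks for G : X → Y and the inverse mapping:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for the two mapping functions G: X -> Y and F: Y -> X
# (named Fmap to avoid clashing with torch.nn.functional).
G = nn.Conv2d(1, 1, 3, padding=1)
Fmap = nn.Conv2d(1, 1, 3, padding=1)

def cycle_loss(x, y):
    """Cycle consistency: translating to the other domain and back
    should return the original image, which constrains unpaired training."""
    return F.l1_loss(Fmap(G(x)), x) + F.l1_loss(G(Fmap(y)), y)

loss = cycle_loss(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
```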