Bridging the Gap Between Paired and Unpaired Medical Image Translation

Pauliina Paavilainen, Saad Ullah Akram, Juho Kannala
Medical image translation has the potential to reduce imaging workload by removing the need to capture some sequences, and to reduce the annotation burden for developing machine learning methods. GANs have been used successfully to translate images from one domain to another, such as MR to CT. At present, paired data (registered MR and CT images) or extra supervision (e.g., segmentation masks) is needed to learn good translation models. Registering multiple modalities or annotating…
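The paired-vs-unpaired distinction above can be made concrete with a toy sketch. A paired loss compares the translated image against its registered counterpart, while an unpaired, CycleGAN-style cycle-consistency objective only requires that translating MR→CT→MR reproduces the input. The "generators" below are placeholder linear intensity maps, not the networks used in the paper:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error, the L1 term used in both losses."""
    return float(np.mean(np.abs(a - b)))

def paired_loss(G, mr, ct):
    """Paired training: compare G(mr) directly against the registered ct."""
    return l1(G(mr), ct)

def cycle_consistency_loss(G, F, mr, ct):
    """Unpaired training (CycleGAN-style): no registered pairs exist, so
    require that translating there and back reproduces each input."""
    return l1(F(G(mr)), mr) + l1(G(F(ct)), ct)

# Toy "generators": linear intensity maps standing in for real networks.
G = lambda x: 2.0 * x + 1.0    # MR -> CT
F = lambda y: (y - 1.0) / 2.0  # CT -> MR (exact inverse of G)

rng = np.random.default_rng(0)
mr = rng.random((8, 8))
ct = G(mr)  # a perfectly registered pair, for the paired loss

print(paired_loss(G, mr, ct))                # ~0: prediction matches the pair
print(cycle_consistency_loss(G, F, mr, ct))  # ~0: G and F invert each other
```

Both losses drive the generators toward the same mapping, but only the paired loss needs registered images; the cycle loss is what lets unpaired methods train at all, at the cost of weaker supervision.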

Optical to Planar X-ray Mouse Image Mapping in Preclinical Nuclear Medicine Using Conditional Adversarial Networks
In the current work, a pix2pix conditional generative adversarial network has been evaluated as a potential solution for generating adequately accurate synthesized morphological X-ray images.


Deep CT to MR Synthesis Using Paired and Unpaired Data
Qualitative and quantitative comparisons against independent paired and unpaired training methods demonstrated the superiority of the proposed approach, which alleviates the rigid registration required for paired training and overcomes the context-misalignment problem of unpaired training.
Deep MR to CT Synthesis Using Unpaired Data
This work proposes to train a generative adversarial network (GAN) with unpaired MR and CT images to synthesize CT images that closely approximate reference CT images; the model was able to outperform a GAN trained with paired MR and CT images.
Unsupervised Medical Image Translation Using Cycle-MedGAN
A new unsupervised translation framework, titled Cycle-MedGAN, is proposed, which utilizes new non-adversarial cycle losses that direct the framework to minimize textural and perceptual discrepancies in the translated images.
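As a rough illustration of such non-adversarial cycle losses (a sketch, not Cycle-MedGAN's implementation: the paper uses feature maps from a pretrained network, replaced here by simple image gradients), a perceptual loss can compare feature maps of an image and its cycle-reconstruction, and a textural loss can compare their Gram matrices:

```python
import numpy as np

def features(img):
    """Stand-in feature extractor (a pretrained network in Cycle-MedGAN);
    here: vertical and horizontal intensity gradients, shape (2, H-1, W-1)."""
    return np.stack([np.diff(img, axis=0)[:, :-1],
                     np.diff(img, axis=1)[:-1, :]])

def perceptual_cycle_loss(x, x_cycled):
    """Perceptual discrepancy: L1 distance between feature maps."""
    return float(np.mean(np.abs(features(x) - features(x_cycled))))

def gram(f):
    """Gram matrix of a (channels, H, W) feature map."""
    c = f.reshape(f.shape[0], -1)
    return c @ c.T / c.shape[1]

def style_cycle_loss(x, x_cycled):
    """Textural discrepancy: L1 distance between Gram matrices,
    capturing feature statistics rather than exact spatial layout."""
    return float(np.mean(np.abs(gram(features(x)) - gram(features(x_cycled)))))
```

Both terms are computed between an input and its cycle-reconstruction, so they need no paired ground truth; they replace the plain pixel-wise cycle loss with feature-space comparisons.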
Generative Adversarial Networks for Image-to-Image Translation on Multi-Contrast MR Images - A Comparison of CycleGAN and UNIT
It is shown that the implemented GAN models can synthesize visually realistic MR images (incorrectly labeled as real by human observers), and that models producing more visually realistic synthetic images do not necessarily have better quantitative error measurements when compared to ground-truth data.
Generative Adversarial Networks for MR-CT Deformable Image Registration
State-of-the-art DIR methods based on Normalized Mutual Information, the Modality Independent Neighborhood Descriptor, and their novel combination achieved a mean segmentation overlap ratio of 76.7%, which dropped to 69.1% or less when registering images synthesized by a CycleGAN based on local correlation, due to poor performance in the thoracic region, where large lung volume changes were synthesized.
Translating and Segmenting Multimodal Medical Volumes with Cycle- and Shape-Consistency Generative Adversarial Network
This work proposes a generic cross-modality synthesis approach and shows that these goals can be achieved with an end-to-end 3D convolutional neural network (CNN) composed of mutually-beneficial generators and segmentors for image synthesis and segmentation tasks.
MedGAN: Medical Image Translation using GANs
A new framework, named MedGAN, is proposed for medical image-to-image translation which operates on the image level in an end-to-end manner and outperforms other existing translation approaches.
Medical Image Synthesis with Context-Aware Generative Adversarial Networks
A fully convolutional network is trained to generate CT from the MR image in order to better model the nonlinear mapping from MRI to CT and produce more realistic images, and an image-gradient-difference-based loss function is proposed to alleviate the blurriness of the generated CT.
Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks
It is shown that performance in several CT segmentation tasks is improved significantly, especially on out-of-distribution (non-contrast CT) data; this will be valuable to medical imaging researchers for reducing manual segmentation effort and cost in CT imaging.
Generating synthetic CTs from magnetic resonance images using generative adversarial networks
A GAN model that uses a single T1-weighted MR image as input to generate robust, high-quality synthetic CTs (synCTs) in seconds is developed and validated, and it offers strong potential for supporting near real-time MR-only treatment planning in the brain.