Distribution Matching Losses Can Hallucinate Features in Medical Image Translation

Joseph Paul Cohen, Margaux Luck, Sina Honari
This paper discusses how distribution matching losses, such as those used in CycleGAN, can lead to misdiagnosis of medical conditions when used to synthesize medical images. It seems appealing to use these image synthesis methods for translating images from a source to a target domain because they can produce high-quality images, and some do not even require paired data. However, these image translation models fundamentally work by matching the translation output to the…

3C-GAN: class-consistent CycleGAN for malaria domain adaptation model

A modified distribution matching loss for CycleGAN is introduced that eliminates feature hallucination on the malaria dataset; the authors believe this approach will expedite the development of unsupervised, unpaired GANs that are safe for clinical use.

Projected Distribution Loss for Image Enhancement

It is demonstrated that aggregating 1D-Wasserstein distances between CNN activations is more reliable than the existing approaches, and it can significantly improve the perceptual performance of enhancement models.
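The core idea above can be sketched concretely: for equal-sized 1D samples, the 1D Wasserstein distance reduces to the mean absolute difference between sorted values, and the loss aggregates these distances across activation channels. The function names and the (channels, n) activation layout below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def wasserstein_1d(a, b):
    # For two equal-sized 1D samples, the 1D Wasserstein distance reduces
    # to the mean absolute difference between their sorted values.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def projected_distribution_loss(feats_x, feats_y):
    # feats_*: (channels, n) arrays of CNN activations (hypothetical layout).
    # The loss aggregates per-channel 1D Wasserstein distances.
    return sum(wasserstein_1d(fx, fy) for fx, fy in zip(feats_x, feats_y))
```

Because sorting makes the comparison permutation-invariant, the loss matches the *distribution* of activations rather than their pixel-wise alignment.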

Towards semi-supervised segmentation via image-to-image translation

This work proposes a semi-supervised framework that employs image-to-image translation between weak labels (e.g., presence vs. absence of cancer) in addition to fully supervised segmentation on some examples; it re-uses the encoders and decoders for translating in either direction between the two domains, employing a strategy of selectively decoding domain-specific variations.

Harmonic Unpaired Image-to-image Translation

This paper develops HarmonicGAN to learn bi-directional translations between the source and the target domains, and turns CycleGAN from a failure to a success, halving the mean-squared error, and generating images that radiologists prefer over competing methods in 95% of cases.

Deep learning-based bias transfer for overcoming laboratory differences of microscopic images

This work evaluates, compares, and improves existing generative model architectures to overcome domain shifts for immunofluorescence (IF) and Hematoxylin and Eosin (H&E) stained microscopy images; adapting the bias of the samples significantly improved pixel-level segmentation of human kidney glomeruli and podocytes and improved classification accuracy for human prostate biopsies.

Medical Image Generation using Generative Adversarial Networks

This chapter surveys state-of-the-art progress in GAN-based clinical applications in medical image generation and cross-modality synthesis, and covers future research directions in the area.

Galaxy Image Translation with Semi-supervised Noise-reconstructed Generative Adversarial Networks

This work proposes a two-way image translation model using GANs that exploits both paired and unpaired images in a semi-supervised manner, and introduces a noise emulating module that is able to learn and reconstruct noise characterized by high-frequency features.

Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks

This work presents an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples, and introduces a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
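The cycle consistency constraint described above can be sketched as a simple L1 reconstruction loss over both translation directions. The generators G and F here are placeholder callables standing in for the learned networks; this is a minimal sketch, not the paper's implementation.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    # G: X -> Y and F: Y -> X are the two generators.
    # L1 cycle loss: F(G(x)) should reconstruct x, and G(F(y)) should
    # reconstruct y, discouraging translations that discard content.
    loss_x = np.mean(np.abs(F(G(x)) - x))
    loss_y = np.mean(np.abs(G(F(y)) - y))
    return loss_x + loss_y
```

In training this term is added to the adversarial losses of both generators, so the mappings remain adversarially realistic while staying invertible.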

Image-to-Image Translation with Conditional Adversarial Networks

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.

Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

The architecture introduced in this paper learns a mapping function G : X → Y using an adversarial loss such that G(X) cannot be distinguished from Y, where X and Y are images belonging to two different domains.
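The adversarial objective that makes G(X) indistinguishable from Y can be sketched with the standard non-saturating GAN losses below (CycleGAN itself uses a least-squares variant; this simpler log form is used here for illustration, and the function name is hypothetical).

```python
import numpy as np

def adversarial_losses(d_real, d_fake):
    # d_real: discriminator outputs (probabilities in (0, 1)) on real
    # target-domain images Y; d_fake: outputs on translated images G(X).
    eps = 1e-8  # numerical guard against log(0)
    # Discriminator tries to assign 1 to real and 0 to translated images.
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # Generator (non-saturating form) tries to make D call G(X) real.
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss
```

When the discriminator is perfectly fooled (d_fake near 1), the generator loss vanishes; when it confidently rejects the translations, the generator receives a large gradient signal.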

Unsupervised Image-to-Image Translation Networks

This work makes a shared-latent space assumption and proposes an unsupervised image-to-image translation framework based on Coupled GANs that achieves state-of-the-art performance on benchmark datasets.

Medical Image Synthesis with Context-Aware Generative Adversarial Networks

A fully convolutional network is trained to generate a CT image from a given MR image, better modeling the nonlinear mapping from MRI to CT and producing more realistic images; an image-gradient-difference loss function is proposed to alleviate the blurriness of the generated CT.

Towards Virtual H&E Staining of Hyperspectral Lung Histology Images Using Conditional Generative Adversarial Networks

A method is presented to virtually stain unstained Hematoxylin and Eosin (H&E) specimens using dimension reduction and conditional generative adversarial networks (cGANs), which build highly non-linear mappings between input and output images.

Deep MR to CT Synthesis Using Unpaired Data

This work proposes to train a generative adversarial network (GAN) with unpaired MR and CT images to synthesize CT images that closely approximate reference CT images, and was able to outperform a GAN model trained with paired MR and CT images.

DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction

This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets.

Compressed Sensing MRI Reconstruction Using a Generative Adversarial Network With a Cyclic Loss

It is demonstrated that the proposed novel deep learning-based generative adversarial model, RefineGAN, outperforms the state-of-the-art CS-MRI methods by a large margin in terms of both running time and image quality via evaluation using several open-source MRI databases.

Virtual PET Images from CT Data Using Deep Convolutional Networks: Initial Results

A novel system is presented for estimating PET images from CT scans using fully convolutional networks (FCNs) and conditional generative adversarial networks (GANs).