Unpaired Multi-Modal Segmentation via Knowledge Distillation

@article{Dou2020UnpairedMS,
  title={Unpaired Multi-Modal Segmentation via Knowledge Distillation},
  author={Qi Dou and Quande Liu and Pheng-Ann Heng and Ben Glocker},
  journal={IEEE Transactions on Medical Imaging},
  year={2020},
  volume={39},
  pages={2415--2425}
}
Multi-modal learning is typically performed with network architectures containing modality-specific layers and shared layers, utilizing co-registered images of different modalities. We propose a novel learning scheme for unpaired cross-modality image segmentation, with a highly compact architecture achieving superior segmentation accuracy. In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI, and only employ modality-specific internal…
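To make the idea concrete, here is a minimal PyTorch sketch of convolutions shared across modalities paired with modality-specific normalization (one plausible reading of the truncated "modality-specific internal…" above), plus a generic softened-KL alignment term in the spirit of the knowledge distillation named in the title. All names (SharedConvBlock, kd_alignment_loss, the CT/MRI index convention) are hypothetical illustrations, not the authors' code or exact loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedConvBlock(nn.Module):
    """Convolutional kernels shared across modalities, with a separate
    (modality-specific) normalization layer per modality."""

    def __init__(self, in_ch: int, out_ch: int, num_modalities: int = 2):
        super().__init__()
        # The convolution weights are shared by all modalities.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        # One BatchNorm per modality captures modality-specific statistics.
        self.norms = nn.ModuleList(nn.BatchNorm2d(out_ch) for _ in range(num_modalities))

    def forward(self, x: torch.Tensor, modality: int) -> torch.Tensor:
        # modality: 0 for CT, 1 for MRI (a hypothetical convention)
        return F.relu(self.norms[modality](self.conv(x)))


def kd_alignment_loss(logits_a: torch.Tensor, logits_b: torch.Tensor,
                      T: float = 2.0) -> torch.Tensor:
    """Illustrative KD-style objective: soften the per-image class-confidence
    distributions of the two modality streams with temperature T and penalize
    their KL divergence. Not the paper's exact formulation."""
    p_a = F.softmax(logits_a.mean(dim=(2, 3)) / T, dim=1)          # target probabilities
    log_p_b = F.log_softmax(logits_b.mean(dim=(2, 3)) / T, dim=1)  # input log-probabilities
    return F.kl_div(log_p_b, p_a, reduction="batchmean") * (T * T)
```

Sharing the convolution weights is what keeps the architecture compact, while the per-modality normalization layers absorb the very different intensity statistics of CT and MRI.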
Deep Class-Specific Affinity-Guided Convolutional Network for Multimodal Unpaired Image Segmentation
TLDR
This paper designs class-specific affinity matrices that encode the knowledge of hierarchical feature reasoning, together with shared convolutional layers to ensure cross-modality generalization, and proposes an affinity-guided fully convolutional network for multimodal image segmentation.
Multi-Task, Multi-Domain Deep Segmentation with Shared Representations and Contrastive Regularization for Sparse Pediatric Datasets
TLDR
This work proposes to train a segmentation model on multiple datasets, arising from different parts of the anatomy, in a multi-task and multi-domain learning framework, to overcome the inherent scarcity of pediatric data while benefiting from a more robust shared representation.
Toward Unpaired Multi-modal Medical Image Segmentation via Learning Structured Semantic Consistency
TLDR
A novel scheme to achieve better pixel-level segmentation for unpaired multi-modal medical images by using a single Transformer with a carefully designed External Attention Module to learn the structured semantic consistency between modalities in the training phase.
Domain Knowledge Driven Multi-modal Segmentation of Anatomical Brain Barriers to Cancer Spread
TLDR
A multi-modal segmentation method largely driven by domain knowledge is explored, which applies a 3D U-Net as the backbone model and employs a label merging strategy for symmetrical structures that have both left and right labels, highlighting structural information regardless of location.
Generalizable multi-task, multi-domain deep segmentation of sparse pediatric imaging datasets via multi-scale contrastive regularization and multi-joint anatomical priors
TLDR
A novel multi-task, multi-domain learning framework is proposed in which a single segmentation network is optimized over the union of multiple datasets arising from distinct parts of the anatomy, to overcome the inherent scarcity of pediatric data while leveraging shared features between imaging datasets.
Structure-Driven Unsupervised Domain Adaptation for Cross-Modality Cardiac Segmentation
TLDR
This paper presents a novel unsupervised domain adaptation framework for cross-modality cardiac segmentation, explicitly capturing a common cardiac structure embedded across different modalities to guide cardiac segmentation.
Weakly supervised segmentation with cross-modality equivariant constraints
Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to Unseen Domains
TLDR
A novel shape-aware meta-learning scheme is presented to improve model generalization in prostate MRI segmentation; it consistently outperforms many state-of-the-art generalization methods across all six settings of unseen domains.
...

References

Showing 1-10 of 41 references
Multi-modal Learning from Unpaired Images: Application to Multi-organ Segmentation in CT and MRI
TLDR
Results demonstrate that information across modalities can improve performance, in particular on highly variable structures such as the spleen, and show that multi-modal learning can improve overall accuracy over modality-specific training.
HyperDense-Net: A Hyper-Densely Connected CNN for Multi-Modal Image Segmentation
TLDR
HyperDenseNet is proposed, a 3-D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems and has total freedom to learn more complex combinations between the modalities, within and in-between all levels of abstraction, which significantly increases representational power.
SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth
TLDR
An end-to-end synthetic segmentation network (SynSeg-Net) is proposed to train a segmentation network for a target imaging modality without manual labels; it achieved performance comparable to a traditional segmentation network trained with target-modality labels in certain scenarios.
Multi-label Whole Heart Segmentation Using CNNs and Anatomical Label Configurations
TLDR
Results on the MICCAI 2017 Multi-Modality Whole Heart Segmentation (MM-WHS) challenge show that the proposed architecture performs well on the provided CT and MRI training volumes, as measured by the average Dice Similarity Coefficient over all heart substructures in a three-fold cross-validation.
PnP-AdaNet: Plug-and-Play Adversarial Domain Adaptation Network at Unpaired Cross-Modality Cardiac Segmentation
TLDR
A plug-and-play adversarial domain adaptation network (PnP-AdaNet) is proposed for adapting segmentation networks between different medical imaging modalities, e.g., MRI and CT, and outperforms many state-of-the-art unsupervised domain adaptation approaches on the same dataset.
Learning Cross-Modality Representations From Multi-Modal Images
TLDR
A shared autoencoder-like convolutional network that learns a common representation from multi-modal data is presented; the authors investigate a form of feature normalization, a learning objective that minimizes cross-modality differences, and modality dropout, in which the network is trained with varying subsets of modalities.
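Of the three ingredients, modality dropout is the most mechanical to illustrate. Below is a minimal sketch (hypothetical, not the authors' implementation) of training with varying subsets of modalities by randomly zeroing whole modality channels per sample:

```python
import torch

def modality_dropout(x: torch.Tensor, p_drop: float = 0.3) -> torch.Tensor:
    """Randomly zero out entire modality channels per sample so the network
    learns a representation robust to missing modalities.
    x: tensor of shape (batch, num_modalities, H, W)."""
    keep = (torch.rand(x.size(0), x.size(1), 1, 1, device=x.device) > p_drop).float()
    # Ensure at least one modality survives in every sample.
    all_dropped = keep.sum(dim=1, keepdim=True) == 0
    keep = torch.where(all_dropped, torch.ones_like(keep), keep)
    return x * keep
```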
VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images
V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation
TLDR
This work proposes an approach to 3D image segmentation based on a volumetric, fully convolutional neural network, trained end-to-end on MRI volumes depicting the prostate, which learns to predict the segmentation for the whole volume at once.
Translating and Segmenting Multimodal Medical Volumes with Cycle- and Shape-Consistency Generative Adversarial Network
TLDR
This work proposes a generic cross-modality synthesis approach and shows that these goals can be achieved with an end-to-end 3D convolutional neural network (CNN) composed of mutually-beneficial generators and segmentors for image synthesis and segmentation tasks.
...