Self-supervised 3D anatomy segmentation using self-distilled masked image transformer (SMIT)

@article{Jiang2022Selfsupervised3A,
  title={Self-supervised 3D anatomy segmentation using self-distilled masked image transformer (SMIT)},
  author={Jue Jiang and Neelam Tyagi and Kathryn R. Tringale and Christopher Crane and Harini Veeraraghavan},
  journal={Medical Image Computing and Computer Assisted Intervention -- MICCAI 2022},
  year={2022},
  volume={13434},
  pages={556-566}
}
  • Published 20 May 2022
  • Computer Science, Medicine
Vision transformers efficiently model long-range context and have therefore demonstrated impressive accuracy gains in several image analysis tasks, including segmentation. However, such methods need large labeled datasets for training, which are hard to obtain for medical image analysis. Self-supervised learning (SSL) has demonstrated success in medical image segmentation using convolutional networks. In this work, we developed a self-distillation learning method with masked image modeling to perform… 
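The abstract's core pretext task, masked image modeling, can be illustrated with a minimal NumPy sketch: partition a 3D volume into patches, hide a random subset, and score a reconstruction only on the hidden patches. This is an illustrative toy, not the authors' SMIT implementation; the function names and the simple zero-fill masking are assumptions.

```python
import numpy as np

def mask_patches(volume, patch=4, mask_ratio=0.5, rng=None):
    """Split a cubic volume into non-overlapping patches and zero out a
    random subset -- the masked-image-modeling pretext task in miniature."""
    rng = rng or np.random.default_rng(0)
    n = volume.shape[0] // patch          # patches per axis
    masked = volume.copy()
    ids = rng.permutation(n ** 3)
    flags = np.zeros(n ** 3, dtype=bool)
    flags[ids[: int(mask_ratio * n ** 3)]] = True
    for idx in np.flatnonzero(flags):
        z, y, x = np.unravel_index(idx, (n, n, n))
        masked[z*patch:(z+1)*patch,
               y*patch:(y+1)*patch,
               x*patch:(x+1)*patch] = 0.0
    return masked, flags

def reconstruction_loss(pred, target, flags, patch=4):
    """Mean squared error computed only over the masked patches."""
    n = target.shape[0] // patch
    losses = []
    for idx in np.flatnonzero(flags):
        z, y, x = np.unravel_index(idx, (n, n, n))
        sl = (slice(z*patch, (z+1)*patch),
              slice(y*patch, (y+1)*patch),
              slice(x*patch, (x+1)*patch))
        losses.append(np.mean((pred[sl] - target[sl]) ** 2))
    return float(np.mean(losses))

vol = np.random.default_rng(1).standard_normal((8, 8, 8))
masked, flags = mask_patches(vol, patch=4, mask_ratio=0.5)
# A student that reproduces the original volume exactly has zero loss:
print(reconstruction_loss(vol, vol, flags))  # 0.0
```

In SMIT-style self-distillation, the "target" would come from a teacher network's predictions on the unmasked view rather than raw voxels, but the masked-patch bookkeeping is the same.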
2 Citations

Progressively refined deep joint registration segmentation (ProRSeg) of gastrointestinal organs at risk: Application to MRI and cone-beam CT

ProRSeg produced more accurate and consistent GI OAR segmentation and DIR of MRI and CBCTs compared to multiple methods, and preliminary results indicate the feasibility of OAR dose accumulation using ProRSeg.

Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives

Transformer, one of the latest technological advances of deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to…

References

Showing 1-10 of 34 references

3D Self-Supervised Methods for Medical Imaging

This work proposes 3D versions for five different self-supervised methods, in the form of proxy tasks, to facilitate neural network feature learning from unlabeled 3D images, aiming to reduce the required cost for expert annotation.

Contrastive learning of global and local features for medical image segmentation with limited annotations

This work proposes novel contrasting strategies that leverage structural similarity across volumetric medical images and a local version of the contrastive loss to learn distinctive representations of local regions that are useful for per-pixel segmentation in the semi-supervised setting with limited annotations.
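The contrastive strategy described above rests on a standard contrastive (InfoNCE-style) loss: pull an anchor toward its positive view and away from negatives. The sketch below is a generic single-pair version in NumPy, not the paper's specific global/local formulation; the function name and temperature value are assumptions.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE contrastive loss on L2-normalised embeddings:
    the positive pair should score higher than every negative."""
    def unit(v):
        return v / np.linalg.norm(v)
    a = unit(anchor)
    sims = [a @ unit(positive)] + [a @ unit(n) for n in negatives]
    logits = np.array(sims) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -float(np.log(probs[0]))              # positive sits at index 0

rng = np.random.default_rng(0)
a = rng.standard_normal(16)
negs = [rng.standard_normal(16) for _ in range(8)]
loss_easy = info_nce(a, a + 0.01 * rng.standard_normal(16), negs)
loss_hard = info_nce(a, -a, negs)
# An aligned positive view yields a much lower loss than an anti-aligned one.
```

The paper's "local" variant applies the same idea per region of the feature map instead of per image, so that the learned representations stay distinctive at the pixel level.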

Revisiting Rubik's Cube: Self-supervised Learning with Volume-wise Transformation for 3D Medical Image Segmentation

The experimental results show that the self-supervised learning method can significantly improve the accuracy of 3D deep learning networks on volumetric medical datasets without the use of extra data.

TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation

It is argued that Transformers can serve as strong encoders for medical image segmentation tasks, combined with a U-Net to enhance finer details by recovering localized spatial information; empirical results suggest that the Transformer-based architecture leverages self-attention better than previous CNN-based self-attention methods.

CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image Segmentation

A novel framework that efficiently bridges a convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation; results indicate that CoTr leads to a substantial performance improvement over other CNN-based, transformer-based, and hybrid methods on the 3D multi-organ segmentation task.

Medical Transformer: Universal Brain Encoder for 3D MRI Analysis

This work proposes a novel transfer learning framework, called Medical Transformer, that effectively models 3D volumetric images in the form of a sequence of 2D image slices, and outperforms the state-of-the-art transfer learning methods.

Models Genesis

Towards Cross-Modality Medical Image Segmentation with Online Mutual Knowledge Distillation

Experimental results on the public multi-class cardiac segmentation data, i.e., MM-WHS 2017, show that the method achieves large improvements on CT segmentation by utilizing additional MRI data and outperforms other state-of-the-art multi-modality learning methods.

UNETR: Transformers for 3D Medical Image Segmentation

This work reformulates the task of volumetric (3D) medical image segmentation as a sequence-to-sequence prediction problem and introduces a novel architecture, dubbed UNEt TRansformers (UNETR), that utilizes a transformer as the encoder to learn sequence representations of the input volume and effectively capture global multi-scale information.
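The sequence-to-sequence reformulation that UNETR describes starts from one concrete step: flattening a 3D volume into a sequence of patch tokens that a transformer can consume. The NumPy sketch below shows that tokenization step only; the function name and the optional linear projection are illustrative assumptions, not UNETR's actual code.

```python
import numpy as np

def volume_to_tokens(volume, patch=4, embed=None):
    """Flatten a 3D volume into a sequence of patch tokens -- the first
    step of transformer encoders for volumetric segmentation."""
    d, h, w = volume.shape
    assert d % patch == 0 and h % patch == 0 and w % patch == 0
    nd, nh, nw = d // patch, h // patch, w // patch
    tokens = (volume
              .reshape(nd, patch, nh, patch, nw, patch)
              .transpose(0, 2, 4, 1, 3, 5)      # group voxels by patch
              .reshape(nd * nh * nw, patch ** 3))
    if embed is not None:                        # optional linear projection
        tokens = tokens @ embed                  # (n_tokens, embed_dim)
    return tokens

vol = np.arange(8 ** 3, dtype=float).reshape(8, 8, 8)
tok = volume_to_tokens(vol, patch=4)
print(tok.shape)  # (8, 64): 2x2x2 patches, each flattened to 4^3 voxels
```

In a full encoder these tokens would be linearly projected, given positional embeddings, and passed through self-attention layers, with intermediate features routed to a decoder for per-voxel prediction.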