Self-supervised Learning of Pixel-wise Anatomical Embeddings in Radiological Images

@article{Yan2022SelfsupervisedLO,
  title={Self-supervised Learning of Pixel-wise Anatomical Embeddings in Radiological Images},
  author={Ke Yan and Jinzheng Cai and Dakai Jin and Shun Miao and Adam P. Harrison and Dazhou Guo and Youbao Tang and Jing Xiao and Jingjing Lu and Le Lu},
  journal={IEEE Transactions on Medical Imaging},
  year={2022},
  volume={PP}
}
  • K. Yan, Jinzheng Cai, Le Lu
  • Published 4 December 2020
  • Computer Science
  • IEEE Transactions on Medical Imaging
Radiological images such as computed tomography (CT) and X-rays render anatomy with intrinsic structures. Being able to reliably locate the same anatomical structure across varying images is a fundamental task in medical image analysis. In principle it is possible to use landmark detection or semantic segmentation for this task, but to work well these require large numbers of labeled data for each anatomical structure and sub-structure of interest. A more universal approach would learn the… 
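The abstract frames the core operation as locating the same anatomical point across scans by comparing dense, pixel-wise embeddings. A minimal sketch of that matching step, assuming L2-normalized embedding volumes and illustrative array shapes (the function name and interface are not from the paper):

```python
import numpy as np

def match_point(emb_fixed, emb_moving, point):
    """Find the voxel in `emb_moving` whose embedding is most similar
    (cosine similarity) to the embedding of `point` in `emb_fixed`.

    emb_fixed, emb_moving: (D, H, W, C) L2-normalized embedding volumes.
    point: (z, y, x) index into emb_fixed.
    """
    query = emb_fixed[point]                                  # (C,) query embedding
    sims = emb_moving.reshape(-1, emb_moving.shape[-1]) @ query
    flat = int(np.argmax(sims))                               # best-matching voxel
    return np.unravel_index(flat, emb_moving.shape[:3])
```

Because the embeddings are normalized, the dot product is cosine similarity, so a single matrix-vector product scores every voxel of the moving volume at once.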
SAME: Deformable Image Registration Based on Self-supervised Anatomical Embeddings
TLDR
This work introduces a fast and accurate method for unsupervised 3D medical image registration, named SAM-enhanced registration (SAME), which breaks down image registration into three steps: affine transformation, coarse deformation, and deep deformable registration.
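The first of SAME's three steps is a global affine alignment, which can be driven by keypoint correspondences found via embedding matching. A hedged sketch of fitting a 3D affine map to matched points by least squares (the helper name and interface are illustrative, not SAME's actual code):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 3D affine transform A (3x4) mapping src_pts -> dst_pts.

    src_pts, dst_pts: (N, 3) arrays of matched keypoint coordinates,
    e.g. correspondences from nearest-neighbor embedding matching.
    """
    n = src_pts.shape[0]
    src_h = np.hstack([src_pts, np.ones((n, 1))])    # (N, 4) homogeneous coords
    # Solve src_h @ A.T ~= dst_pts column-wise in the least-squares sense
    A_t, *_ = np.linalg.lstsq(src_h, dst_pts, rcond=None)
    return A_t.T                                      # (3, 4) affine matrix
```

The coarse and deep deformable steps would then refine this global alignment with dense displacement fields.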
Interpretable Medical Image Classification with Self-Supervised Anatomical Embedding and Prior Knowledge
TLDR
This work adopts a recent algorithm, self-supervised anatomical embedding (SAM), to locate points of interest (POIs) on computed tomography (CT) scans; it outperforms an existing deep-learning-based method trained on the whole image.
One-shot Weakly-Supervised Segmentation in Medical Images
TLDR
An innovative framework for 3D medical image segmentation with one-shot and weakly-supervised settings is presented and shows significant improvement over the state-of-the-art methods and performs robustly even under severe class imbalance and low contrast.
Weakly-Supervised Universal Lesion Segmentation with Regional Level Set Loss
TLDR
This paper presents a novel weakly-supervised universal lesion segmentation method by building an attention-enhanced model based on the High-Resolution Network (HRNet), named AHRNet, and proposes a regional level set (RLS) loss for optimizing lesion boundary delineation.
Exemplar Learning for Medical Image Segmentation
TLDR
An Exemplar Learning-based Synthesis Net (ELSNet) framework for medical image segmentation that enables innovative exemplar-based data synthesis, pixel-prototype based contrastive embedding learning, and pseudo-label based exploitation of the unlabeled data is proposed.
A Review of Self-supervised Learning Methods in the Field of Medical Image Analysis
  • Jiashu Xu
  • Computer Science
    International Journal of Image, Graphics and Signal Processing
  • 2021
TLDR
This article provides the latest and most detailed overview of self-supervised learning in the medical field and promotes the development of unsupervised learning in medical imaging, with three categories: context-based, generation-based, and contrast-based.
Lesion Segmentation and RECIST Diameter Prediction via Click-driven Attention and Dual-path Connection
TLDR
A prior-guided dual-path network (PDNet) to segment common types of lesions throughout the whole body and predict their RECIST diameters accurately and automatically is presented.
Contrastive Learning of Single-Cell Phenotypic Representations for Treatment Classification
TLDR
This work leverages a contrastive learning framework to learn appropriate representations from single-cell fluorescent microscopy images for the task of Mechanism-of-Action classification and concludes that one can learn robust cell representations with contrastive learning.
Automatic Cancer Cell Taxonomy Using an Ensemble of Deep Neural Networks
TLDR
A deep-learning-based framework to classify cervical, hepatocellular, breast, and lung cancer cells is built, and it is found that using data augmentation and updating all the weights of a network during fine-tuning improves the overall performance of individual convolutional neural network models.
Self-Supervised Pre-Training of Swin Transformers for 3D Medical Image Analysis
Vision Transformers (ViTs) have shown great performance on segmentation of 13 abdominal organs and on segmentation tasks from the Medical Segmentation Decathlon (MSD) dataset; the model is reported as state-of-the-art…

References

SHOWING 1-10 OF 82 REFERENCES
Contrastive learning of global and local features for medical image segmentation with limited annotations
TLDR
This work proposes novel contrasting strategies that leverage structural similarity across volumetric medical images and a local version of the contrastive loss to learn distinctive representations of local regions that are useful for per-pixel segmentation in the semi-supervised setting with limited annotations.
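The global and local contrastive losses referenced here are typically variants of InfoNCE, which pulls each anchor toward its positive pair and pushes it away from the other samples in the batch. A minimal NumPy sketch (the temperature value and shapes are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor's positive is the same-index row of
    `positives`; all other rows in the batch act as negatives.

    anchors, positives: (N, C) L2-normalized feature rows.
    """
    logits = anchors @ positives.T / temperature       # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # cross-entropy on the diagonal
```

In the paper's local variant, the rows would be features of local regions within a volume rather than whole-image features.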
How to Learn from Unlabeled Volume Data: Self-supervised 3D Context Feature Learning
TLDR
This work proposes a new approach to train effective convolutional feature extractors based on a new concept of image-intrinsic spatial offset relations with an auxiliary heatmap regression loss that successfully capture semantic, anatomical information and enable state-of-the-art accuracy for a k-NN based one-shot segmentation task without any subsequent fine-tuning.
LT-Net: Label Transfer by Learning Reversible Voxel-Wise Correspondence for One-Shot Medical Image Segmentation
TLDR
A one-shot segmentation method to alleviate the burden of manual annotation for medical images by resorting to the forward-backward consistency, which is widely used in correspondence problems, and additionally learns the backward correspondences from the warped atlases back to the original atlas.
Anatomy-specific classification of medical images using deep convolutional nets
TLDR
It is demonstrated that deep learning can be used to train very reliable and accurate classifiers that could initialize further computer-aided diagnosis and a data augmentation approach can help to enrich the data set and improve classification performance.
Structured Landmark Detection via Topology-Adapting Deep Graph Learning
TLDR
A new topology-adapting deep graph learning approach for accurate anatomical facial and medical landmark detection is presented; quantitative comparisons with previous state-of-the-art approaches indicate superior performance in both robustness and accuracy.
VoxelMorph: A Learning Framework for Deformable Medical Image Registration
TLDR
VoxelMorph promises to speed up medical image analysis and processing pipelines while facilitating novel directions in learning-based registration and its applications and demonstrates that the unsupervised model’s accuracy is comparable to the state-of-the-art methods while operating orders of magnitude faster.
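Unsupervised registration models such as VoxelMorph predict a dense displacement field and warp the moving image with a differentiable sampler. A simplified NumPy sketch of the warping step using nearest-neighbor sampling (real models use trilinear interpolation; the function name and shapes are illustrative):

```python
import numpy as np

def warp_nearest(moving, disp):
    """Warp a 3D volume by a dense displacement field via
    nearest-neighbor sampling.

    moving: (D, H, W) volume; disp: (D, H, W, 3) per-voxel displacements.
    """
    D, H, W = moving.shape
    # Identity sampling grid of voxel coordinates, shape (D, H, W, 3)
    grid = np.stack(np.meshgrid(np.arange(D), np.arange(H),
                                np.arange(W), indexing="ij"), axis=-1)
    coords = np.rint(grid + disp).astype(int)          # displaced sample locations
    for axis, size in enumerate((D, H, W)):
        coords[..., axis] = np.clip(coords[..., axis], 0, size - 1)
    return moving[coords[..., 0], coords[..., 1], coords[..., 2]]
```

A zero displacement field reproduces the input, which is a convenient sanity check for any warping layer.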
Multi-Scale Deep Reinforcement Learning for Real-Time 3D-Landmark Detection in CT Scans
TLDR
This work couples the modeling of anatomy appearance and the object search in a unified behavioral framework, using the capabilities of deep reinforcement learning and multi-scale image analysis, and significantly outperforms state-of-the-art solutions in detecting several anatomical structures, with no failed cases from a clinical acceptance perspective.
Unsupervised body part regression via spatially self-ordering convolutional neural networks
  • Ke Yan, Le Lu, R. Summers
  • Computer Science
    2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)
  • 2018
TLDR
A convolutional neural network (CNN)-based unsupervised body part regression (UBR) algorithm is presented to address automatic body part recognition for CT slices, along with two inter-sample CNN loss functions.