Corpus ID: 244478137

One-shot Weakly-Supervised Segmentation in Medical Images

Wenhui Lei, Qi Su, Ran Gu, Na Wang, Xinglong Liu, Guotai Wang, Xiaofan Zhang, Shaoting Zhang
Deep neural networks usually require a large number of accurate annotations to achieve outstanding performance in medical image segmentation. One-shot segmentation and weakly-supervised learning are promising research directions that lower the labeling effort: the former learns a new class from only one annotated image, while the latter relies on coarse labels instead of precise ones. Previous works usually fail to leverage the anatomical structure and suffer from class-imbalance and low-contrast problems. Hence, we…

Self-Supervision with Superpixels: Training Few-shot Medical Image Segmentation without Annotation

A novel self-supervised few-shot segmentation (FSS) framework for medical images that eliminates the need for annotations during training: superpixel-based pseudo-labels are generated to provide the supervision signal.
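As a rough illustration of the superpixel pseudo-label idea, the sketch below partitions an image into square cells as stand-in "superpixels" and turns the cell containing a seed pixel into a binary pseudo-label. Real methods such as SLIC follow intensity boundaries; the function names here are illustrative, not from the paper.

```python
import numpy as np

def grid_pseudo_superpixels(image, cell=4):
    """Partition an image into square cells as crude 'superpixels'.
    (Real superpixel methods like SLIC adapt to intensity boundaries;
    this grid stand-in only illustrates the pseudo-label idea.)"""
    h, w = image.shape
    n_cols = -(-w // cell)  # ceiling division
    labels = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            labels[i, j] = (i // cell) * n_cols + (j // cell)
    return labels

def pseudo_label_mask(labels, seed_pixel):
    """Binary pseudo-label: the superpixel containing the seed pixel."""
    return (labels == labels[seed_pixel]).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
sp = grid_pseudo_superpixels(img, cell=4)   # 4 cells for an 8x8 image
mask = pseudo_label_mask(sp, (1, 1))        # cell covering rows/cols 0-3
print(int(mask.sum()))                      # → 16
```

During self-supervised training, such a mask plays the role of the "annotated" support label, so no manual segmentation is required.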

Distilling effective supervision for robust medical image segmentation with noisy labels

This work proposes a novel framework for segmentation with noisy labels that distills effective supervision from both the pixel and image levels, and presents an image-level robust learning method that accommodates additional information as a complement to pixel-level learning.

Contrastive learning of global and local features for medical image segmentation with limited annotations

This work proposes novel contrastive strategies that leverage structural similarity across volumetric medical images, together with a local version of the contrastive loss that learns distinctive representations of local regions, which are useful for per-pixel segmentation in the semi-supervised setting with limited annotations.

V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation

This work proposes an approach to 3D image segmentation based on a volumetric, fully convolutional neural network trained end-to-end on MRI volumes depicting the prostate, which learns to predict the segmentation of the whole volume at once.

Collaborative Learning of Semi-Supervised Segmentation and Classification for Medical Images

  • Yi Zhou, Xiaodong He, L. Shao
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
This paper proposes a collaborative learning method to jointly improve the performance of disease grading and lesion segmentation by semi-supervised learning with an attention mechanism and achieves consistent improvements over state-of-the-art methods on three public datasets.

SAM: Self-Supervised Learning of Pixel-Wise Anatomical Embeddings in Radiological Images

Self-supervised Anatomical eMbedding (SAM) generates a semantic embedding for each image pixel that describes its anatomical location or body part; a pixel-level contrastive learning framework is proposed to ensure that both global and local anatomical information are encoded.
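To make the pixel-level contrastive idea concrete, here is a generic InfoNCE-style loss for a single anchor embedding, scored against one positive (e.g. the same anatomical location in another scan) and several negatives. This is a textbook formulation for illustration, not SAM's exact loss.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor embedding.
    The anchor should score high against its positive and low against
    the negatives. (A generic formulation; SAM's details differ.)"""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with the positive as target

# A matching pair yields a near-zero loss:
a = np.array([1.0, 0.0])
print(round(float(info_nce(a, a, [np.array([0.0, 1.0])])), 4))  # → 0.0
```

Minimizing this loss over many pixel pairs pulls embeddings of the same anatomical location together while pushing apart embeddings of different locations.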

Contrastive Learning of Relative Position Regression for One-Shot Object Localization in 3D Medical Images

A novel contrastive learning method that embeds the anatomical structure by predicting the Relative Position Regression (RPR) between any two patches from the same volume, and a one-shot framework for organ and landmark localization in volumetric medical images.
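The RPR training signal can be illustrated as the normalized physical offset between the centers of two patches sampled from the same volume. The helper below is a hypothetical sketch of that target computation; the names, spacing handling, and normalization are assumptions, not the paper's exact formulation.

```python
import numpy as np

def rpr_target(center_a, center_b, spacing, scale):
    """Relative Position Regression target: the physical offset from
    patch A's center to patch B's center, normalized by `scale`.
    (`spacing` is the voxel size, e.g. in mm; all names here are
    illustrative assumptions, not the paper's exact formulation.)"""
    offset = (np.asarray(center_b, dtype=float) -
              np.asarray(center_a, dtype=float)) * np.asarray(spacing)
    return offset / scale

# Two patch centers (voxel indices) in a volume with anisotropic spacing:
t = rpr_target((10, 20, 30), (14, 20, 22),
               spacing=(1.0, 1.0, 2.0), scale=100.0)
print(t.tolist())  # → [0.04, 0.0, -0.16]
```

A network trained to regress this target from the two patches' contents implicitly learns where each patch sits in the shared anatomical coordinate frame, which is what enables one-shot localization from a single annotated volume.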

Self-Supervised Learning for Cardiac MR Image Segmentation by Anatomical Position Prediction

This paper proposes a novel way for training a cardiac MR image segmentation network, in which features are learnt in a self-supervised manner by predicting anatomical positions, and demonstrates that this seemingly simple task provides a strong signal for feature learning.