Learning to Segment Medical Images from Few-Shot Sparse Labels

Pedro H. T. Gama, Hugo Neves de Oliveira, and Jefersson Alex dos Santos. In: 2021 34th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI).
In this paper, we propose a novel approach for few-shot semantic segmentation with sparsely labeled images. We investigate the effectiveness of our method, which is based on the Model-Agnostic Meta-Learning (MAML) algorithm, in the medical scenario, where sparse labeling and few-shot learning can alleviate the cost of producing new annotated datasets. Our method uses sparse labels in the meta-training and dense labels in the meta-test, thus making the model learn to predict dense labels from…
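As a toy illustration of the meta-learning machinery the abstract refers to (not the authors' implementation), a first-order MAML-style meta-update for a scalar linear model might be sketched as follows; the task format and learning rates are assumptions for the example:

```python
import numpy as np

def mse_grad(w, x, y):
    # Gradient of the mean squared error for the linear model y_hat = w * x.
    return np.mean(2 * (w * x - y) * x)

def fomaml_step(w, tasks, inner_lr=0.01, outer_lr=0.01):
    """One first-order MAML meta-update over a batch of tasks.

    Each task is a tuple (x_support, y_support, x_query, y_query):
    the inner loop adapts on the support set, and the outer update
    uses the query-set gradient evaluated at the adapted weights.
    """
    meta_grad = 0.0
    for xs, ys, xq, yq in tasks:
        # Inner adaptation: one gradient step on the support set.
        w_inner = w - inner_lr * mse_grad(w, xs, ys)
        # Outer objective: query-set gradient at the adapted weights.
        meta_grad += mse_grad(w_inner, xq, yq)
    return w - outer_lr * meta_grad / len(tasks)
```

In the paper's setting the inner (support) loss would be computed on sparse labels and the meta-test on dense masks; the sketch above only shows the two-level update structure that MAML-style training shares.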


Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning
A novel deep learning-based interactive segmentation framework is proposed that incorporates CNNs into a bounding-box and scribble-based segmentation pipeline, together with a weighted loss function that accounts for network- and interaction-based uncertainty during fine-tuning.
Few-Shot Semantic Segmentation with Prototype Learning
A generalized framework for few-shot semantic segmentation with an alternative training scheme based on prototype learning and metric learning is proposed, which outperforms the baselines by a large margin and shows comparable performance for 1-way few-shot semantic segmentation on the PASCAL VOC 2012 dataset.
Few-Shot Segmentation Propagation with Guided Networks
This work addresses the problem of few-shot segmentation — given only a few images and sparse pixel supervision, segment new images accordingly — and proposes guided networks, which extract a latent task representation from any amount of supervision and optimize the architecture end-to-end for fast, accurate few-shot segmentation.
PANet: Few-Shot Image Semantic Segmentation With Prototype Alignment
This paper tackles the challenging few-shot segmentation problem from a metric learning perspective and presents PANet, a novel prototype alignment network that better exploits the information in the support set so as to generalize to unseen object categories.
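The core of prototype-based approaches like PANet is to summarize each class in the support set as a feature-space prototype and label query pixels by nearest-prototype matching. A minimal numpy sketch, assuming masked average pooling over feature maps and cosine similarity for matching (function names are illustrative, not from the paper's code):

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Compute a class prototype from a support feature map.

    features: (H, W, C) feature map; mask: (H, W) binary mask for the class.
    Returns the (C,) mean feature vector over the masked region.
    """
    m = mask[..., None].astype(float)
    return (features * m).sum(axis=(0, 1)) / (m.sum() + 1e-8)

def segment_by_prototype(query_feats, prototypes):
    """Assign each query pixel to the most similar prototype.

    query_feats: (H, W, C); prototypes: (K, C) for K classes.
    Returns an (H, W) map of class indices via cosine similarity.
    """
    q = query_feats / (np.linalg.norm(query_feats, axis=-1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=-1, keepdims=True) + 1e-8)
    sims = np.einsum('hwc,kc->hwk', q, p)  # per-pixel similarity to each prototype
    return sims.argmax(axis=-1)
```

PANet's distinguishing addition, prototype alignment, additionally swaps the roles of support and query during training so that prototypes extracted from the predicted query mask must segment the support images back; the sketch above covers only the shared prototype-matching step.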
One-Shot Learning for Semantic Segmentation
This work trains a network that, given a small set of annotated images, produces parameters for a Fully Convolutional Network (FCN); this FCN then performs dense pixel-level prediction on a test image for the new semantic class.
U-Net: Convolutional Networks for Biomedical Image Segmentation
It is shown that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
SG-One: Similarity Guidance Network for One-Shot Semantic Segmentation
This article proposes a simple yet effective similarity guidance network to tackle the one-shot (SG-One) segmentation problem, aiming at predicting the segmentation mask of a query image with reference to one densely labeled support image of the same category.
From 3D to 2D: Transferring knowledge for rib segmentation in chest X-rays
Attention-Based Multi-Context Guiding for Few-Shot Semantic Segmentation
An Attention-based Multi-Context Guiding (A-MCG) network is proposed, consisting of three branches — the support branch, the query branch, and the feature fusion branch — with spatial attention along the fusion branch to highlight context information at several scales, enhancing self-supervision in one-shot learning.