Learning to Segment Medical Images from Few-Shot Sparse Labels
@article{Gama2021LearningTS,
  title={Learning to Segment Medical Images from Few-Shot Sparse Labels},
  author={Pedro H. T. Gama and Hugo Neves de Oliveira and Jefersson Alex dos Santos},
  journal={2021 34th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)},
  year={2021},
  pages={89-96}
}
In this paper, we propose a novel approach for few-shot semantic segmentation with sparsely labeled images. We investigate the effectiveness of our method, which is based on the Model-Agnostic Meta-Learning (MAML) algorithm, in the medical scenario, where the use of sparse labeling and few-shot learning can alleviate the cost of producing new annotated datasets. Our method uses sparse labels in the meta-training and dense labels in the meta-test, thus making the model learn to predict dense labels from…
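The meta-learning recipe the abstract describes (MAML adapted on sparsely annotated pixels) can be sketched roughly as follows. This is a hedged, first-order toy illustration with a hypothetical per-pixel linear scorer, a masked MSE loss, and synthetic data, not the authors' actual network or training setup:

```python
import numpy as np

# Toy first-order MAML step with sparse pixel labels (illustrative sketch
# only; the paper's segmentation network, loss, and annotation scheme differ).
# Model: a per-pixel linear scorer, pred = w * x + b.

def masked_mse_grad(w, b, x, y, mask):
    """Gradient of MSE restricted to annotated (mask == 1) pixels."""
    err = (w * x + b - y) * mask
    n = max(mask.sum(), 1)
    return 2.0 * (err * x).sum() / n, 2.0 * err.sum() / n

def maml_step(w, b, tasks, inner_lr=0.1, outer_lr=0.05):
    """One meta-update over a batch of tasks (x, y, sparse_mask)."""
    gw = gb = 0.0
    for x, y, sparse_mask in tasks:
        # Inner loop: adapt the parameters using only the sparse annotations.
        dw, db = masked_mse_grad(w, b, x, y, sparse_mask)
        wa, ba = w - inner_lr * dw, b - inner_lr * db
        # Outer loss at the adapted parameters (first-order approximation,
        # as in FOMAML: no backprop through the inner update).
        dw2, db2 = masked_mse_grad(wa, ba, x, y, sparse_mask)
        gw += dw2
        gb += db2
    return w - outer_lr * gw / len(tasks), b - outer_lr * gb / len(tasks)
```

At meta-test time, the meta-learned parameters would be fine-tuned on a densely labeled support image before predicting dense masks, mirroring the sparse-meta-train / dense-meta-test split described in the abstract.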
References
SHOWING 1-10 OF 28 REFERENCES
Embracing Imperfect Datasets: A Review of Deep Learning Solutions for Medical Image Segmentation
- Computer Science · Medical Image Analysis
- 2020
Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning
- Computer Science · IEEE Transactions on Medical Imaging
- 2018
A novel deep learning-based interactive segmentation framework is proposed that incorporates CNNs into a bounding-box and scribble-based segmentation pipeline, with a weighted loss function that accounts for network- and interaction-based uncertainty during fine-tuning.
Few-Shot Semantic Segmentation with Prototype Learning
- Computer Science · BMVC
- 2018
A generalized framework for few-shot semantic segmentation is proposed, with an alternative training scheme based on prototype learning and metric learning; it outperforms the baselines by a large margin and shows comparable performance for 1-way few-shot semantic segmentation on the PASCAL VOC 2012 dataset.
Few-Shot Segmentation Propagation with Guided Networks
- Computer Science · arXiv
- 2018
This work addresses the problem of few-shot segmentation: given a few images and a few pixels of supervision, segment new images accordingly. It proposes guided networks, which extract a latent task representation from any amount of supervision, and optimizes the architecture end-to-end for fast, accurate few-shot segmentation.
PANet: Few-Shot Image Semantic Segmentation With Prototype Alignment
- Computer Science · 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
This paper tackles the challenging few-shot segmentation problem from a metric learning perspective and presents PANet, a novel prototype alignment network that better utilizes the information in the support set to generalize to unseen object categories.
One-Shot Learning for Semantic Segmentation
- Computer Science · BMVC
- 2017
This work trains a network that, given a small set of annotated images, produces parameters for a Fully Convolutional Network (FCN), and uses this FCN to perform dense pixel-level prediction on a test image for the new semantic class.
U-Net: Convolutional Networks for Biomedical Image Segmentation
- Computer Science · MICCAI
- 2015
It is shown that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
SG-One: Similarity Guidance Network for One-Shot Semantic Segmentation
- Computer Science · IEEE Transactions on Cybernetics
- 2020
This article proposes a simple yet effective similarity guidance network (SG-One) to tackle the one-shot segmentation problem, aiming to predict the segmentation mask of a query image with reference to one densely labeled support image of the same category.
From 3D to 2D: Transferring knowledge for rib segmentation in chest X-rays
- Computer Science · Pattern Recognition Letters
- 2020
Attention-Based Multi-Context Guiding for Few-Shot Semantic Segmentation
- Computer Science · AAAI
- 2019
An Attention-based Multi-Context Guiding (A-MCG) network is proposed, consisting of three branches: a support branch, a query branch, and a feature fusion branch. A spatial attention mechanism along the fusion branch highlights context information at several scales, enhancing self-supervision in one-shot learning.