Learning from Partially Overlapping Labels: Image Segmentation under Annotation Shift

Gregory Filbrandt, Konstantinos Kamnitsas, David Bernstein, Alexandra Taylor, Ben Glocker
Scarcity of high-quality annotated images remains a limiting factor for training accurate image segmentation models. While more and more annotated datasets become publicly available, the number of samples in each individual database is often small. Combining different databases to create larger amounts of training data is appealing yet challenging due to the heterogeneity that results from differences in data acquisition and annotation processes, often yielding incompatible or even conflicting…

Unpaired Multi-Modal Segmentation via Knowledge Distillation
This work proposes a novel learning scheme for unpaired cross-modality image segmentation, with a highly compact architecture achieving superior segmentation accuracy, and introduces a novel loss term inspired by knowledge distillation.
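The loss term mentioned above builds on the general idea of knowledge distillation, in which a student network is trained to match a teacher network's temperature-softened class distribution. The sketch below shows that generic Hinton-style loss only; the paper's actual loss term differs in detail, and the helper names and temperature value here are illustrative assumptions.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-softened softmax: higher T flattens the distribution.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the teacher's and student's softened distributions."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))

# Toy two-class logits for one sample.
loss = distillation_loss([2.0, 0.5], [3.0, -1.0], T=2.0)
print(round(loss, 3))
```

The temperature is the key design knob: with T > 1 the teacher's near-zero probabilities carry more gradient signal, which is what lets the student learn inter-class structure rather than just the argmax label.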
Joint Learning of Brain Lesion and Anatomy Segmentation from Heterogeneous Datasets
This work focuses on training a single CNN model to predict brain tissue and lesion segmentations using heterogeneous datasets labeled independently, according to only one of these tasks (a common scenario when using publicly available datasets), and proposes a novel adaptive cross entropy (ACE) loss function that makes such training possible.
Unsupervised domain adaptation in brain lesion segmentation with adversarial networks
This work investigates unsupervised domain adaptation using adversarial neural networks to train a segmentation method which is more robust to differences in the input data, and which does not require any annotations on the test domain.
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)
The set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences are reported, finding that different algorithms worked best for different sub-regions, but that no single algorithm ranked top for all sub-regions simultaneously.
NeuroNet: Fast and Robust Reproduction of Multiple Brain Image Segmentation Pipelines
The network is trained on 5,000 T1-weighted brain MRI scans from the UK Biobank Imaging Study that have been automatically segmented into brain tissue and cortical and sub-cortical structures using the standard neuroimaging pipelines and demonstrates very good reproducibility of the original outputs while increasing robustness to variations in the input data.
Deep Learning for Multi-Task Medical Image Segmentation in Multiple Modalities
This paper investigates whether a single convolutional neural network (CNN) can be trained to perform different segmentation tasks in medical images.
Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation
An efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data, and improves on the state-of-the-art for all three applications.
The Liver Tumor Segmentation Benchmark (LiTS)
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS) organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2016 and…
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks, but it becomes unwieldy when learning large datasets, so Mean Teacher, a method that averages model weights instead of label predictions, is proposed.
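The core mechanism of Mean Teacher, averaging model weights rather than label predictions, amounts to maintaining the teacher's parameters as an exponential moving average (EMA) of the student's parameters after each training step. A minimal sketch, with illustrative function and parameter names (`ema_update`, `decay`) not taken from the paper's code:

```python
def ema_update(teacher_params, student_params, decay=0.99):
    """Return teacher parameters updated as an EMA of the student parameters."""
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]

# Toy usage with scalar "parameters": the teacher drifts slowly
# toward the student, smoothing out per-step noise.
teacher = [0.0, 0.0]
student = [1.0, 2.0]
teacher = ema_update(teacher, student, decay=0.9)
print(teacher)  # approximately [0.1, 0.2]
```

In practice the same update is applied to every tensor of the network after each optimizer step, and the smoothed teacher provides the consistency targets for unlabeled data.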
Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks
This simple and efficient method of semi-supervised learning for deep neural networks is proposed, trained in a supervised fashion with labeled and unlabeled data simultaneously and favors a low-density separation between classes.
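The pseudo-labeling recipe described above can be sketched as follows: treat the model's most confident class prediction on an unlabeled sample as if it were a true label, and mix those samples into supervised training. The confidence threshold below is a common practical addition, not necessarily part of the original method, and the function name is illustrative.

```python
def pseudo_labels(probs, threshold=0.95):
    """Return (sample_index, predicted_label) pairs for unlabeled samples
    whose top predicted class probability exceeds the threshold."""
    selected = []
    for i, p in enumerate(probs):
        label = max(range(len(p)), key=lambda c: p[c])  # argmax class
        if p[label] >= threshold:
            selected.append((i, label))
    return selected

# Toy predicted class distributions for three unlabeled samples:
# only the confident ones (samples 0 and 2) receive pseudo-labels.
probs = [[0.98, 0.02], [0.60, 0.40], [0.01, 0.99]]
print(pseudo_labels(probs))  # [(0, 0), (2, 1)]
```

The selected pairs are then fed into the usual supervised loss alongside the genuinely labeled data, which, as the summary notes, pushes the decision boundary into low-density regions between classes.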