Corpus ID: 233392710

Beyond pixel-wise supervision for segmentation: A few global shape descriptors might be surprisingly good!

Hoel Kervadec, Houda Bahig, Laurent Létourneau-Guillon, José Dolz, Ismail Ben Ayed
Standard losses for training deep segmentation networks can be seen as individual classifications of pixels, rather than supervision of the global shape of the predicted segmentations. While effective, they require exact knowledge of the label of each pixel in an image. This study investigates how effective global geometric shape descriptors can be when used on their own as segmentation losses for training deep networks. Not only is this interesting theoretically; there exist deeper motivations to… 
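The core idea of supervising with a few global shape descriptors instead of pixel-wise labels can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the choice of descriptors (foreground size and centroid) and the quadratic penalty on size are assumptions made for the sketch.

```python
import numpy as np

def size_loss(probs, target_size):
    """Penalize the mismatch between the predicted foreground size
    (sum of softmax probabilities) and a known target size.
    probs: (H, W) array of foreground probabilities."""
    pred_size = probs.sum()
    return (pred_size - target_size) ** 2 / probs.size

def soft_centroid(probs, eps=1e-8):
    """Differentiable centroid of the predicted foreground region,
    another global shape descriptor usable as a supervision target."""
    h, w = probs.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = probs.sum() + eps
    return np.array([(ys * probs).sum() / total,
                     (xs * probs).sum() / total])
```

Both quantities are differentiable in the network's softmax outputs, so they can serve as (weak) training signals where only a region's size or rough position is annotated, rather than every pixel.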


Differentiable Boundary Point Extraction for Weakly Supervised Star-shaped Object Segmentation

This study proposes to extract boundary points from a star-shaped segmentation in a differentiable manner, which reduces the annotation burden: instead of a pixel-wise segmentation, only the two annotated points required for diameter measurement are used to train the model.

Test-Time Adaptation with Shape Moments for Image Segmentation

This work investigates test-time single-subject adaptation for segmentation, and proposes a Shape-guided Entropy Minimization objective for tackling this task, which exhibits substantially better performance than existing test-time adaptation methods.
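A shape-guided entropy-minimization objective of this flavour can be sketched as an entropy term plus a penalty tying a predicted shape moment to a prior. This is an illustrative sketch under stated assumptions: the specific moment (foreground size), the quadratic penalty, and the weighting `lam` are choices made here, not necessarily those of the paper.

```python
import numpy as np

def entropy(probs, eps=1e-8):
    """Mean Shannon entropy of per-pixel predictions.
    probs: (C, N) class probabilities over N pixels."""
    return -(probs * np.log(probs + eps)).sum(axis=0).mean()

def shape_guided_objective(probs, prior_size, lam=0.1):
    """Entropy minimization regularized by a zeroth-order shape
    moment (foreground size) matched against a prior estimate.
    probs: (2, N) background/foreground probabilities."""
    size_penalty = ((probs[1].sum() - prior_size) / probs.shape[1]) ** 2
    return entropy(probs) + lam * size_penalty
```

Minimizing this at test time pushes the network toward confident predictions while the moment term keeps the segmentation's global shape statistics close to the prior.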

The hidden label-marginal biases of segmentation losses

This work provides a theoretical analysis showing that CE and Dice share a much deeper connection than previously thought, and proposes a principled and simple solution that enables explicit control of the label-marginal bias.

Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation

This work proposes a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end and demonstrates how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.

What's the Point: Semantic Segmentation with Point Supervision

This work takes a natural step from image-level annotation towards stronger supervision: it asks annotators to point to an object if one exists, and incorporates this point supervision along with a novel objectness potential in the training loss function of a CNN model.

DeepCut: Object Segmentation From Bounding Box Annotations Using Convolutional Neural Networks

This paper proposes a method to obtain pixel-wise object segmentations given an image dataset labelled with weak annotations, in this case bounding boxes, and tests its applicability to brain and lung segmentation problems on a challenging fetal magnetic resonance dataset.

DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

This work addresses the task of semantic image segmentation with Deep Learning, proposes atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales, and improves the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models.

Weakly Supervised Deep Nuclei Segmentation using Points Annotation in Histopathology Images

Experimental results reveal that the proposed method achieves competitive performance compared to the fully supervised counterpart and state-of-the-art methods, while requiring significantly less annotation effort.

Weakly- and Semi-Supervised Learning of a Deep Convolutional Network for Semantic Image Segmentation

This work develops Expectation-Maximization (EM) methods for training semantic image segmentation models under weakly supervised and semi-supervised settings; extensive experimental evaluation shows that the proposed techniques learn models delivering competitive results on the challenging PASCAL VOC 2012 segmentation benchmark, while requiring significantly less annotation effort.

Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations

This work investigates the behaviour of common segmentation loss functions and their sensitivity to learning-rate tuning under different rates of label imbalance across 2D and 3D segmentation tasks, and proposes to use the class re-balancing properties of the Generalised Dice overlap as a robust and accurate deep-learning loss function for unbalanced tasks.
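The class re-balancing at the heart of the Generalised Dice overlap can be sketched as follows: each class is weighted by the inverse square of its ground-truth volume, so rare classes contribute as much as dominant ones. A minimal sketch in NumPy, assuming flattened (class, pixel) arrays:

```python
import numpy as np

def generalized_dice_loss(probs, onehot, eps=1e-8):
    """Generalised Dice loss with inverse-square-volume class weights.
    probs:  (C, N) soft predictions over N pixels.
    onehot: (C, N) one-hot ground truth."""
    # Rare classes get large weights, re-balancing the overlap term.
    w = 1.0 / (onehot.sum(axis=1) ** 2 + eps)
    intersection = (w * (probs * onehot).sum(axis=1)).sum()
    union = (w * (probs + onehot).sum(axis=1)).sum()
    return 1.0 - 2.0 * intersection / (union + eps)
```

A perfect prediction drives the loss to zero regardless of how unbalanced the classes are, which is the property that makes this formulation attractive for highly unbalanced segmentations.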

Boundary loss for highly unbalanced segmentation

V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation

This work proposes an approach to 3D image segmentation based on a volumetric, fully convolutional neural network, trained end-to-end on MRI volumes depicting the prostate, which learns to predict the segmentation of the whole volume at once.

U-Net: Convolutional Networks for Biomedical Image Segmentation

It is shown that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.