Learning with Less Labels in Digital Pathology Via Scribble Supervision from Natural Images

  • Eu Wern Teh, Graham W. Taylor
  • Published 7 January 2022
  • Computer Science
  • 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)
A critical challenge of training deep learning models in the Digital Pathology (DP) domain is the high cost of annotation by medical experts. One way to tackle this issue is transfer learning from the natural image (NI) domain, where annotation is considerably cheaper. Cross-domain transfer learning from NI to DP has been shown to be successful via class labels [1]. One potential weakness of relying on class labels is the lack of spatial information, which can be obtained from spatial… 
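Scribble supervision of the kind the abstract alludes to is commonly trained with a partial cross-entropy loss: only pixels touched by a scribble contribute to the loss, and all unlabeled pixels are ignored. A minimal plain-Python sketch (hypothetical names; not the paper's implementation):

```python
import math

def partial_cross_entropy(probs, labels, ignore_index=-1):
    """Cross-entropy averaged only over scribble-labeled pixels.

    probs:  per-pixel class-probability lists
    labels: per-pixel class indices; ignore_index marks unlabeled
            pixels, which contribute nothing to the loss.
    """
    total, count = 0.0, 0
    for p, y in zip(probs, labels):
        if y == ignore_index:
            continue  # unlabeled pixel: no supervision signal
        total += -math.log(p[y])
        count += 1
    return total / max(count, 1)
```

For example, with three pixels of which only the first two carry scribble labels, `partial_cross_entropy([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]], [0, 1, -1])` averages the loss over the two labeled pixels only.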

Learning with Less Data Via Weakly Labeled Patch Classification in Digital Pathology

It is shown that features learned from such weakly labeled datasets are indeed transferable and allow us to achieve highly competitive patch classification results on the colorectal cancer dataset and the PatchCamelyon (PCam) dataset while using an order of magnitude less labeled data.

Normalized Cut Loss for Weakly-Supervised CNN Segmentation

This work proposes a new principled loss function evaluating network output with criteria standard in "shallow" segmentation, e.g. normalized cut: the partial cross-entropy term evaluates only seeds where labels are known, while the normalized cut term softly evaluates consistency of all pixels.
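The soft normalized-cut term can be sketched directly from its definition, cut(A, Ā)/assoc(A) summed over classes, with soft assignments so that every pixel contributes. A toy sketch under assumed names (illustrative only, not the authors' code):

```python
def normalized_cut_loss(S, W):
    """Soft normalized cut over ALL pixels.

    S: per-pixel soft class assignments, S[p][k]
    W: pairwise pixel-affinity matrix, W[p][q]
    Low loss means strongly-connected pixels share a class.
    """
    n, K = len(S), len(S[0])
    loss = 0.0
    for k in range(K):
        # cut: affinity leaving class k; assoc: total affinity of class k
        cut = sum(W[p][q] * S[p][k] * (1 - S[q][k])
                  for p in range(n) for q in range(n))
        assoc = sum(W[p][q] * S[p][k]
                    for p in range(n) for q in range(n))
        loss += cut / assoc if assoc > 0 else 0.0
    return loss
```

On a two-pixel toy affinity `W = [[0, 1], [1, 0]]`, splitting the pixels into different classes is maximally penalized, while grouping them together drives the loss to zero.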

Deep neural network models for computational histopathology: A survey

On Regularized Losses for Weakly-supervised CNN Segmentation

This approach simplifies weakly-supervised training by avoiding extra MRF/CRF inference steps or layers explicitly generating full masks, while improving both the quality and efficiency of training.

ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic Segmentation

This paper proposes to use scribbles to annotate images, and develops an algorithm to train convolutional networks for semantic segmentation supervised by scribbles, which shows excellent results on the PASCAL-Context dataset thanks to extra inexpensive scribble annotations.

DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

This work addresses the task of semantic image segmentation with Deep Learning and proposes atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales, and improves the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models.
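Atrous (dilated) convolution, the building block behind ASPP, taps the input at offsets set by a dilation rate, enlarging the receptive field without adding parameters. A 1-D toy sketch (illustrative only):

```python
def dilated_conv1d(x, w, dilation=1):
    """'Atrous' 1-D convolution (valid padding, no flip).

    A kernel of length k with dilation d covers (k - 1) * d + 1
    input positions while still using only k weights.
    """
    k = len(w)
    span = (k - 1) * dilation  # extra input width the kernel covers
    return [sum(w[j] * x[i + j * dilation] for j in range(k))
            for i in range(len(x) - span)]
```

For example, a difference kernel `[1, 0, -1]` with `dilation=2` compares inputs four positions apart: `dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 0, -1], dilation=2)` yields `[-4, -4]`.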

Multi-class texture analysis in colorectal cancer histology

A new dataset of 5,000 histological images of human colorectal cancer including eight different types of tissue is presented and an optimal classification strategy is found that markedly outperformed traditional methods, improving the state of the art for tumour-stroma separation and setting a new standard for multiclass tissue separation.

What makes ImageNet good for transfer learning?

The overall findings suggest that most changes in the choice of pre-training data, long thought to be critical, do not significantly affect transfer performance.

Exploring the Limits of Weakly Supervised Pretraining

This paper presents a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images and shows improvements on several image classification and object detection tasks, and reports the highest ImageNet-1k single-crop, top-1 accuracy to date.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
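The residual formulation y = F(x) + x can be illustrated in a few lines; when F outputs zeros (e.g. at initialization), the block reduces to the identity, which is part of why very deep stacks remain optimizable. A toy sketch (not the paper's architecture):

```python
def residual_block(x, f):
    """Residual connection: the sub-network f learns only the
    residual F(x); the identity shortcut adds x back unchanged."""
    return [fi + xi for fi, xi in zip(f(x), x)]

# With f producing all zeros, the block is exactly the identity
# mapping, so stacking many such blocks cannot hurt at the start.
```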