Corpus ID: 236134074

DeepSMILE: Self-supervised heterogeneity-aware multiple instance learning for DNA damage response defect classification directly from H&E whole-slide images

@article{Schirris2021DeepSMILESH,
  title={DeepSMILE: Self-supervised heterogeneity-aware multiple instance learning for DNA damage response defect classification directly from H\&E whole-slide images},
  author={Yoni Schirris and Efstratios Gavves and Iris Nederlof and Hugo M. Horlings and Jonas Teuwen},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.09405}
}
We propose a deep learning-based weak label learning method for analysing whole-slide images (WSIs) of Hematoxylin and Eosin (H&E)-stained tumor cells which does not require pixel-level or tile-level annotations, using Self-supervised pre-training and heterogeneity-aware deep Multiple Instance LEarning (DeepSMILE). We apply DeepSMILE to the task of homologous recombination deficiency (HRD) and microsatellite instability (MSI) prediction. We utilize contrastive self-supervised learning to pre-train a…
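To make the two-stage design concrete, below is a minimal PyTorch sketch of such a pipeline: tile features from a frozen self-supervised encoder are aggregated into one slide-level prediction by an attention-based MIL head. All names, dimensions, and the plain (non-gated) attention head are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Weak-label slide classifier over a bag of tile features."""
    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
        super().__init__()
        # Score each tile feature with a small attention network.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, tile_feats):                       # (n_tiles, feat_dim)
        weights = torch.softmax(self.attention(tile_feats), dim=0)
        slide_feat = (weights * tile_feats).sum(dim=0)   # attention-weighted mean
        return self.classifier(slide_feat), weights

# Tile features would come from a self-supervised (e.g. SimCLR-style) encoder
# pre-trained on unlabeled tumor tiles; random features stand in here.
tile_feats = torch.randn(1000, 512)                      # one WSI, 1000 tiles
logits, attn = AttentionMIL()(tile_feats)

Only the slide-level label supervises training: gradients flow through the attention weights, so the model learns which tiles carry the label's evidence without any tile annotations.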
WeakSTIL: Weak whole-slide image level stromal tumor infiltrating lymphocyte scores are all you need
We present WeakSTIL, an interpretable two-stage weak label deep learning pipeline for scoring the percentage of stromal tumor infiltrating lymphocytes (sTIL%) in H&E-stained whole-slide images (WSIs)…

References

Showing 1–10 of 58 references
Attention-based Deep Multiple Instance Learning
TLDR
This paper proposes a neural network-based, permutation-invariant aggregation operator that corresponds to the attention mechanism. It achieves comparable performance to the best MIL methods on benchmark MIL datasets and outperforms other methods on an MNIST-based MIL dataset and two real-life histopathology datasets without sacrificing interpretability.
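Concretely, the operator aggregates instance embeddings $\mathbf{h}_1, \dots, \mathbf{h}_K$ into a bag representation $\mathbf{z}$ as

$$\mathbf{z} = \sum_{k=1}^{K} a_k \mathbf{h}_k, \qquad a_k = \frac{\exp\{\mathbf{w}^{\top}\tanh(\mathbf{V}\mathbf{h}_k)\}}{\sum_{j=1}^{K}\exp\{\mathbf{w}^{\top}\tanh(\mathbf{V}\mathbf{h}_j)\}},$$

where $\mathbf{w}$ and $\mathbf{V}$ are learnable parameters. Because each weight $a_k$ depends only on its own instance and the weights are summed, the result is invariant to the ordering of instances in the bag.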
A Simple Framework for Contrastive Learning of Visual Representations
TLDR
It is shown that (1) the composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
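As an illustration of the contrastive objective this describes, here is a compact NT-Xent loss in PyTorch. The inputs z1 and z2 are the projection-head outputs for two augmented views of the same batch; the function name and temperature value are assumptions for the sketch, not SimCLR's reference code.

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views (batch of N each)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
    # The positive for index i is its other view: i+N (or i-N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

Each sample must pick out its other augmented view among 2N-1 candidates, which is exactly where the paper's observed benefit of larger batches (more negatives per positive) comes from.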
Extending Unsupervised Neural Image Compression With Supervised Multitask Learning
TLDR
The experimental results suggest that the representations learned by the MTL objective are: (1) highly specific, due to the supervised training signal, and (2) transferable, since the same features perform well across different tasks.
Genomic and Molecular Landscape of DNA Damage Repair Deficiency across The Cancer Genome Atlas
TLDR
These frequent DDR gene alterations in many human cancers have functional consequences that may determine cancer progression and guide therapy. A new machine-learning-based classifier developed from gene expression data enabled identification of alterations that phenocopy deleterious TP53 mutations.
Association of BRCA1/2 defects with genomic scores predictive of DNA damage repair deficiency among breast cancer subtypes
  • Breast Cancer Research
  • 2014
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
TLDR
Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
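The "16x16 words" tokenization can be sketched in a few lines; a stride-16 convolution is a standard way to implement ViT's linear patch embedding (the dimensions follow the ViT-Base configuration and are used here illustratively).

import torch
import torch.nn as nn

# Split a 224x224 image into 14x14 = 196 non-overlapping 16x16 patches and
# linearly embed each one; a stride-16 conv does both steps in one op.
patch_embed = nn.Conv2d(in_channels=3, out_channels=768,
                        kernel_size=16, stride=16)
img = torch.randn(1, 3, 224, 224)
tokens = patch_embed(img).flatten(2).transpose(1, 2)     # (1, 196, 768)

The resulting token sequence (plus a class token and position embeddings) is what the Transformer encoder consumes, exactly as a sentence of word embeddings would be.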
An annotation-free whole-slide training approach to pathological classification of lung cancer types using deep learning
TLDR
This work develops a method for training neural networks on entire WSIs using only slide-level diagnoses and demonstrates higher classification performance than multiple-instance learning as well as strong localization results for small lesions through class activation mapping.
CheXtransfer: performance and parameter efficiency of ImageNet models for chest X-Ray interpretation
TLDR
This work compares the transfer performance and parameter efficiency of 16 popular convolutional architectures on a large chest X-ray dataset (CheXpert). It finds no relationship between ImageNet performance and CheXpert performance, and that, for models without pretraining, the choice of model family influences performance more than size within a family for medical imaging tasks.
Data Efficient and Weakly Supervised Computational Pathology on Whole Slide Images
TLDR
The method, which is named clustering-constrained-attention multiple-instance learning (CLAM), uses attention-based learning to identify subregions of high diagnostic value to accurately classify whole slides and instance-level clustering over the identified representative regions to constrain and refine the feature space.
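A rough sketch of the instance-level clustering idea described here, assuming per-tile attention scores from a MIL model like the one sketched earlier; the value of k, the pseudo-labeling rule, and all tensor shapes are illustrative assumptions rather than CLAM's exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical inputs: per-tile features and each tile's MIL attention
# score for the predicted class.
feats = torch.randn(1000, 512)
attn = torch.rand(1000)

# Instance supervision: the k most-attended tiles are treated as in-class
# evidence, the k least-attended as out-of-class, and an auxiliary instance
# classifier trained on these pseudo-labels refines the feature space.
k = 8
pos = attn.topk(k).indices           # strongly attended -> pseudo-positive
neg = (-attn).topk(k).indices        # weakly attended   -> pseudo-negative
inst_clf = nn.Linear(512, 2)
idx = torch.cat([pos, neg])
labels = torch.cat([torch.ones(k), torch.zeros(k)]).long()
inst_loss = F.cross_entropy(inst_clf(feats[idx]), labels)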