Corpus ID: 67749977

Transfusion: Understanding Transfer Learning with Applications to Medical Imaging

@article{Raghu2019TransfusionUT,
  title={Transfusion: Understanding Transfer Learning with Applications to Medical Imaging},
  author={Maithra Raghu and Chiyuan Zhang and Jon M. Kleinberg and Samy Bengio},
  journal={ArXiv},
  year={2019},
  volume={abs/1902.07208}
}
With the increasingly varied applications of deep learning, transfer learning has emerged as a critically important technique. [...] Key Result: This similarity is evidenced both through model learning dynamics and a transfusion experiment, which explores the convergence speed using a subset of pretrained weights.
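The transfusion experiment mentioned above can be pictured as initializing only part of a network from pretrained weights and training the rest from scratch. A minimal PyTorch/torchvision sketch of that idea follows; the ResNet-50 choice, the layer prefixes, and the `transfuse` helper are illustrative assumptions rather than the authors' exact protocol (the `weights=` argument assumes torchvision >= 0.13).

```python
import torchvision.models as models

def transfuse(pretrained, target, prefixes=("conv1", "bn1", "layer1")):
    """Copy only the parameters whose names start with the given prefixes."""
    src = pretrained.state_dict()
    dst = target.state_dict()
    for name, tensor in src.items():
        if name.startswith(prefixes) and name in dst and tensor.shape == dst[name].shape:
            dst[name] = tensor
    target.load_state_dict(dst)
    return target

pretrained = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
scratch = models.resnet50(weights=None)   # fully random initialization
model = transfuse(pretrained, scratch)    # reuse only the earliest blocks
# `model` is then fine-tuned on the target (e.g. medical imaging) task as usual.
```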
Citations

Training Deep Learning models with small datasets
TLDR
This article reviews, evaluates and compares the current state-of-the-art techniques in training neural networks to elucidate which techniques work best for small datasets, and proposes a path forward for the improvement of model accuracy in medical imaging applications.
Critical Assessment of Transfer Learning for Medical Image Segmentation with Fully Convolutional Neural Networks
TLDR
The experiments show that although transfer learning reduces the training time on the target task, the improvement in segmentation accuracy is highly task/data-dependent, and it is shown that quite accurate FCNs can be built by freezing the encoder section of the network at random values and only training the decoder section.
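As a concrete picture of the frozen-random-encoder experiment described above, here is a minimal PyTorch sketch using a torchvision segmentation model as a stand-in (the model choice, class count, and optimizer are assumptions for illustration, not the cited paper's setup; assumes torchvision >= 0.13):

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

# Fully convolutional network with a randomly initialized backbone (encoder).
model = fcn_resnet50(weights=None, weights_backbone=None, num_classes=2)

for p in model.backbone.parameters():
    p.requires_grad = False              # encoder stays frozen at random values

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad),  # decoder/classifier only
    lr=1e-4,
)
# Training then proceeds as usual; only the decoder is updated.
```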
Domain-Specific, Semi-Supervised Transfer Learning for Medical Imaging
TLDR
This work proposes a lightweight architecture that uses mixed asymmetric kernels (MAKNet) to reduce the number of parameters significantly and trains the proposed architecture using semi-supervised learning to provide pseudo-labels for a large medical dataset to assist with transfer learning.
Measuring Domain Shift for Deep Learning in Histopathology
TLDR
This work focuses on the internal representation learned by trained convolutional neural networks, and shows how this can be used to formulate a novel measure – the representation shift – for quantifying the magnitude of model-specific domain shift.
A Closer Look at Domain Shift for Deep Learning in Histopathology
TLDR
A novel measure for evaluating the distance between domains in the context of the learned representation of a particular model is presented, which can reveal how sensitive a model is to domain variations, and can be used to detect new data that a model will have problems generalizing to.
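A rough sketch of a representation-shift style measure in the spirit of the two entries above: compare the distribution of a chosen layer's activations on reference data against the distribution on new, possibly shifted data. Averaging per-channel 1D Wasserstein distances, as below, is a simplifying assumption, not necessarily the papers' exact definition.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def representation_shift(ref_acts, new_acts):
    """ref_acts, new_acts: arrays of shape (num_samples, num_channels) holding
    pooled activations of one network layer on each dataset."""
    per_channel = [
        wasserstein_distance(ref_acts[:, c], new_acts[:, c])
        for c in range(ref_acts.shape[1])
    ]
    return float(np.mean(per_channel))  # larger value = larger domain shift
```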
Visualizing and interpreting feature reuse of pretrained CNNs for histopathology
TLDR
This paper takes the example of finetuning a pretrained convolutional network on a histopathology task and finds that texture measures appear discriminative after finetuning, as shown by accurate Regression Concept Vectors.
A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis
TLDR
This work presents a systematic study of the transferability of models pre-trained on iNat2021, the most recent large-scale fine-grained dataset, and of 14 top self-supervised ImageNet models, evaluated on 7 diverse medical tasks in comparison with the supervised ImageNet model.
Ensembling and transfer learning for multi-domain microscopy image segmentation
TLDR
It is shown that deep neural networks can efficiently segment microscopy images even when domain shift is present, and that transfer learning from other medical tasks is an effective strategy to reduce the amount of required annotated data.
Transfer learning for small-scale data classification using CNN filter replacement
Object recognition using CNNs has recently become widespread. Still, medical imaging datasets often lack a sufficient number of images because the training data requires a doctor's findings. On such a [...]
Cats, not CAT scans: a study of dataset similarity in transfer learning for 2D medical image classification
TLDR
It is found that ImageNet is the source leading to the highest performance, but also that larger datasets are not necessarily better, and that common intuitions about similarity may be inaccurate and therefore not sufficient to predict an appropriate source a priori.

References

Showing 1-10 of 31 references
Domain Adaptive Transfer Learning with Specialist Models
TLDR
It is found that more pre-training data does not always help and that transfer performance depends on a judicious choice of pre-training data; domain adaptive transfer learning is proposed, a simple and effective pre-training method using importance weights computed based on the target dataset.
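A rough sketch of the importance-weighting idea described above, under the simplifying assumption that each pre-training example is weighted by the ratio of its label's frequency in the target data to its frequency in the source data (the helper below is illustrative, not the paper's exact procedure):

```python
from collections import Counter

def label_importance_weights(source_labels, target_labels):
    """w(y) ~ P_target(y) / P_source(y); labels unseen in the target get weight 0."""
    src, tgt = Counter(source_labels), Counter(target_labels)
    n_src, n_tgt = len(source_labels), len(target_labels)
    return {y: (tgt[y] / n_tgt) / (src[y] / n_src) for y in src}

# Each source example's loss is then scaled by weights[its_label] during pre-training.
weights = label_importance_weights(
    source_labels=["cat", "dog", "car", "car"],
    target_labels=["cat", "cat", "dog"],
)
```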
How transferable are features in deep neural networks?
TLDR
This paper quantifies the generality versus specificity of neurons in each layer of a deep convolutional neural network and reports a few surprising results, including that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.
Deep Image Prior
TLDR
It is shown that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting.
Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-Tuning
  • Weifeng Ge, Yizhou Yu
  • Computer Science
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
TLDR
This paper introduces a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data, which can improve classification accuracy by 2%-10% using a single model.
Interleaved text/image Deep Mining on a large-scale radiology database
TLDR
The large-scale datasets of extracted key images and their categorization, embedded vector labels and sentence descriptions can be harnessed to alleviate the deep learning “data-hungry” obstacle in the medical domain.
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
TLDR
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
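For reference, the Batch Normalization transform from this paper normalizes each activation using its mini-batch statistics and then applies a learned scale and shift:

```latex
\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2, \qquad
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta
```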
Understanding the difficulty of training deep feedforward neural networks
TLDR
The objective here is to better understand why standard gradient descent from random initialization does so poorly with deep neural networks, in order to better understand these recent relative successes and help design better algorithms in the future.
Learning Multiple Layers of Features from Tiny Images
TLDR
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning
TLDR
This work proposes a measure to estimate domain similarity via Earth Mover's Distance and demonstrates that transfer learning benefits from pre-training on a source domain that is similar to the target domain by this measure.
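A simplified sketch of the Earth Mover's Distance domain-similarity idea described above: represent each domain by per-class feature centroids weighted by class frequency, and compute the EMD between the two weighted point sets. The centroid representation and the use of the POT library (`ot`) are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np
import ot  # Python Optimal Transport: pip install pot

def domain_emd(src_centroids, src_freqs, tgt_centroids, tgt_freqs):
    """Earth Mover's Distance between two domains, each given as per-class
    feature centroids (num_classes, feature_dim) and class frequencies."""
    a = np.asarray(src_freqs, dtype=float); a /= a.sum()
    b = np.asarray(tgt_freqs, dtype=float); b /= b.sum()
    # Pairwise Euclidean costs between the class centroids of the two domains.
    M = ot.dist(np.asarray(src_centroids), np.asarray(tgt_centroids), metric="euclidean")
    return ot.emd2(a, b, M)  # smaller distance = more similar domains
```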