Corpus ID: 220042181

Bayesian Sampling Bias Correction: Training with the Right Loss Function

@article{Folgoc2020BayesianSB,
  title={Bayesian Sampling Bias Correction: Training with the Right Loss Function},
  author={Lo{\"i}c Le Folgoc and Vasileios Baltatzis and Amir Alansary and Sneha Desai and Anand Devaraj and Sam Ellis and Octavio Martinez Manzanera and Fahdi Kanavati and Arjun Nair and Jutta Schnabel and Ben Glocker},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.13798}
}
We derive a family of loss functions to train models in the presence of sampling bias. Examples arise when the prevalence of a pathology differs from its sampling rate in the training dataset, or when a machine learning practitioner rebalances their training dataset. Sampling bias causes large discrepancies between model performance in the lab and in more realistic settings. It is omnipresent in medical imaging applications, yet is often overlooked at training time or addressed on an ad-hoc basis… 
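The core mechanism can be illustrated with a minimal sketch (not necessarily the exact loss family derived in the paper): reweight the per-sample loss by the ratio of deployment prevalence to training sampling rate, so that the empirical risk targets the population of interest. The prevalence and sampling-rate values below are illustrative assumptions.

# Sketch: cross-entropy reweighted by the ratio of deployment prevalence to
# training sampling rate. The numbers below are illustrative, not from the paper.
import torch
import torch.nn.functional as F

train_rate = torch.tensor([0.50, 0.50])       # class frequencies in the (rebalanced) training set
true_prevalence = torch.tensor([0.99, 0.01])  # assumed prevalence at deployment
class_weights = true_prevalence / train_rate  # importance weights pi_true(y) / pi_train(y)

def bias_corrected_ce(logits, targets):
    # per-sample cross-entropy, weighted by the class importance ratio
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (class_weights[targets] * per_sample).mean()

# usage: logits = model(x); loss = bias_corrected_ce(logits, y); loss.backward()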


References

Showing 1-10 of 26 references

Overfitting of neural nets under class imbalance: Analysis and improvements for segmentation

This study analyzes overfitting by examining how the distribution of logits shifts as the model overfits, and derives asymmetric modifications of existing loss functions and regularizers, including a large-margin loss, focal loss, adversarial training and mixup, that specifically aim at reducing the shift observed when embedding unseen samples of the under-represented class.
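One plausible instance of such an asymmetric modification, sketched under the assumption that the focal down-weighting term is applied only to samples from the over-represented class (class index 0 and gamma = 2.0 are illustrative choices, not taken from the paper):

# Hedged sketch of an asymmetric focal-style loss: the focal factor damps
# easy majority-class samples while rare-class gradients are left untouched.
import torch
import torch.nn.functional as F

def asymmetric_focal_loss(logits, targets, majority_class=0, gamma=2.0):
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of true class
    pt = log_pt.exp()
    focal = (1.0 - pt) ** gamma
    modulator = torch.where(targets == majority_class, focal, torch.ones_like(focal))
    return -(modulator * log_pt).mean()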

Learning and evaluating classifiers under sample selection bias

This paper formalizes the sample selection bias problem in machine learning terms and studies, analytically and experimentally, how a number of well-known classifier learning methods are affected by it.

Domain adaptation and sample bias correction theory and algorithm for regression

The Foundations of Cost-Sensitive Learning

It is argued that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision-tree learning methods; the recommended way of applying one of these methods is to learn a classifier from the training set and then compute optimal decisions explicitly using the probability estimates given by the classifier.
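A minimal sketch of that recipe, assuming a binary problem with zero cost for correct decisions and illustrative misclassification costs: train any probabilistic classifier as usual, then threshold its probability estimates at the cost-derived operating point.

# Sketch: decisions that minimize expected cost under the classifier's
# probability estimates. The cost values are illustrative assumptions.
import numpy as np

cost_fp, cost_fn = 1.0, 20.0                 # assumed costs of false positive / false negative
threshold = cost_fp / (cost_fp + cost_fn)    # predict positive when p(y=1|x) >= threshold

def decide(p_positive):
    return (np.asarray(p_positive) >= threshold).astype(int)

# usage: p = clf.predict_proba(X)[:, 1]; y_hat = decide(p)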

Correcting Sample Selection Bias by Unlabeled Data

A nonparametric method is presented that directly produces resampling weights without distribution estimation, by matching the distributions of the training and test sets in feature space.
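A rough sketch of that idea in the standard kernel mean matching form, assuming an RBF kernel and small sample sizes; the bound B, slack eps and kernel width gamma below are illustrative defaults.

# Sketch of kernel mean matching: solve for weights beta on the training points
# so that the weighted training mean matches the test mean in feature space.
import numpy as np
import cvxpy as cp
from sklearn.metrics.pairwise import rbf_kernel

def kmm_weights(X_train, X_test, B=10.0, eps=0.01, gamma=1.0):
    n_tr, n_te = len(X_train), len(X_test)
    K = rbf_kernel(X_train, X_train, gamma=gamma) + 1e-8 * np.eye(n_tr)  # keep the QP matrix PSD
    kappa = (n_tr / n_te) * rbf_kernel(X_train, X_test, gamma=gamma).sum(axis=1)
    beta = cp.Variable(n_tr)
    objective = cp.Minimize(0.5 * cp.quad_form(beta, K) - kappa @ beta)
    constraints = [beta >= 0, beta <= B, cp.abs(cp.sum(beta) - n_tr) <= n_tr * eps]
    cp.Problem(objective, constraints).solve()
    return beta.value

# usage: w = kmm_weights(X_tr, X_te); use w as per-sample weights in the training loss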

A Survey on Transfer Learning

The relationship between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias and covariate shift, is discussed.

Sample Selection Bias Correction Theory

Presents a theoretical analysis of sample selection bias correction based on the novel concept of distributional stability, which generalizes the existing concept of point-based stability and can be used to analyze other importance-weighting techniques and their effect on accuracy when using a distributionally stable algorithm.

Unsupervised domain adaptation in brain lesion segmentation with adversarial networks

This work investigates unsupervised domain adaptation using adversarial neural networks to train a segmentation method which is more robust to differences in the input data, and which does not require any annotations on the test domain.

Weight Uncertainty in Neural Networks

This work introduces a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop, and shows how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems.
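A minimal sketch of the weight-uncertainty idea, assuming a diagonal Gaussian variational posterior, a standard normal prior and a single posterior sample per forward pass (bias terms omitted for brevity); this illustrates the general approach rather than the authors' reference implementation.

# Sketch: a linear layer with a learned Gaussian distribution over its weights,
# trained via the reparameterization trick plus a closed-form KL penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.rho = nn.Parameter(torch.full((n_out, n_in), -5.0))  # sigma = softplus(rho)

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)  # one posterior sample of the weights
        # KL(q(w) || N(0, 1)), closed form for diagonal Gaussians
        self.kl = (0.5 * (sigma**2 + self.mu**2 - 1.0) - torch.log(sigma)).sum()
        return F.linear(x, w)

# usage: layer = BayesLinear(10, 2)
#        loss = F.cross_entropy(layer(x), y) + layer.kl / num_train_samples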