Overfitting of neural nets under class imbalance: Analysis and improvements for segmentation

@inproceedings{Li2019OverfittingON,
  title={Overfitting of neural nets under class imbalance: Analysis and improvements for segmentation},
  author={Zeju Li and Konstantinos Kamnitsas and Ben Glocker},
  booktitle={MICCAI},
  year={2019}
}
Overfitting in deep learning has been the focus of a number of recent works, yet its exact impact on the behavior of neural networks is not well understood. [...] Based on our analysis, we derive asymmetric modifications of existing loss functions and regularizers, including a large margin loss, focal loss, adversarial training and mixup, which specifically aim at reducing the shift observed when embedding unseen samples of the under-represented class. We study the case of binary segmentation of brain…
Analyzing Overfitting Under Class Imbalance in Neural Networks for Image Segmentation
New asymmetric variants of popular loss functions and regularization techniques including a large margin loss, focal loss, adversarial training, mixup and data augmentation, which are explicitly designed to counter logit shift of the under-represented classes are introduced.
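To make the asymmetric idea concrete, here is a minimal NumPy sketch of one plausible asymmetric focal loss for binary segmentation, in which the focusing term down-weights only the over-represented background class so the rare foreground class always receives the full cross-entropy signal. The function name and exact form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def asymmetric_focal_loss(p, y, gamma=2.0):
    """Binary focal loss with the focusing term applied only to the
    background (over-represented) class, so losses on the rare
    foreground class are never down-weighted.

    p     : predicted foreground probabilities, shape (N,)
    y     : binary labels, 1 = foreground (rare), 0 = background
    gamma : focusing parameter (gamma = 0 recovers plain
            cross-entropy on the background term)
    """
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    # Foreground: plain cross-entropy (the rare class keeps full weight)
    fg = -y * np.log(p)
    # Background: standard focal term (1 - p_t)^gamma * CE, with p_t = 1 - p
    bg = -(1 - y) * (p ** gamma) * np.log(1 - p)
    return float(np.mean(fg + bg))
```

Because only the background term is modulated, the loss is never smaller than cross-entropy on foreground voxels, which is the asymmetry the abstract describes.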
Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss
A theoretically-principled label-distribution-aware margin (LDAM) loss motivated by minimizing a margin-based generalization bound is proposed that replaces the standard cross-entropy objective during training and can be applied with prior strategies for training with class-imbalance such as re-weighting or re-sampling.
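The LDAM idea of a label-distribution-aware margin can be sketched as subtracting a per-class margin delta_j = C / n_j^(1/4), larger for rarer classes, from the true-class logit before the softmax cross-entropy. This NumPy sketch is an illustration under that reading; the name and constant are assumptions.

```python
import numpy as np

def ldam_loss(logits, labels, class_counts, C=0.5):
    """Label-distribution-aware margin (LDAM) loss, sketched.

    Subtracts delta_j = C / n_j**0.25 from the true-class logit,
    enforcing a larger margin for classes with fewer samples n_j.
    """
    deltas = C / np.asarray(class_counts, dtype=float) ** 0.25
    z = logits.astype(float).copy()
    n = len(labels)
    z[np.arange(n), labels] -= deltas[labels]  # shrink the true-class logit
    # numerically stable log-softmax cross-entropy
    z -= z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(n), labels].mean())
```

Setting C = 0 recovers plain cross-entropy, so the margin term can be tuned independently of the base objective.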
Constrained Optimization for Training Deep Neural Networks Under Class Imbalance
It is argued that prediction accuracy should be improved by emphasizing reducing FPRs at high TPRs for problems where misclassification of the positive samples are associated with higher cost, and introduced a novel constraint that can be used with existing loss functions to enforce maximal area under the ROC curve (AUC).
Improve Unseen Domain Generalization via Enhanced Local Color Transformation
This work considers a realistic problem of domain generalization in fundus image analysis: when a model is trained on a certain domain but tested on unseen domains, and introduces an easy-to-use method, named enhanced domain transformation (EDT), to improve the performance on both seen and unseen data.
Learn to Threshold: ThresholdNet With Confidence-Guided Manifold Mixup for Polyp Segmentation
A novel ThresholdNet with a confidence-guided manifold mixup (CGMMix) data augmentation method, mainly for addressing the aforementioned issues in polyp segmentation, and is able to calibrate the segmentation result with the learned threshold map.
MetaBalance: High-Performance Neural Networks for Class-Imbalanced Data
The method, MetaBalance, is evaluated on image classification, credit-card fraud detection, loan default prediction, and facial recognition tasks with severely imbalanced data, and it is found that MetaBalance outperforms a wide array of popular re-sampling strategies.
Bayesian Sampling Bias Correction: Training with the Right Loss Function
A family of loss functions to train models in the presence of sampling bias are derived, based on Bayesian risk minimization, for arbitrary likelihood models, exhibiting a direct connection to information gain.
Enhancing MR Image Segmentation with Realistic Adversarial Data Augmentation
Results show that the proposed adversarial data augmentation approach can alleviate the need for labeled data while improving model generalization ability, indicating its practical value in medical imaging applications.
A Stacked Generalization U-shape network based on zoom strategy and its application in biomedical image segmentation
The proposed SG-UNet is essentially a stacked generalization architecture consisting of multiple sub-modules, which takes multi-resolution images as input and uses hybrid features to segment regions of interest and detect diseases under the multi-supervision, and achieves higher accuracy with less computational complexity than other stacked ensemble networks for biomedical image segmentation.
Universal Loss Reweighting to Balance Lesion Size Inequality in 3D Medical Image Segmentation
A loss reweighting approach to increase the ability of the network to detect small lesions and shows that inverse weighting considerably increases the detection quality, while preserves the delineation quality on a state-of-the-art level.
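One possible reading of inverse weighting, sketched here with hypothetical names: give each voxel a loss weight inversely proportional to the size of the lesion (connected component) it belongs to, so that small lesions contribute as much to the objective as large ones. Instance labels are assumed precomputed.

```python
import numpy as np

def inverse_size_weights(instance_labels):
    """Per-voxel weights inversely proportional to lesion size.

    instance_labels : integer array, 0 = background, k > 0 = lesion id.
    Background voxels keep weight 1; voxels of a lesion with n voxels
    get weight 1/n, so every lesion sums to total weight 1.
    """
    w = np.ones(instance_labels.shape, dtype=float)
    ids, counts = np.unique(instance_labels[instance_labels > 0],
                            return_counts=True)
    for k, n in zip(ids, counts):
        w[instance_labels == k] = 1.0 / n
    return w
```

These weights would multiply a per-voxel loss such as cross-entropy before averaging.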

References

mixup: Beyond Empirical Risk Minimization
This work proposes mixup, a simple learning principle that trains a neural network on convex combinations of pairs of examples and their labels, which improves the generalization of state-of-the-art neural network architectures.
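The mixup principle is simple enough to sketch directly: sample a mixing weight from a Beta distribution and take the same convex combination of both the inputs and their (one-hot or soft) labels. A minimal NumPy sketch, with the seeded generator as an assumption for reproducibility:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """mixup: train on convex combinations of example pairs.

    Returns (x_mix, y_mix) with mixing weight lam ~ Beta(alpha, alpha).
    Labels must be one-hot (or soft) so they can be mixed linearly.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x1 + (1 - lam) * x2
    y_mix = lam * y1 + (1 - lam) * y2
    return x_mix, y_mix
```

Small alpha values concentrate lam near 0 or 1, so most mixed samples stay close to one of the two originals.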
Explaining and Harnessing Adversarial Examples
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
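The linearity argument motivates the fast gradient sign method (FGSM): a single step of size eps in the sign of the input gradient. Sketched here under the assumption that the loss gradient with respect to the input has already been computed by some framework:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """Fast gradient sign method: a one-step, L_inf-bounded
    perturbation in the direction that locally increases the loss.

    x    : input array
    grad : gradient of the training loss with respect to x
    eps  : perturbation budget (maximum per-coordinate change)
    """
    return x + eps * np.sign(grad)
```

Training on such perturbed inputs alongside clean ones is the adversarial-training regularizer the parent paper builds its asymmetric variant on.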
Large-Margin Softmax Loss for Convolutional Neural Networks
A generalized large-margin softmax (L-Softmax) loss which explicitly encourages intra-class compactness and inter-class separability between learned features and which not only can adjust the desired margin but also can avoid overfitting is proposed.
Focal Loss for Dense Object Detection
This paper proposes to address the extreme foreground-background class imbalance encountered during training of dense detectors by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples, and develops a novel Focal Loss, which focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training.
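The reshaping described above can be written down compactly: focal loss scales the cross-entropy of each example by (1 - p_t)^gamma, so confidently classified examples contribute almost nothing. A minimal NumPy sketch of the binary case with the alpha balancing factor:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: (1 - p_t)^gamma down-weights easy examples.

    p     : predicted foreground probabilities
    y     : binary labels (1 = foreground)
    alpha : class-balancing weight for the foreground class
    """
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    at = np.where(y == 1, alpha, 1 - alpha)  # class-balancing factor
    return float(np.mean(-at * (1 - pt) ** gamma * np.log(pt)))
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) the standard cross-entropy, which is why gamma is described as a focusing parameter.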
Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation
An efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data, and improves on the state-of-the-art for all three applications.
Small Organ Segmentation in Whole-body MRI using a Two-stage FCN and Weighting Schemes
This work proposes a two-stage approach with weighting schemes based on auto-context and spatial atlas priors that can boost the segmentation accuracy of multiple small organs in whole-body MRI scans.
Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features
This set of labels and features should enable direct utilization of the TCGA/TCIA glioma collections towards repeatable, reproducible and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments, as well as performance evaluation of computer-aided segmentation methods.
MICCAI Multi-Atlas Labeling Beyond the Cranial Vault Workshop and Challenge
  • 2015