Corpus ID: 166227922

Combating Label Noise in Deep Learning Using Abstention

@inproceedings{thulasidasan2019combating,
  title={Combating Label Noise in Deep Learning Using Abstention},
  author={Sunil Thulasidasan and Tanmoy Bhattacharya and Jeff A. Bilmes and Gopinath Chennupati and Jamaludin Mohd-Yusof},
  booktitle={International Conference on Machine Learning},
  year={2019}
}
We introduce a novel method to combat label noise when training deep neural networks for classification. In the case of unstructured (arbitrary) label noise, abstention during training enables the deep abstaining classifier (DAC) to be used as an effective data cleaner by identifying samples that are likely to have label noise. We provide analytical results on the loss function behavior that enable dynamic adaptation of abstention rates based on learning progress during training. We demonstrate the utility of the deep…
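The abstention mechanism described above can be sketched as a loss over k real classes plus one extra abstention output: a renormalized cross-entropy term weighted by how much probability the model keeps on the real classes, plus an α-weighted penalty for abstaining. The exact formulation is in the paper; the function below is a hedged numpy illustration, not the authors' implementation.

```python
import numpy as np

def dac_loss(logits, label, alpha):
    """Abstention-augmented cross-entropy (sketch of the DAC idea).

    `logits` has k+1 entries; the last one is the abstention class.
    Small alpha makes abstention cheap; large alpha discourages it.
    """
    z = logits - logits.max()            # numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax over k+1 outputs
    p_abstain = p[-1]
    # cross-entropy over the k real classes, renormalized to exclude abstention
    ce = -np.log(p[label] / (1.0 - p_abstain))
    # abstention penalty grows as the model pushes mass onto the abstain class
    return (1.0 - p_abstain) * ce + alpha * np.log(1.0 / (1.0 - p_abstain))
```

With this form, a sample the network keeps abstaining on (high `p_abstain` even as alpha rises) is a natural candidate for the data-cleaning use described in the abstract.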


Multi-Objective Interpolation Training for Robustness to Label Noise

A novel label noise detection method that exploits the robust feature representations learned via contrastive learning to estimate per-sample soft-labels whose disagreements with the original labels accurately identify noisy samples.

A Survey of Label-noise Representation Learning: Past, Present and Future

A formal definition of Label-Noise Representation Learning is clarified from the perspective of machine learning and the reason why noisy labels affect deep models' performance is figured out.

PNP: Robust Learning from Noisy Labels by Probabilistic Noise Prediction

This work proposes a simple yet effective approach named PNP (Probabilistic Noise Prediction) that explicitly models label noise by predicting it probabilistically, identifies noisy samples, and adopts dedicated optimization objectives accordingly.

SELF: Learning to Filter Noisy Labels with Self-Ensembling

This work presents a simple and effective method self-ensemble label filtering (SELF) to progressively filter out the wrong labels during training that substantially outperforms all previous works on noise-aware learning across different datasets and can be applied to a broad set of network architectures.

Confidence Adaptive Regularization for Deep Learning with Noisy Labels

This work introduces an indicator branch to the original model and enables the model to produce a confidence value for each sample, and proposes an auxiliary regularization term to further improve the robustness of the model.

MixNN: Combating Noisy Labels in Deep Learning by Mixing with Nearest Neighbors

  • Yangdi Lu, Wenbo He
  • Computer Science
    2021 IEEE International Conference on Big Data (Big Data)
  • 2021
This work proposes MixNN, a novel framework to mitigate the influence of noisy labels by modifying the mislabeled samples by mixing them with their nearest neighbors through a weighted sum approach, and proposes a strategy to estimate the soft targets by gradually correcting the noisy labels.
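The mixing step MixNN describes can be illustrated as a weighted sum of a sample's features with the mean of its nearest neighbors' features. The weight value and the use of a plain mean are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def mixnn_mix(x, neighbor_feats, w=0.7):
    """Blend a (possibly mislabeled) sample with the mean of its nearest
    neighbors' features via a weighted sum; w is an illustrative choice."""
    return w * x + (1.0 - w) * neighbor_feats.mean(axis=0)
```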

Learning to Combat Noisy Labels via Classification Margins

This work proposes MARVEL (MARgins Via Early Learning), which tracks the goodness of fit for every instance by maintaining an epoch-history of its classification margins; MARVEL outperforms other baselines consistently across different noise levels, with a significantly larger advantage under asymmetric noise.
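A minimal sketch of margin tracking, assuming the margin is the labeled-class logit minus the best competing logit, and that a simple mean-over-epochs threshold flags noisy samples (the aggregation rule here is an illustrative assumption, not MARVEL's exact criterion):

```python
import numpy as np

def classification_margin(logits, label):
    """Margin of the labeled class over the best other class; persistently
    negative margins suggest a mislabeled sample."""
    others = np.delete(logits, label)
    return logits[label] - others.max()

def flag_noisy(margin_history, threshold=0.0):
    """Flag a sample whose mean margin over its epoch history stays below
    threshold (illustrative aggregation rule)."""
    return np.mean(margin_history) < threshold
```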

Scalable Penalized Regression for Noise Detection in Learning with Noisy Labels

This paper proposes using a theoretically guaranteed noisy label detection framework to detect and remove noisy data for Learning with Noisy Labels (LNL), and designs a penalized regression to model the linear relation between network features and one-hot labels.

SELC: Self-Ensemble Label Correction Improves Learning with Noisy Labels

This work shows that training with SELC refines the model by gradually reducing supervision from noisy labels and increasing supervision from ensemble predictions, and obtains more promising and stable results in the presence of class-conditional, instance-dependent, and real-world label noise.
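Self-ensemble label correction of this kind can be sketched as an exponential moving average that gradually shifts the training target from the original (possibly noisy) label toward the model's ensemble prediction; the momentum value below is an assumption.

```python
import numpy as np

def selc_update(target, probs, alpha=0.9):
    """One label-correction step: the training target is an exponential
    moving average of its previous value and the model's current softmax
    prediction (a sketch of the SELC-style update)."""
    return alpha * target + (1.0 - alpha) * probs
```

Because the update is a convex combination, the corrected target remains a valid probability distribution at every step.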

A Survey on Deep Learning with Noisy Labels: How to train your model when you cannot trust on the annotations?

  • F. Cordeiro, G. Carneiro
  • Computer Science
    2020 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)
  • 2020
A survey on the main techniques in literature to improve the training of deep learning models in the presence of noisy labels is presented, in which the algorithm is classified in the following groups: robust losses, sample weighting, sample selection, meta-learning, and combined approaches.



Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise

It is demonstrated that robustness to label noise up to severe strengths can be achieved by using a set of trusted data with clean labels, and a loss correction that utilizes trusted examples in a data-efficient manner to mitigate the effects of label noise on deep neural network classifiers is proposed.

Training Deep Neural Networks on Noisy Labels with Bootstrapping

A generic way to handle noisy and incomplete labeling by augmenting the prediction objective with a notion of consistency is proposed, which considers a prediction consistent if the same prediction is made given similar percepts, where the notion of similarity is between deep network features computed from the input data.
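The bootstrapping objective blends the given (possibly noisy) one-hot label with the model's own prediction before computing cross-entropy; a minimal sketch of the soft variant, with β as the blending weight (β = 1 recovers plain cross-entropy):

```python
import numpy as np

def soft_bootstrap_loss(probs, onehot, beta=0.95):
    """Soft bootstrapping: the target is a convex blend of the supplied
    label and the model's current prediction, so confident self-predictions
    can partially override a noisy label."""
    target = beta * onehot + (1.0 - beta) * probs
    return -np.sum(target * np.log(probs))
```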

Learning from Noisy Labels with Distillation

This work proposes a unified distillation framework to use “side” information, including a small clean dataset and label relations in knowledge graph, to “hedge the risk” of learning from noisy labels, and proposes a suite of new benchmark datasets to evaluate this task in Sports, Species and Artifacts domains.

Deep Learning is Robust to Massive Label Noise

It is shown that deep neural networks are capable of generalizing from training data for which true labels are massively outnumbered by incorrect labels, and that training in this regime requires a significant but manageable increase in dataset size that is related to the factor by which correct labels have been diluted.

Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels

A theoretically grounded set of noise-robust loss functions that can be seen as a generalization of MAE and CCE are presented and can be readily applied with any existing DNN architecture and algorithm, while yielding good performance in a wide range of noisy label scenarios.
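The generalized cross entropy family interpolates between cross-entropy and an MAE-like loss through a single parameter q: the loss on the labeled-class probability p_y is (1 − p_y^q) / q, recovering CE as q → 0 and 1 − p_y at q = 1. As a sketch:

```python
import numpy as np

def gce_loss(p_y, q=0.7):
    """Generalized cross entropy on the labeled-class probability p_y.
    q -> 0 recovers CE (-log p_y); q = 1 gives the MAE-like 1 - p_y."""
    return (1.0 - p_y ** q) / q
```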

Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach

It is proved that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise, and it is shown how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and providing an end-to-end framework.
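Forward loss correction of this style pushes the model's clean-label posterior through an estimated noise-transition matrix before computing cross-entropy on the observed noisy label. A minimal sketch, assuming the convention T[i, j] = P(noisy label j | true label i):

```python
import numpy as np

def forward_corrected_ce(p_clean, noisy_label, T):
    """Forward-corrected cross-entropy: predict the *noisy* label
    distribution by pushing the clean posterior through T, then score
    the observed noisy label against it."""
    p_noisy = T.T @ p_clean
    return -np.log(p_noisy[noisy_label])
```

When T is the identity (no noise), this reduces to ordinary cross-entropy.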

MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels

Experimental results demonstrate that the proposed novel technique of learning another neural network, called MentorNet, to supervise the training of the base deep networks, namely, StudentNet, can significantly improve the generalization performance of deep networks trained on corrupted training data.

Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks

The proposed novel framework for training deep convolutional neural networks from noisy labeled datasets that can be obtained cheaply is applied to the image labeling problem and is shown to be effective in labeling unseen images as well as reducing label noise in training on CIFAR-10 and MS COCO datasets.

Co-teaching: Robust training of deep neural networks with extremely noisy labels

Empirical results on noisy versions of MNIST, CIFAR-10 and CIFAR-100 demonstrate that Co-teaching is much superior to state-of-the-art methods in the robustness of trained deep models.
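The small-loss selection rule at the heart of co-teaching (each of the two networks picks the lowest-loss fraction of a mini-batch as probably-clean samples for its peer to train on) can be sketched as:

```python
import numpy as np

def small_loss_select(losses, keep_rate):
    """Return indices of the `keep_rate` fraction of samples with the
    smallest loss -- the probably-clean subset each network hands to
    its peer in co-teaching."""
    k = max(1, int(round(keep_rate * len(losses))))
    return np.argsort(losses)[:k]
```

In the full algorithm the keep rate is typically decayed over epochs as the networks start to memorize noisy labels; that schedule is omitted here.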

Learning and Evaluation in Presence of Non-i.i.d. Label Noise

This paper presents a novel methodology for learning and evaluation in presence of systematic label noise, the core of which is a novel extension of support vector data description / one-class SVM that can incorporate latent variables.