Corpus ID: 235266140

Analysis of classifiers robust to noisy labels

@article{Diaz2021AnalysisOC,
  title={Analysis of classifiers robust to noisy labels},
  author={A. F. Díaz and David Steele},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.00274}
}
We explore contemporary robust classification algorithms for overcoming class-dependent labelling noise: Forward, Importance Re-weighting and T-revision. The classifiers are trained and evaluated on data with class-conditional random label noise, while the final test data is clean. We demonstrate methods for estimating the transition matrix in order to obtain better classifier performance when working with noisy data. We apply deep learning to three datasets and derive an end-to-end analysis with… 
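All three methods named in the abstract hinge on the noise transition matrix T, where T[i, j] = P(ỹ = j | y = i). As a concrete anchor for the discussion, here is a minimal PyTorch sketch of Forward loss correction under the assumption that T is already known; the matrix values and batch below are invented for illustration.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, T):
    """Forward correction: push the model's clean class posteriors through
    the noise transition matrix T (T[i, j] = P(noisy = j | clean = i))
    before comparing against the observed noisy labels."""
    clean_probs = F.softmax(logits, dim=1)   # model's clean posteriors
    noisy_probs = clean_probs @ T            # implied noisy posteriors
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_labels)

# Hypothetical usage: a 3-class problem where 20% of class 0 flips to class 1.
T = torch.tensor([[0.8, 0.2, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
logits = torch.randn(4, 3, requires_grad=True)
labels = torch.tensor([0, 1, 2, 0])
loss = forward_corrected_loss(logits, labels, T)
loss.backward()
```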

References

Showing 1-10 of 13 references

Multiclass Learning With Partially Corrupted Labels

TLDR
This paper investigates the multiclass classification problem where a certain fraction of the training examples are randomly mislabeled, shows that this issue can be formulated as a label noise problem, and employs the widely used importance reweighting strategy so that learning on noisy data more closely reflects the results on noise-free data.

Classification with Noisy Labels by Importance Reweighting

  • Tongliang Liu, D. Tao
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2016
TLDR
It is proved that any surrogate loss function can be used for classification with noisy labels by using importance reweighting, with a consistency guarantee that the label noise does not ultimately hinder the search for the optimal classifier of the noise-free sample.
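A hedged sketch of the reweighting idea described above: each example's loss is scaled by the ratio of the clean to the noisy class posterior for its observed label. Deriving the noisy posterior from a transition matrix, as below, is one possible instantiation rather than necessarily the estimator used in the paper.

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits, noisy_labels, T):
    """Importance reweighting: weight each example's loss by
    P(clean = y~ | x) / P(noisy = y~ | x), with both posteriors read off
    the model's own softmax outputs and a transition matrix T (an
    assumption in this sketch)."""
    clean_probs = F.softmax(logits, dim=1)
    noisy_probs = clean_probs @ T
    idx = torch.arange(logits.size(0))
    # Detach the weights so they act as constants during backprop.
    weights = (clean_probs[idx, noisy_labels] /
               noisy_probs[idx, noisy_labels].clamp_min(1e-12)).detach()
    per_example = F.cross_entropy(logits, noisy_labels, reduction="none")
    return (weights * per_example).mean()
```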

Learning to Learn From Noisy Labeled Data

TLDR
This work proposes a noise-tolerant training algorithm, where a meta-learning update is performed prior to the conventional gradient update, and trains the model such that after one gradient update using each set of synthetic noisy labels, the model does not overfit to the specific noise.
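To make the "meta-update before the conventional update" concrete, here is a heavily simplified sketch using a single linear layer so the virtual parameter update can be written with plain tensors; the paper applies this to a deep network and adds a consistency objective with a mentor model, omitted here. The 30% synthetic flip rate is an arbitrary assumption.

```python
import torch
import torch.nn.functional as F

W = torch.randn(3, 10, requires_grad=True)    # 3 classes, 10 features
x = torch.randn(16, 10)                        # one mini-batch
noisy_labels = torch.randint(0, 3, (16,))
inner_lr = 0.1

# Step 1 (meta-update): inject synthetic label noise and take a virtual
# gradient step, keeping the graph so we can differentiate through it.
synthetic = noisy_labels.clone()
flip = torch.rand(16) < 0.3                    # assumed 30% synthetic flips
synthetic[flip] = torch.randint(0, 3, (int(flip.sum()),))
inner_loss = F.cross_entropy(x @ W.t(), synthetic)
(grad,) = torch.autograd.grad(inner_loss, W, create_graph=True)
W_fast = W - inner_lr * grad

# Step 2: the meta-objective asks that, after the virtual noisy update,
# predictions still fit the original targets, so the learned parameters
# do not overfit any one noise pattern.
meta_loss = F.cross_entropy(x @ W_fast.t(), noisy_labels)
meta_loss.backward()                           # gradient flows back to W
```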

Learning from massive noisy labeled data for image classification

TLDR
A general framework is introduced to train CNNs with only a limited number of clean labels and millions of easily obtained noisy labels; the relationships between images, class labels and label noise are modeled with a probabilistic graphical model that is further integrated into an end-to-end deep learning system.

Training Convolutional Networks with Noisy Labels

TLDR
An extra noise layer is introduced into the network which adapts the network outputs to match the noisy label distribution; this layer can be estimated as part of the training process and involves only simple modifications to current training infrastructures for deep networks.
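One common way to realize such a noise layer, offered as an illustrative sketch rather than the paper's exact parameterization: a learnable row-stochastic matrix applied to the base model's softmax outputs during training and dropped at test time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseAdaptationLayer(nn.Module):
    """Extra layer mapping the base model's clean posteriors to noisy-label
    posteriors; its weights (row-stochastic after the softmax) are learned
    jointly with the network and discarded at evaluation time."""
    def __init__(self, num_classes):
        super().__init__()
        # Initialize near the identity so training starts from "no noise".
        self.logits_T = nn.Parameter(torch.eye(num_classes) * 5.0)

    def forward(self, clean_probs):
        T = F.softmax(self.logits_T, dim=1)   # rows are P(noisy | clean)
        return clean_probs @ T

# Training: base softmax -> NoiseAdaptationLayer -> NLL vs. noisy labels.
# Evaluation: use the base softmax outputs directly.
```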

Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey

Learning from Noisy Labels with Deep Neural Networks

TLDR
A novel way of modifying deep learning models is proposed so they can be effectively trained on data with a high level of label noise, and it is shown that random images without labels can improve the classification performance.

Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach

TLDR
It is proved that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise, and it is shown how one can estimate the label-flip probabilities, adapting a recent technique for noise estimation to the multi-class setting and providing an end-to-end framework.
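The noise-estimation technique referred to here can be sketched as follows: treat the example the model is most confident about for each class as an approximate anchor point, and read off a row of T from its predicted noisy-class distribution. This is a simplified rendering of that estimator, not a drop-in reproduction of the paper's code.

```python
import numpy as np

def estimate_transition_matrix(noisy_posteriors):
    """Anchor-point estimator: for each class i, pick the example the model
    is most confident belongs to i (an approximate anchor point) and use
    its predicted noisy-class distribution as row i of the estimate.

    noisy_posteriors: (n_examples, n_classes) softmax outputs of a model
    trained directly on the noisy data."""
    n_classes = noisy_posteriors.shape[1]
    T = np.zeros((n_classes, n_classes))
    for i in range(n_classes):
        anchor = np.argmax(noisy_posteriors[:, i])   # most-confident example
        T[i] = noisy_posteriors[anchor]              # row i of the estimate
    return T
```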

Classification in the Presence of Label Noise: A Survey

TLDR
In this paper, label noise consists of mislabeled instances: no additional information, such as confidences on labels, is assumed to be available.

Are Anchor Points Really Indispensable in Label-Noise Learning?

TLDR
Empirical results on benchmark-simulated and real-world label-noise datasets demonstrate that without using exact anchor points, the proposed method is superior to the state-of-the-art label-noise learning methods.
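A sketch of the revision idea, assuming forward correction as the backbone: start from an imperfect estimate T0 (e.g. from the anchor-point heuristic above) and learn a small correction ΔT jointly with the classifier, so exact anchor points are never required. Shapes and initial values below are invented for illustration.

```python
import torch
import torch.nn.functional as F

# Initial estimate: 0.8 on the diagonal, 0.1 off-diagonal (rows sum to 1).
T0 = torch.full((3, 3), 0.1) + 0.7 * torch.eye(3)
delta_T = torch.zeros(3, 3, requires_grad=True)   # learned correction

def revised_forward_loss(logits, noisy_labels):
    # Keep the revised matrix non-negative and row-stochastic.
    T = F.normalize((T0 + delta_T).clamp_min(1e-6), p=1, dim=1)
    noisy_probs = F.softmax(logits, dim=1) @ T
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_labels)

# delta_T receives gradients through the same loss that trains the network,
# so the transition matrix is refined while the classifier is fit.
```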