Corpus ID: 211132676

Learning Not to Learn in the Presence of Noisy Labels

@article{Ziyin2020LearningNT,
  title={Learning Not to Learn in the Presence of Noisy Labels},
  author={Liu Ziyin and Blair Chen and Ru Wang and Paul Pu Liang and Ruslan Salakhutdinov and Louis-Philippe Morency and Masahito Ueda},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.06541}
}
Learning in the presence of label noise is a challenging yet important task: it is crucial to design models that are robust in the presence of mislabeled datasets. In this paper, we discover that a new class of loss functions called the gambler's loss provides strong robustness to label noise across various levels of corruption. We show that training with this loss function encourages the model to "abstain" from learning on the data points with noisy labels, resulting in a simple and effective…
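The abstract only names the gambler's loss; as a non-authoritative illustration, the sketch below implements the abstention loss -log(p_y + p_{m+1}/o) from the authors' earlier Deep Gamblers formulation, which this paper builds on. The function name gamblers_loss, the payoff argument, and the numerical epsilon are illustrative choices, not code from the paper.

import torch
import torch.nn.functional as F

def gamblers_loss(logits, targets, payoff):
    """Sketch of the gambler's loss for m-class classification.

    logits  -- tensor of shape (batch, m + 1); the extra (m+1)-th output
               is the abstention "bet".
    targets -- tensor of shape (batch,) with integer labels in [0, m).
    payoff  -- scalar o with 1 < o <= m; smaller o makes abstaining
               cheaper, so the model gives up sooner on hard (likely
               mislabeled) points.
    """
    probs = F.softmax(logits, dim=-1)
    p_label = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # p_y
    p_abstain = probs[:, -1]                                    # p_{m+1}
    # The model can bet on the given label or hedge via abstention:
    # loss = -log(p_y + p_abstain / o).
    return -torch.log(p_label + p_abstain / payoff + 1e-12).mean()

Larger values of o penalize abstention more, behaving closer to plain cross-entropy, while smaller values encourage abstention, consistent with the abstract's claim that the model learns to "abstain" from learning on mislabeled points.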

Citations

A Survey on Deep Learning with Noisy Labels: How to train your model when you cannot trust on the annotations?
  • F. Cordeiro, G. Carneiro
  • Computer Science
  • 2020 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)
  • 2020
An Investigation of how Label Smoothing Affects Generalization
Distributional Generalization: A New Kind of Generalization
Classification Under Human Assistance
Volumization as a Natural Generalization of Weight Decay
Model-Agnostic Meta-Learning for EEG Motor Imagery Decoding in Brain-Computer-Interfacing
  • Denghao Li, Pablo Ortega, Xia Wei, A. Faisal
  • Computer Science, Engineering
  • 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER)
  • 2021
Differentiable Learning Under Triage
Learning to Combat Noisy Labels via Classification Margins

References

Showing 1-10 of 34 references
How does Disagreement Help Generalization against Label Corruption?
Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
Avoiding Your Teacher's Mistakes: Training Neural Networks with Controlled Weak Supervision
Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach
Understanding Generalization of Deep Neural Networks Trained with Noisy Labels
Understanding deep learning requires rethinking generalization
SGD on Neural Networks Learns Functions of Increasing Complexity
Adam: A Method for Stochastic Optimization