Corpus ID: 238215870

FINE Samples for Learning with Noisy Labels

@inproceedings{Kim2021FINESF,
  title={FINE Samples for Learning with Noisy Labels},
  author={Taehyeon Kim and Jongwoo Ko and Sangwook Cho and Jinhwan Choi and Se-Young Yun},
  year={2021}
}
Modern deep neural networks (DNNs) become brittle when their training datasets contain noisy (incorrect) class labels. Robust techniques in the presence of noisy labels can be categorized into two types: developing noise-robust functions or using noise-cleansing methods that detect the noisy data. Recently, noise-cleansing methods have been regarded as among the most competitive noisy-label learning algorithms. Despite their success, their noisy-label detectors are often based on heuristics rather than on theory…

References

Showing 1–10 of 52 references
Robust Inference via Generative Classifiers for Handling Noisy Labels
TLDR: This work proposes a novel inference method, termed Robust Generative classifier (RoG), applicable to any discriminative neural classifier pre-trained on noisy datasets, and proves that RoG generalizes better than baselines under noisy labels.
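A minimal sketch of the underlying idea of a generative classifier on fixed features: fit class-conditional Gaussians with a shared (tied) covariance to penultimate-layer features and classify by Mahalanobis distance. The function names and the plain tied-covariance estimator below are illustrative assumptions, not the authors' exact RoG procedure (which additionally uses robust covariance estimation).

import numpy as np

def fit_gaussian_classifier(features, labels, num_classes):
    # Class means and a single shared (tied) covariance, as in LDA.
    d = features.shape[1]
    means = np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])
    cov = np.zeros((d, d))
    for c in range(num_classes):
        diff = features[labels == c] - means[c]
        cov += diff.T @ diff
    cov /= len(features)
    return means, np.linalg.inv(cov + 1e-6 * np.eye(d))

def predict(features, means, precision):
    # Mahalanobis-distance based class scores (higher = closer to the class mean).
    scores = np.stack([-np.einsum('nd,dk,nk->n', features - m, precision, features - m)
                       for m in means], axis=1)
    return scores.argmax(axis=1)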
Symmetric Cross Entropy for Robust Learning With Noisy Labels
TLDR: The proposed Symmetric cross entropy Learning (SL) approach simultaneously addresses both the under-learning and overfitting problems of CE in the presence of noisy labels, and SL is shown empirically to outperform state-of-the-art methods.
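For reference, the symmetric cross entropy combines standard CE with a reverse CE term in which the log of the one-hot label distribution is clipped at a constant A (since log 0 is undefined). The weights alpha, beta and the clipping value below are illustrative defaults, not necessarily the paper's settings.

import numpy as np

def symmetric_cross_entropy(probs, labels, alpha=0.1, beta=1.0, A=-4.0):
    # probs: (N, C) softmax outputs; labels: (N,) integer noisy labels.
    n = len(labels)
    one_hot = np.eye(probs.shape[1])[labels]
    ce = -np.log(probs[np.arange(n), labels] + 1e-12).mean()           # CE(label, prediction)
    rce = -(probs * np.where(one_hot > 0, 0.0, A)).sum(axis=1).mean()  # reverse CE with log(0) -> A
    return alpha * ce + beta * rce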
Robust Loss Functions under Label Noise for Deep Neural Networks
TLDR: This paper provides sufficient conditions on a loss function so that risk minimization under that loss function is inherently tolerant to label noise for multiclass classification problems, generalizing existing results on noise-tolerant loss functions for binary classification.
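The sufficient condition referenced here is symmetry of the loss: summing the loss over all possible labels yields a constant. MAE satisfies it while categorical cross entropy does not. A small numerical check of that property (the function name and example probabilities are illustrative):

import numpy as np

def mae_loss(probs, label):
    # MAE between softmax output and one-hot label; equals 2 * (1 - p_label).
    one_hot = np.eye(len(probs))[label]
    return np.abs(probs - one_hot).sum()

probs = np.array([0.7, 0.2, 0.1])
# Symmetry check: the loss summed over every possible label is the constant 2 * (C - 1).
print(sum(mae_loss(probs, j) for j in range(3)))  # 4.0 regardless of probs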
Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise
TLDR: It is demonstrated that robustness to label noise, even at severe strengths, can be achieved by using a set of trusted data with clean labels, and a loss correction that uses the trusted examples in a data-efficient manner to mitigate the effects of label noise on deep neural network classifiers is proposed.
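One common way to use a small trusted set, and roughly the scheme this line of work describes, is to estimate a label-corruption matrix C (C[i][j] ≈ p(noisy label j | true label i)) from the trusted examples and then train on the noisy data with the model's softmax output pushed through C before the cross entropy. The sketch below shows only those two steps, with illustrative names and simplifications rather than the paper's exact procedure.

import numpy as np

def estimate_corruption_matrix(model_probs, true_labels, num_classes):
    # model_probs: (N, C) predictions of a model trained on noisy data,
    # evaluated on the trusted set; true_labels: clean labels of that set.
    C = np.zeros((num_classes, num_classes))
    for c in range(num_classes):
        C[c] = model_probs[true_labels == c].mean(axis=0)
    return C

def corrected_loss(probs, noisy_labels, C):
    # Forward correction: map clean-class probabilities through C before CE.
    corrected = probs @ C                       # (N, C) probabilities over noisy labels
    n = len(noisy_labels)
    return -np.log(corrected[np.arange(n), noisy_labels] + 1e-12).mean()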
Training Deep Neural Networks on Noisy Labels with Bootstrapping
TLDR: A generic way to handle noisy and incomplete labeling is proposed, augmenting the prediction objective with a notion of consistency: a prediction is considered consistent if the same prediction is made given similar percepts, where similarity is measured between deep network features computed from the input data.
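The "soft" bootstrapping variant of this idea replaces the noisy target with a convex combination of the given label and the network's own current prediction. The mixing weight beta below is an illustrative value.

import numpy as np

def soft_bootstrap_loss(probs, noisy_labels, beta=0.95):
    # Target = beta * one-hot noisy label + (1 - beta) * current prediction,
    # then ordinary cross entropy against that blended target.
    n, c = probs.shape
    one_hot = np.eye(c)[noisy_labels]
    target = beta * one_hot + (1.0 - beta) * probs
    return -(target * np.log(probs + 1e-12)).sum(axis=1).mean()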
L_DMI: A Novel Information-theoretic Loss Function for Training Deep Nets Robust to Label Noise
TLDR: A novel information-theoretic loss function, L_DMI, is proposed; it is the first loss function that is provably robust to instance-independent label noise regardless of the noise pattern, and it can be applied to any existing classification neural network straightforwardly without any auxiliary information.
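L_DMI is computed per batch from the determinant of the empirical joint distribution between the classifier's output distribution and the observed noisy labels. The sketch below follows that description; variable names and the numerical stabilizer are illustrative.

import numpy as np

def dmi_loss(probs, noisy_labels):
    # probs: (N, C) softmax outputs; noisy_labels: (N,) observed labels.
    n, c = probs.shape
    one_hot = np.eye(c)[noisy_labels]
    U = probs.T @ one_hot / n          # empirical joint distribution matrix (C x C)
    return -np.log(np.abs(np.linalg.det(U)) + 1e-12)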
Coresets for Robust Training of Neural Networks against Noisy Labels
TLDR: The key idea behind the method is to select weighted subsets (coresets) of clean data points that provide an approximately low-rank Jacobian matrix, and it is proven that gradient descent applied to these subsets does not overfit the noisy labels.
Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
TLDR: A theoretically grounded set of noise-robust loss functions, which can be seen as a generalization of MAE and CCE, is presented; they can be readily applied with any existing DNN architecture and algorithm while yielding good performance in a wide range of noisy-label scenarios.
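The generalized cross entropy ("Lq") loss interpolates between CCE (as q approaches 0) and MAE (at q = 1). The value q = 0.7 below is the commonly reported default, stated here as an assumption.

import numpy as np

def generalized_cross_entropy(probs, labels, q=0.7):
    # Lq loss: (1 - p_y^q) / q, recovering CE as q -> 0 and MAE (up to scale) at q = 1.
    n = len(labels)
    p_y = probs[np.arange(n), labels]
    return ((1.0 - p_y ** q) / q).mean()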
A Topological Filter for Learning with Label Noise
TLDR: This paper proposes a new method for filtering label noise that focuses on the much richer spatial behavior of data in the latent representational space and proves that this topological approach is guaranteed to collect the clean data with high probability.
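A rough sketch of this kind of spatial filtering: for each class, build a k-nearest-neighbour graph over the feature vectors of the samples carrying that label and keep only the largest connected component as "clean". The use of sklearn/scipy here, and the value of k, are illustrative choices rather than the paper's exact procedure.

import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import connected_components

def filter_by_largest_component(features, noisy_labels, k=5):
    keep = np.zeros(len(noisy_labels), dtype=bool)
    for c in np.unique(noisy_labels):
        idx = np.where(noisy_labels == c)[0]
        if len(idx) <= k:
            keep[idx] = True
            continue
        graph = kneighbors_graph(features[idx], n_neighbors=k, include_self=False)
        _, comp = connected_components(graph, directed=False)
        largest = np.bincount(comp).argmax()
        keep[idx[comp == largest]] = True
    return keep   # boolean mask of samples treated as clean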
Learning from Noisy Large-Scale Datasets with Minimal Supervision
TLDR: An approach to effectively use millions of images with noisy annotations in conjunction with a small subset of cleanly-annotated images to learn powerful image representations; it is particularly effective for a large number of classes with a wide range of noise in the annotations.