Learning from Noisy Labels with Distillation

@article{Li2017LearningFN,
  title={Learning from Noisy Labels with Distillation},
  author={Yuncheng Li and Jianchao Yang and Yale Song and Liangliang Cao and Jiebo Luo and Li-Jia Li},
  journal={2017 IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
  pages={1928-1936}
}
The ability to learn from noisy labels is very useful in many visual recognition tasks, as a vast amount of data with noisy labels is relatively easy to obtain. […] The empirical study demonstrates the effectiveness of our proposed method in all the domains.
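A minimal sketch of the distillation idea, assuming the student's training target blends the observed noisy label with a teacher's soft prediction; the mixing weight lam, the helper names, and the toy numbers are illustrative assumptions, not values from the paper:

import numpy as np

def distillation_targets(noisy_onehot, teacher_probs, lam=0.7):
    # Blend noisy one-hot labels with soft predictions from a teacher
    # trained on a small clean set; lam is a hypothetical mixing weight.
    return lam * noisy_onehot + (1.0 - lam) * teacher_probs

def soft_cross_entropy(student_probs, targets, eps=1e-12):
    # The student is trained against the blended soft targets.
    return -np.mean(np.sum(targets * np.log(student_probs + eps), axis=1))

# Toy usage: 3 samples, 2 classes.
noisy   = np.array([[1, 0], [0, 1], [1, 0]], dtype=float)
teacher = np.array([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7]])
student = np.array([[0.8, 0.2], [0.3, 0.7], [0.5, 0.5]])
print(soft_cross_entropy(student, distillation_targets(noisy, teacher)))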

Citations

Distilling Effective Supervision From Severe Label Noise
TLDR
This paper presents a holistic framework to train deep neural networks in a way that is highly invulnerable to label noise and achieves excellent performance on large-scale datasets with real-world label noise.
Label Distribution for Learning with Noisy Labels
TLDR
A novel method named Label Distribution based Confidence Estimation (LDCE) is proposed, which estimates the confidence of the observed labels based on the label distribution and shows that the boundary between clean and noisy labels becomes clear according to the confidence scores.
Learning from Noisy Labels with Deep Neural Networks: A Survey
TLDR
A comprehensive review of 62 state-of-the-art robust training methods, all categorized into five groups according to their methodological differences, followed by a systematic comparison of six properties used to evaluate their superiority.
Label Noise Types and Their Effects on Deep Learning
TLDR
A detailed analysis of the effects of different kinds of label noise on learning is provided, and a generic framework to generate feature-dependent label noise is proposed, which is shown to be the most challenging case for learning.
Robust Curriculum Learning: from clean label detection to noisy label self-correction
TLDR
This paper starts with learning from clean data and then gradually moves to learning from noisy-labeled data, with pseudo labels produced by a time-ensemble of the model and data augmentations, resulting in more precise detection of both clean labels and correct pseudo labels.
Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels
TLDR
This paper proposes a framework called Class2Simi, which transforms data points with noisy class labels into data pairs with noisy similarity labels, where a similarity label denotes whether a pair shares the class label or not, and changes the loss computation on top of the model predictions into a pairwise form.
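The label transformation described above is easy to illustrate; a minimal sketch with hypothetical function names, turning (possibly noisy) class labels into pairwise similarity labels:

from itertools import combinations

def class_to_simi(labels):
    # For every pair (i, j), the similarity label is 1 if the two
    # examples share the (possibly noisy) class label, else 0.  Two
    # wrong class labels can still produce a correct "dissimilar" pair,
    # which is one way the transformation can reduce the effective noise rate.
    return [((i, j), int(labels[i] == labels[j]))
            for i, j in combinations(range(len(labels)), 2)]

print(class_to_simi([0, 1, 0, 2]))
# [((0, 1), 0), ((0, 2), 1), ((0, 3), 0), ((1, 2), 0), ((1, 3), 0), ((2, 3), 0)]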
Noisy Labels Can Induce Good Representations
TLDR
It is observed that if an architecture “suits” the task, training with noisy labels can induce useful hidden representations even when the model generalizes poorly, because the last few layers of the model are more negatively affected by noisy labels than the earlier ones.
Learning to Learn From Noisy Labeled Data
TLDR
This work proposes a noise-tolerant training algorithm in which a meta-learning update is performed prior to the conventional gradient update, training the model so that, after one gradient update on each set of synthetic noisy labels, it does not overfit to that specific noise.
Learning to Bootstrap for Combating Label Noise
TLDR
This paper proposes a more generic learnable loss objective which enables a joint reweighting of instances and labels at once, and dynamically adjusts the per-sample importance weight between the real observed labels and pseudo-labels, where the weights are efficiently determined in a meta process.
Deep Learning From Noisy Image Labels With Quality Embedding
TLDR
A probabilistic model is proposed that explicitly introduces an extra variable, termed the quality variable, to represent the trustworthiness of noisy labels; it effectively minimizes the influence of label noise and outperforms state-of-the-art deep learning approaches.

References

Showing 1-10 of 26 references
Learning from Noisy Labels with Deep Neural Networks
TLDR
A novel way of modifying deep learning models so they can be effectively trained on data with a high level of label noise is proposed, and it is shown that random images without labels can improve the classification performance.
Classification with Noisy Labels by Importance Reweighting
Tongliang Liu, D. Tao. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
TLDR
It is proved that any surrogate loss function can be used for classification with noisy labels by using importance reweighting, with the consistency guarantee that label noise does not ultimately hinder the search for the optimal classifier for the noise-free data.
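As a rough sketch of the importance-reweighting idea for binary labels, assuming known class-conditional flip rates rho_pos and rho_neg and an estimate of the noisy posterior; the names and numbers below are illustrative, not the paper's implementation:

import numpy as np

def importance_weights(noisy_posterior, y, rho_pos, rho_neg):
    # Weight beta(x, y) ~ P_clean(y | x) / P_noisy(y | x), where the
    # clean posterior is recovered from the noisy one and the assumed
    # flip rates rho_pos = P(flip | y=+1), rho_neg = P(flip | y=-1).
    p_noisy_y = np.where(y == 1, noisy_posterior, 1.0 - noisy_posterior)
    rho_other = np.where(y == 1, rho_neg, rho_pos)          # rho_{-y}
    p_clean_y = (p_noisy_y - rho_other) / (1.0 - rho_pos - rho_neg)
    return np.clip(p_clean_y, 0.0, 1.0) / np.maximum(p_noisy_y, 1e-12)

# Any surrogate loss can then be reweighted example by example:
losses = np.array([0.3, 1.2, 0.7])       # per-example surrogate losses
post   = np.array([0.9, 0.4, 0.8])       # estimated P_noisy(y=+1 | x)
y      = np.array([1, -1, 1])            # observed (noisy) labels
w = importance_weights(post, y, rho_pos=0.2, rho_neg=0.1)
print(np.mean(w * losses))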
Learning with Noisy Labels
TLDR
The problem of binary classification in the presence of random classification noise, where the learner sees labels that have been independently flipped with some small probability, is studied theoretically, and methods used in practice such as the biased SVM and weighted logistic regression are shown to be provably noise-tolerant.
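A minimal sketch of one such noise-corrected loss, the method of unbiased estimators for labels in {+1, -1}, assuming known flip rates; the toy logistic loss and numbers are illustrative:

import numpy as np

def unbiased_loss(loss_fn, t, y, rho_pos, rho_neg):
    # Corrected loss whose expectation over random label flips equals
    # the loss on the clean label; rho_pos = P(flip | y=+1),
    # rho_neg = P(flip | y=-1), and t is the classifier's real-valued score.
    rho_y     = rho_pos if y == 1 else rho_neg    # flip rate for the observed label's class
    rho_other = rho_neg if y == 1 else rho_pos    # rho_{-y}
    return ((1 - rho_other) * loss_fn(t, y) - rho_y * loss_fn(t, -y)) \
           / (1 - rho_pos - rho_neg)

logistic = lambda t, y: np.log1p(np.exp(-y * t))  # toy surrogate loss
print(unbiased_loss(logistic, t=0.8, y=1, rho_pos=0.2, rho_neg=0.1))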
Training Deep Neural Networks on Noisy Labels with Bootstrapping
TLDR
A generic way to handle noisy and incomplete labeling by augmenting the prediction objective with a notion of consistency is proposed, which considers a prediction consistent if the same prediction is made given similar percepts, where the notion of similarity is between deep network features computed from the input data.
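A minimal sketch of a "soft" bootstrapping target in the spirit of this consistency objective, blending the noisy label with the model's own current prediction; beta and the toy arrays are illustrative assumptions:

import numpy as np

def soft_bootstrap_targets(onehot, pred_probs, beta=0.95):
    # Training target that mixes the observed (possibly noisy) one-hot
    # label with the model's current class probabilities, so consistent
    # predictions can gradually override inconsistent labels.
    return beta * onehot + (1.0 - beta) * pred_probs

onehot = np.array([[1.0, 0.0], [0.0, 1.0]])
probs  = np.array([[0.6, 0.4], [0.9, 0.1]])
print(soft_bootstrap_targets(onehot, probs))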
Training Convolutional Networks with Noisy Labels
TLDR
An extra noise layer is introduced into the network which adapts the network outputs to match the noisy label distribution; its parameters can be estimated as part of the training process and require only simple modifications to current training infrastructures for deep networks.
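A minimal sketch of such a noise adaptation layer, assuming a row-stochastic flip matrix Q applied on top of the network's class probabilities; the matrix values are toy numbers:

import numpy as np

def noisy_output(clean_probs, Q):
    # Q[i, j] is the assumed probability that true class i is observed
    # as class j; the loss against the noisy labels is taken on this
    # output, while the layer underneath remains the clean-label predictor.
    return clean_probs @ Q

Q = np.full((3, 3), 0.1) + 0.7 * np.eye(3)   # 3 classes, 20% symmetric flips
p = np.array([0.8, 0.1, 0.1])                # network's clean-class probabilities
print(noisy_output(p, Q))                    # distribution over observed labels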
Dropout distillation
TLDR
This work introduces a novel approach, coined "dropout distillation", that allows training a predictor to better approximate the intractable, but preferable, averaging process, while keeping its computational efficiency under control.
The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition
TLDR
This work introduces an alternative approach, leveraging free, noisy data from the web and simple, generic methods of recognition, and demonstrates its efficacy on four fine-grained datasets, greatly exceeding the existing state of the art without the manual collection of even a single label.
ML-MG: Multi-label Learning with Missing Labels Using a Mixed Graph
TLDR
This work proposes a unified model of label dependencies by constructing a mixed graph, which jointly incorporates (i) instance-level similarity and class co-occurrence as undirected edges and (ii) semantic label hierarchy as directed edges.
You Lead, We Exceed: Labor-Free Video Concept Learning by Jointly Exploiting Web Videos and Images
TLDR
A Lead-Exceed Neural Network (LENN) is proposed, which reinforces the training on Web images and videos in a curriculum manner and achieves 74.4% accuracy on the UCF101 dataset.
Learning Everything about Anything: Webly-Supervised Visual Concept Learning
TLDR
A fully-automated approach for learning extensive models for a wide range of variations within any concept, which leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models.