Corpus ID: 236155050

kNet: A Deep kNN Network To Handle Label Noise

@article{Mizrahi2021kNetAD,
  title={kNet: A Deep kNN Network To Handle Label Noise},
  author={Itzik Mizrahi and Shai Avidan},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.09735}
}
Deep neural networks require large amounts of labeled data for training, and collecting this data at scale inevitably introduces label noise; hence the need for learning algorithms that are robust to label noise. In recent years, k Nearest Neighbors (kNN) has emerged as a viable solution to this problem. Despite its success, kNN is not without its problems. Mainly, it requires a huge memory footprint to store all the training samples, and it needs an advanced data structure to allow for fast…
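The abstract does not spell out the kNet architecture itself, so the sketch below only illustrates the kNN baseline it builds on: a hypothetical label-cleaning step (function and variable names are illustrative, not from the paper) in which a sample's label is kept only if it agrees with the majority vote of its k nearest neighbors in some feature space.

```python
# Hypothetical sketch, not the paper's method: filter out samples whose label
# disagrees with the majority label among their k nearest neighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_label_filter(features, labels, k=10):
    """Return a boolean mask of samples whose label matches the majority
    vote of their k nearest neighbors (the sample itself is excluded)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)        # idx[:, 0] is the query point itself
    neighbor_labels = labels[idx[:, 1:]]    # shape (n_samples, k), integer class ids
    majority = np.array([np.bincount(row).argmax() for row in neighbor_labels])
    return majority == labels

# Usage (assumed inputs): feature embeddings and the noisy integer labels.
# clean_mask = knn_label_filter(embeddings, noisy_labels, k=10)
```

Storing every training sample's features for the neighbor search, and indexing them for fast lookup, is exactly the memory and data-structure cost the abstract points to.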

References

Showing 1-10 of 34 references
Learning from Noisy Labels with Deep Neural Networks
A novel way of modifying deep learning models so they can be effectively trained on data with a high level of label noise is proposed, and it is shown that random images without labels can improve classification performance.
Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach
It is proved that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise; it is also shown how the noise transition probabilities can be estimated, adapting a recent technique for noise estimation to the multi-class setting and providing an end-to-end framework.
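The loss-correction idea summarized above can be sketched compactly: assuming a row-stochastic noise transition matrix T with T[i, j] = p(noisy label j | true label i), the "forward" correction pushes the model's predictions through T before comparing them with the noisy labels. A minimal PyTorch sketch (names and the epsilon guard are illustrative):

```python
import torch
import torch.nn.functional as F

def forward_corrected_ce(logits, noisy_targets, T):
    """Forward loss correction: cross entropy between T^T . softmax(logits)
    and the observed (noisy) labels. T[i, j] = p(noisy label j | true label i)."""
    probs = F.softmax(logits, dim=1)    # p(true class | x), shape (N, C)
    noisy_probs = probs @ T             # p(noisy class | x), shape (N, C)
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_targets)
```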
Training deep neural-networks using a noise adaptation layer
This study presents a neural-network approach that optimizes the same likelihood function as the EM algorithm, extended to the case where the noisy labels depend on the features in addition to the correct labels.
Iterative Learning with Open-set Noisy Labels
A novel iterative learning framework for training CNNs on datasets with open-set noisy labels: it detects noisy labels and learns deep discriminative features in an iterative fashion, and designs a Siamese network to encourage clean and noisy labels to be dissimilar.
Attention-Aware Noisy Label Learning for Image Classification
An attention-aware noisy label learning approach is proposed to improve the discriminative capability of networks trained on datasets with potential label noise; results superior to state-of-the-art methods validate the effectiveness of the proposed approach.
Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
A theoretically grounded set of noise-robust loss functions that can be seen as a generalization of MAE and CCE is presented; these losses can be readily applied with any existing DNN architecture and algorithm, while yielding good performance in a wide range of noisy-label scenarios.
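For context, the generalized cross-entropy loss summarized above is usually written as L_q(f(x), y) = (1 - f_y(x)^q) / q with q in (0, 1]: it recovers CCE as q → 0 and MAE (up to scale) at q = 1. A minimal PyTorch sketch (tensor names and the default q are illustrative):

```python
import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits, targets, q=0.7):
    """L_q = (1 - p_y^q) / q; q -> 0 approaches CCE, q = 1 matches MAE up to scale."""
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # probability of the labeled class
    return ((1.0 - p_y.pow(q)) / q).mean()
```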
When Does Label Smoothing Help?
It is shown empirically that, in addition to improving generalization, label smoothing improves model calibration, which can significantly improve beam search, and that if a teacher network is trained with label smoothing, knowledge distillation into a student network is much less effective.
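Label smoothing itself is a one-line transformation of the targets: each one-hot label is mixed with the uniform distribution over classes. A minimal sketch (the smoothing weight eps is an illustrative default):

```python
import torch
import torch.nn.functional as F

def smooth_labels(targets, num_classes, eps=0.1):
    """Soft targets: (1 - eps) * one_hot plus eps / num_classes on every class."""
    one_hot = F.one_hot(targets, num_classes).float()
    return one_hot * (1.0 - eps) + eps / num_classes
```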
Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise
It is demonstrated that robustness to label noise, even at severe strengths, can be achieved by using a set of trusted data with clean labels, and a loss correction that utilizes trusted examples in a data-efficient manner is proposed to mitigate the effects of label noise on deep neural network classifiers.
Curriculum Loss: Robust Learning and Generalization against Label Corruption
This paper studies the 0-1 loss, which has a monotonic relationship with an empirical adversarial (reweighted) risk, and proposes a very simple and efficient loss, the curriculum loss (CL), which bridges curriculum learning and robust learning.
Symmetric Cross Entropy for Robust Learning With Noisy Labels
The proposed Symmetric cross entropy Learning (SL) approach simultaneously addresses both the under-learning and overfitting problems of CE in the presence of noisy labels, and empirical results show that SL outperforms state-of-the-art methods.
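The symmetric cross-entropy loss referenced above adds a reverse term to the standard cross entropy, swapping the roles of the prediction and the label distribution and clamping log(0) to a constant A. A minimal sketch; the defaults for alpha, beta, and A are illustrative:

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, targets, alpha=0.1, beta=1.0, A=-4.0):
    """SL = alpha * CE + beta * RCE, where RCE swaps prediction and label
    distributions and replaces log(0) with the constant A."""
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, probs.size(1)).float()
    rce = -(probs * torch.log(one_hot).clamp(min=A)).sum(dim=1).mean()
    return alpha * ce + beta * rce
```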