Corpus ID: 12190952

Training deep neural-networks using a noise adaptation layer

@inproceedings{Goldberger2016TrainingDN,
  title={Training deep neural-networks using a noise adaptation layer},
  author={Jacob Goldberger and Ehud Ben-Reuven},
  booktitle={International Conference on Learning Representations},
  year={2016}
}
The availability of large datasets has enabled neural networks to achieve impressive recognition results. The observed labels are assumed to have been created from the true labels by passing through a noisy channel whose parameters are unknown. Thus we can apply the EM algorithm to find the parameters of both the network and the noise and to estimate the correct label. In this study we present a neural-network approach that optimizes the same likelihood function as optimized by the EM algorithm. The noise is explicitly modeled by an additional softmax layer that connects the correct labels to the noisy ones.
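
To make the mechanism concrete, here is a minimal PyTorch sketch of the simple (label-only) variant of this idea; the class and variable names are mine, not the paper's. The base network's softmax over true labels is multiplied by a learnable row-stochastic transition matrix (itself a row-wise softmax over a K x K parameter), yielding the likelihood of the observed noisy labels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseAdaptationLayer(nn.Module):
    """Learnable K x K transition matrix: theta[i, j] ~ p(noisy j | true i)."""
    def __init__(self, num_classes):
        super().__init__()
        # Initialize near the identity so training starts close to "no noise".
        self.transition_logits = nn.Parameter(torch.eye(num_classes) * 6.0)

    def forward(self, clean_probs):
        theta = F.softmax(self.transition_logits, dim=1)  # rows sum to 1
        return clean_probs @ theta                        # p(noisy label | x)

class NoisyLabelClassifier(nn.Module):
    def __init__(self, base_net, num_classes):
        super().__init__()
        self.base = base_net                              # any classifier emitting logits
        self.noise = NoiseAdaptationLayer(num_classes)

    def forward(self, x):
        clean_probs = F.softmax(self.base(x), dim=1)      # p(true label | x)
        return self.noise(clean_probs)                    # fed to an NLL loss on noisy labels

# Training maximizes the likelihood of the observed noisy labels through both
# softmax layers; at test time the noise layer is dropped and `base` predicts alone.
```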

Citations

Deep Neural Networks for Corrupted Labels

An approach for learning deep networks from datasets corrupted by unknown label noise is described: a nonlinear noise model is appended to a standard deep network and learned in tandem with the network's parameters.

Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels

This paper finds that test accuracy can be quantitatively characterized in terms of the noise ratio in the dataset, and adopts the Co-teaching strategy, which takes full advantage of the identified samples to train DNNs robustly against noisy labels (see the sketch below).
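
For context, Co-teaching (Han et al., 2018) trains two networks that cross-feed each other the small-loss samples each one currently trusts. A minimal sketch of one update step, assuming PyTorch; the function and variable names are mine, not from either paper.

```python
import torch
import torch.nn.functional as F

def co_teaching_step(net1, net2, opt1, opt2, x, y, forget_rate):
    # Each network ranks the mini-batch by its own loss and hands the
    # small-loss ("probably clean") subset to its peer for the update.
    with torch.no_grad():
        loss1 = F.cross_entropy(net1(x), y, reduction="none")
        loss2 = F.cross_entropy(net2(x), y, reduction="none")
    k = int((1.0 - forget_rate) * y.size(0))      # how many samples to keep
    keep1 = torch.argsort(loss1)[:k]              # samples net1 trusts
    keep2 = torch.argsort(loss2)[:k]              # samples net2 trusts
    opt1.zero_grad()
    F.cross_entropy(net1(x[keep2]), y[keep2]).backward()  # net1 learns from net2's picks
    opt1.step()
    opt2.zero_grad()
    F.cross_entropy(net2(x[keep1]), y[keep1]).backward()  # net2 learns from net1's picks
    opt2.step()
```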

Learning from Noisy Labels with Noise Modeling Network

The state of the art in training classifiers is extended by modeling noisy and missing labels in multi-label images with a new Noise Modeling Network (NMN) that follows the authors' convolutional neural network (CNN) and integrates with it, forming an end-to-end deep learning system that can jointly learn the noise distribution and the CNN parameters.

DAT: Training Deep Networks Robust to Label-Noise by Matching the Feature Distributions

The DAT method is proposed, which is the first to address the noisy label problem from the perspective of the feature distribution, and can consistently outperform other state-of-the-art methods.

The Dynamic of Consensus in Deep Networks and the Identification of Noisy Labels

A new empirical result is reported: when looking, for each example, at the time at which it is memorized by each model in an ensemble of networks, the diversity seen for noisy examples is much larger than for clean examples.

Learning to Learn From Noisy Labeled Data

This work proposes a noise-tolerant training algorithm in which a meta-learning update is performed before the conventional gradient update; the model is trained so that, after one gradient update using each set of synthetic noisy labels, it does not overfit to the specific noise.

A Spectral Perspective of DNN Robustness to Label Noise

This work relates the smoothness regularization that usually exists in conventional training to the attenuation of high frequencies, which mainly characterize noise, and suggests that one may further improve robustness via spectral normalization.

Training Robust Deep Neural Networks on Noisy Labels Using Adaptive Sample Selection with Disagreement

An adaptive sample selection method is proposed to train deep neural networks robustly and prevent noise contamination in the disagreement strategy; it improves generalization performance in an image classification task with simulated noise rates of up to 50%.

JoSDW: Combating Noisy Labels by Dynamic Weight

A small-loss sample selection strategy with dynamic weights is designed that increases the proportion of agreement based on network predictions, gradually reducing the weight of complex samples while increasing the weight of pure samples.

Noisy Labels Can Induce Good Representations

It is observed that if an architecture “suits” the task, training with noisy labels can induce useful hidden representations, even when the model generalizes poorly; i.e., the last few layers of the model are more negatively affected by noisy labels.
...

References

Showing 1-10 of 21 references

Training deep neural-networks based on unreliable labels

This study introduces an extra noise layer by assuming that the observed labels were created from the true labels by passing through a noisy channel whose parameters are unknown, and proposes a method that simultaneously learns both the neural network parameters and the noise distribution.
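
A compact NumPy sketch of one EM iteration for such a noisy-channel model, in my own notation: clean_probs[t, i] is the current network's p(true label i | x_t) and theta[i, j] is the channel's p(observed label j | true label i).

```python
import numpy as np

def em_noise_step(clean_probs, noisy_labels, theta):
    # E-step: posterior over each example's true label given its noisy label.
    post = clean_probs * theta[:, noisy_labels].T     # p(i | x_t) * theta[i, z_t]
    post /= post.sum(axis=1, keepdims=True)
    # M-step: re-estimate the noise distribution from those posteriors; the
    # network itself is then retrained on `post` as soft targets (not shown).
    new_theta = np.zeros_like(theta)
    for j in range(theta.shape[0]):
        new_theta[:, j] = post[noisy_labels == j].sum(axis=0)
    new_theta /= new_theta.sum(axis=1, keepdims=True) + 1e-12
    return post, new_theta
```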

Learning from Noisy Labels with Deep Neural Networks

A novel way of modifying deep learning models so they can be effectively trained on data with a high level of label noise is proposed, and it is shown that random images without labels can improve the classification performance.

Training Deep Neural Networks on Noisy Labels with Bootstrapping

A generic way to handle noisy and incomplete labeling by augmenting the prediction objective with a notion of consistency is proposed: a prediction is considered consistent if the same prediction is made given similar percepts, where similarity is measured between deep network features computed from the input data.
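
The consistency objective described here is the widely used "bootstrapping" loss. A sketch of its soft variant, assuming PyTorch; the function name and the choice to detach the model's prediction are mine.

```python
import torch.nn.functional as F

def soft_bootstrap_loss(logits, noisy_targets, beta=0.95):
    # Blend the (possibly wrong) observed label with the model's own current
    # prediction, so the objective also rewards self-consistent predictions.
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(noisy_targets, num_classes=logits.size(1)).float()
    target = beta * one_hot + (1.0 - beta) * probs.detach()  # detach: common choice
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```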

Learning with Noisy Labels

The problem of binary classification in the presence of random classification noise is theoretically studied: the learner sees labels that have independently been flipped with some small probability, and methods used in practice such as the biased SVM and weighted logistic regression are shown to be provably noise-tolerant.
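
One of the paper's tools is a "method of unbiased estimators" that corrects any surrogate loss using the known class-conditional flip rates; in expectation over the noise, the corrected loss equals the clean one. A sketch for binary labels, with a function signature of my own:

```python
def unbiased_loss(loss, t, y, rho_pos, rho_neg):
    # y in {+1, -1}; rho_pos = P(flip | y = +1), rho_neg = P(flip | y = -1).
    rho_y, rho_not_y = (rho_pos, rho_neg) if y == 1 else (rho_neg, rho_pos)
    return ((1 - rho_not_y) * loss(t, y) - rho_y * loss(t, -y)) / (1 - rho_pos - rho_neg)
```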

Distilling the Knowledge in a Neural Network

This work shows that it can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model and introduces a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse.
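
The distillation objective itself is compact. A sketch assuming PyTorch, with hyperparameter names of my own: T is the softmax temperature and alpha weights the soft-target term; the T*T factor keeps the gradient scale comparable across temperatures.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    soft_targets = F.softmax(teacher_logits / T, dim=1)      # teacher's "dark knowledge"
    log_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)             # ordinary hard-label term
    return alpha * kd + (1 - alpha) * ce
```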

Class Noise vs. Attribute Noise: A Quantitative Study

A systematic evaluation of the effect of noise in machine learning separates noise into two categories, class noise and attribute noise, and investigates the relationship between attribute noise and classification accuracy, the impact of noise at different attributes, and possible solutions for handling attribute noise.

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

Label-Noise Robust Logistic Regression and Its Applications

A label-noise robust version of the logistic regression and multinomial logistic regression classifiers is considered, and a novel sparsity-promoting regularisation approach is developed which allows challenging high-dimensional noisy settings to be tackled.

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
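
The update rule is short enough to write out. A plain-Python sketch of one Adam step for a single parameter vector; the state layout and names are mine.

```python
import math

def adam_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # state holds the step count t and per-coordinate moment estimates m, v.
    state["t"] += 1
    t = state["t"]
    for i, g in enumerate(grad):
        state["m"][i] = beta1 * state["m"][i] + (1 - beta1) * g        # first moment
        state["v"][i] = beta2 * state["v"][i] + (1 - beta2) * g * g    # second moment
        m_hat = state["m"][i] / (1 - beta1 ** t)                       # bias correction
        v_hat = state["v"][i] / (1 - beta2 ** t)
        param[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)
```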

A comprehensive introduction to label noise

This paper provides a concise and comprehensive introduction to this research topic, reviewing the types of label noise, their consequences, and a number of state-of-the-art approaches for dealing with label noise.