Corpus ID: 246823955

CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning

@article{Shu2022CMWNetLA,
  title={CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning},
  author={Jun Shu and Xiang Yuan and Deyu Meng and Zongben Xu},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.05613}
}
Modern deep neural networks (DNNs) can easily overfit to biased training data containing corrupted labels or class imbalance. Sample re-weighting methods are popularly used to alleviate this data bias issue. Most current methods, however, require manually pre-specifying the weighting schemes, together with their additional hyper-parameters, based on the characteristics of the investigated problem and training data. This makes them hard to apply generally in practical scenarios, due to…
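
To make the idea concrete, here is a minimal PyTorch sketch of learned sample re-weighting in the spirit of the paper: a small net maps each sample's loss value to a weight in [0, 1], and the weighted losses drive the update. The class-aware branch (conditioning the mapping on class scale) and the meta-learning loop that trains the weight net are omitted; `WeightNet` and `reweighted_step` are hypothetical names, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightNet(nn.Module):
    """Maps a per-sample loss value to a weight in [0, 1]."""
    def __init__(self, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid()
        )

    def forward(self, loss):  # loss: (batch, 1)
        return self.net(loss)

def reweighted_step(model, weight_net, x, y, optimizer):
    """One classifier update where per-sample losses are re-weighted."""
    logits = model(x)
    per_sample = F.cross_entropy(logits, y, reduction="none")  # (batch,)
    with torch.no_grad():  # weights treated as fixed for this step
        w = weight_net(per_sample.detach().unsqueeze(1)).squeeze(1)
    loss = (w * per_sample).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```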

Dynamic Loss For Robust Learning

A novel meta-learning-based dynamic loss that automatically adjusts the objective function over the course of training, so as to robustly learn a classifier from long-tailed noisy data that adapts well to clean and balanced test data.

References


Learning with Feature-Dependent Label Noise: A Progressive Approach

This paper proposes a progressive label correction algorithm that iteratively corrects labels and refines the model, with theoretical guarantees showing that, for a wide variety of (unknown) noise patterns, a classifier trained with this strategy converges to be consistent with the Bayes classifier.
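
A hedged sketch of the correction step, assuming the simplest form of the idea: relabel a sample when the model's confidence in a different class exceeds a threshold (which the paper lowers gradually). `progressively_correct` is an illustrative name; the paper's actual threshold schedule and guarantees are more involved.

```python
import torch

def progressively_correct(model, x, y, threshold):
    """Flip a label to the model's prediction when confidence exceeds
    a (gradually lowered) threshold. Illustrative sketch only."""
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
        conf, pred = probs.max(dim=1)
        flip = (conf > threshold) & (pred != y)
        return torch.where(flip, pred, y)
```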

Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels

An in-depth analysis of the proposed C2D framework is performed, investigating the performance of different pre-training approaches and estimating the effective upper bound of learning-with-noisy-labels (LNL) performance under semi-supervised learning.

SELFIE: Refurbishing Unclean Samples for Robust Deep Learning

This work proposes a novel robust training method called SELFIE, which selectively refurbishes and exploits unclean samples that can be corrected with high precision, gradually increasing the number of available training samples.
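
A hedged sketch of the refurbishing criterion, under the assumption that a sample counts as correctable when its predictions over recent epochs are nearly constant (low normalized entropy of the prediction history). `refurbish_labels`, the window size, and the threshold value are illustrative, not SELFIE's exact formulation.

```python
import numpy as np

def refurbish_labels(history, labels, uncertainty_threshold=0.05):
    """Relabel a sample with its majority prediction if its prediction
    history is nearly constant. history: (n_samples, q) int array of
    predicted classes over the last q epochs."""
    n, q = history.shape
    new_labels = labels.copy()
    for i in range(n):
        vals, counts = np.unique(history[i], return_counts=True)
        p = counts / q
        entropy = -(p * np.log(p)).sum() / np.log(max(len(vals), 2))
        if entropy <= uncertainty_threshold:  # consistent -> refurbish
            new_labels[i] = vals[np.argmax(counts)]
    return new_labels
```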

Large-Scale Long-Tailed Recognition in an Open World

An integrated OLTR algorithm is developed that maps an image to a feature space such that visual concepts can easily relate to each other based on a learned metric that respects the closed-world classification while acknowledging the novelty of the open world.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
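
The identity shortcut at the heart of this framework is compact enough to show directly. Below is a minimal PyTorch basic block for the same-channel case (the strided/projection variant is omitted); `ResidualBlock` is an illustrative name.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = F(x) + x. The identity shortcut
    is what makes very deep networks easier to optimize."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut
```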

ImageNet: A large-scale hierarchical image database

A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.

Early-Learning Regularization Prevents Memorization of Noisy Labels

It is proved that early learning and memorization are fundamental phenomena in high-dimensional classification tasks, even in simple linear models, and a new technique for noisy classification tasks is developed, which exploits the progress of the early learning phase.
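
A hedged sketch of the regularizer's shape: keep a momentum average of each sample's past predictions and add a term that rewards staying consistent with it, counteracting late-stage memorization of noisy labels. `EarlyLearningReg` is a hypothetical name, and `beta`/`lam` are illustrative values rather than the paper's tuned ones.

```python
import torch
import torch.nn.functional as F

class EarlyLearningReg:
    """Momentum targets per sample; penalize drifting away from them."""
    def __init__(self, num_samples, num_classes, beta=0.7, lam=3.0):
        self.targets = torch.zeros(num_samples, num_classes)
        self.beta, self.lam = beta, lam

    def loss(self, logits, labels, idx):
        probs = torch.softmax(logits, dim=1)
        with torch.no_grad():  # update running-average targets
            self.targets[idx] = (self.beta * self.targets[idx]
                                 + (1 - self.beta) * probs)
        ce = F.cross_entropy(logits, labels)
        inner = (probs * self.targets[idx]).sum(dim=1).clamp(max=1 - 1e-4)
        # log(1 - <p, t>) decreases as predictions agree with targets
        return ce + self.lam * torch.log(1.0 - inner).mean()
```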

Progressive Identification of True Labels for Partial-Label Learning

This work proposes a novel estimator of the classification risk, theoretically analyzes classifier consistency, and establishes an estimation error bound; it further proposes a progressive identification algorithm that approximately minimizes the proposed risk estimator.
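
A hedged sketch of the identification step, assuming the common form of progressive identification in partial-label learning: re-normalize the model's probabilities over each sample's candidate label set to produce soft targets that sharpen as training progresses. `update_partial_targets` is an illustrative name, not the paper's code.

```python
import torch

def update_partial_targets(logits, candidate_mask):
    """Soft targets restricted to candidate labels.
    candidate_mask: (batch, num_classes) 0/1 mask of candidate labels."""
    probs = torch.softmax(logits, dim=1) * candidate_mask
    return probs / probs.sum(dim=1, keepdim=True).clamp_min(1e-12)
```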

DivideMix: Learning with Noisy Labels as Semi-supervised Learning

This work proposes DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques, which models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples.
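
The mixture-model split is easy to state in code. Here is a minimal sketch of the clean/noisy division on per-sample losses using scikit-learn's `GaussianMixture`; the 0.5 posterior threshold is illustrative, and the semi-supervised MixMatch stage that follows in DivideMix is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_clean_noisy(losses, clean_prob_threshold=0.5):
    """Fit a 2-component GMM to per-sample losses; the low-mean
    component is treated as the 'clean' cluster."""
    losses = np.asarray(losses).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    clean_comp = int(np.argmin(gmm.means_.ravel()))
    p_clean = gmm.predict_proba(losses)[:, clean_comp]
    clean_idx = np.where(p_clean > clean_prob_threshold)[0]
    noisy_idx = np.where(p_clean <= clean_prob_threshold)[0]
    return clean_idx, noisy_idx, p_clean
```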

Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting

Synthetic and real experiments substantiate the capability of the method to learn proper weighting functions in class-imbalance and noisy-label cases, consistent with the settings assumed by traditional hand-designed methods, as well as in more complicated scenarios beyond conventional cases.
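
Meta-Weight-Net is the direct predecessor of CMW-Net, so its bi-level update is worth sketching: take a virtual SGD step on the weighted training loss, then update the weight net so that the virtually updated model does well on a small clean meta set. This hedged sketch assumes a `weight_net` like the `WeightNet` shown earlier; `meta_step` is a hypothetical name, and the subsequent actual classifier update of the full algorithm is omitted.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0

def meta_step(model, weight_net, meta_opt, x_tr, y_tr, x_meta, y_meta, lr=0.1):
    params = dict(model.named_parameters())
    # Inner step: virtual SGD update of the classifier on the weighted loss.
    logits = functional_call(model, params, (x_tr,))
    per_sample = F.cross_entropy(logits, y_tr, reduction="none")
    w = weight_net(per_sample.unsqueeze(1)).squeeze(1)  # differentiable weights
    inner_loss = (w * per_sample).mean()
    grads = torch.autograd.grad(inner_loss, list(params.values()),
                                create_graph=True)
    virtual = {k: p - lr * g for (k, p), g in zip(params.items(), grads)}
    # Outer step: evaluate the virtually updated model on clean meta data
    # and back-propagate through the inner step into the weight net.
    meta_logits = functional_call(model, virtual, (x_meta,))
    meta_loss = F.cross_entropy(meta_logits, y_meta)
    meta_opt.zero_grad()  # meta_opt holds weight_net's parameters only
    meta_loss.backward()
    meta_opt.step()
    return meta_loss.item()
```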
...