Corpus ID: 237431056

Learning Fast Sample Re-weighting Without Reward Data

@article{Zhang2021LearningFS,
  title={Learning Fast Sample Re-weighting Without Reward Data},
  author={Zizhao Zhang and Tomas Pfister},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.03216}
}
Training sample re-weighting is an effective approach for tackling data biases such as imbalanced and corrupted labels. Recent methods develop learning-based algorithms that learn sample re-weighting strategies jointly with model training, building on reinforcement learning and meta-learning frameworks. However, their dependence on additional unbiased reward data limits their general applicability. Furthermore, existing learning-based sample re-weighting methods require nested optimizations of… 
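For context, the quantity such methods learn is a per-example weight multiplied into the training loss. A minimal NumPy sketch of one weighted gradient step for logistic regression is below; all function and variable names are ours for illustration, not the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def weighted_sgd_step(theta, X, y, w, lr=0.1):
    """One gradient step on a weighted logistic-regression loss.

    theta: (d,) parameters; X: (n, d) inputs; y: (n,) labels in {0, 1};
    w: (n,) non-negative per-sample weights (the re-weighting strategy).
    """
    p = sigmoid(X @ theta)                 # predicted probabilities
    # Gradient of sum_i w_i * cross_entropy_i, normalized by the weight sum.
    grad = X.T @ (w * (p - y)) / max(w.sum(), 1e-12)
    return theta - lr * grad

# Toy usage: down-weight an example suspected of label corruption.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = (X[:, 0] > 0).astype(float)
w = np.ones(8)
w[0] = 0.1                                 # suspected corrupted label
theta = weighted_sgd_step(np.zeros(3), X, y, w)
```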
Which Samples Should be Learned First: Easy or Hard?
TLDR
An effective and universal solution is proposed for selecting the optimal priority mode when no prior knowledge or theoretical clues are available; four modes, namely easy-first, medium-first, hard-first, and two-ends-first, can be flexibly switched within the proposed solution.
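As a hedged illustration of what switching priority modes could look like, the sketch below ranks samples by loss and selects indices under each mode; the paper's actual switching rule is not reproduced here.

```python
import numpy as np

def select_by_priority(losses, k, mode):
    """Pick k sample indices by loss rank under a given priority mode.

    Illustrative only: the modes mirror the easy/medium/hard/two-ends-first
    idea in the TLDR above, not the paper's exact selection mechanism.
    """
    order = np.argsort(losses)                    # ascending: easiest first
    if mode == "easy":
        return order[:k]
    if mode == "hard":
        return order[-k:]
    if mode == "medium":
        start = (len(order) - k) // 2
        return order[start:start + k]
    if mode == "two-ends":
        return np.concatenate([order[:k // 2], order[-(k - k // 2):]])
    raise ValueError(f"unknown mode: {mode}")

losses = np.array([0.1, 2.0, 0.5, 1.5, 0.3, 0.9])
idx = select_by_priority(losses, 2, "two-ends")   # easiest + hardest sample
```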
Deep Long-Tailed Learning: A Survey
TLDR
A comprehensive survey on recent advances in deep long-tailed learning is provided, highlighting important applications of deep long-tailed learning and identifying several promising directions for future research.
Multiplicative Reweighting for Robust Neural Network Optimization
TLDR
This work establishes the convergence of the method when used with gradient descent, demonstrates its advantage in two simple examples, and proves that multiplicative reweighting (MW) improves network accuracy in the presence of label noise and leads to better robustness to adversarial attacks.
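The exact update rule is in the paper; as a rough, generic illustration of the multiplicative-weights idea applied to per-sample weights, one might exponentially down-weight persistently high-loss examples:

```python
import numpy as np

def mw_update(w, losses, eta=0.5):
    """Generic multiplicative-weights step on per-sample weights.

    Examples with consistently high loss (e.g. mislabeled ones) see their
    weights shrink exponentially; weights are renormalized to sum to n.
    This is an illustrative rule, not the paper's exact update.
    """
    w = w * np.exp(-eta * losses)
    return w * (len(w) / w.sum())

w = np.ones(5)
losses = np.array([0.2, 0.3, 0.25, 2.5, 0.1])   # one outlier loss
w = mw_update(w, losses)                         # the outlier weight shrinks
```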

References

Showing 1-10 of 70 references
Learning Data Manipulation for Augmentation and Weighting
TLDR
This work builds upon a recent connection between supervised learning and reinforcement learning, adapting an off-the-shelf reward-learning algorithm from RL for joint data-manipulation learning and model training; the resulting algorithms significantly improve image and text classification performance in low-data regimes and class-imbalance problems.
Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting
TLDR
Synthetic and real experiments substantiate the method's capability to achieve proper weighting functions in class-imbalance and noisy-label cases, fully complying with the common settings in traditional methods, as well as in more complicated scenarios beyond conventional cases.
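The weighting function here is a small MLP from a sample's loss value to a weight, meta-trained on a small clean set. A forward-only NumPy sketch of such a weighting net follows; the hidden size and initialization are illustrative, and the meta-training loop is omitted.

```python
import numpy as np

class WeightNet:
    """Tiny MLP mapping a per-sample loss to a weight in (0, 1).

    Mirrors the loss -> weight mapping idea; sizes and the omitted
    meta-training procedure follow the paper only loosely.
    """
    def __init__(self, hidden=100, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, losses):
        h = np.maximum(0.0, losses[:, None] @ self.W1 + self.b1)  # ReLU
        z = h @ self.W2 + self.b2
        return 1.0 / (1.0 + np.exp(-z[:, 0]))                     # sigmoid

weights = WeightNet()(np.array([0.1, 0.5, 3.0]))  # one weight per loss value
```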
Learning to Reweight Examples for Robust Deep Learning
TLDR
This work proposes a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions; it can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class-imbalance and corrupted-label problems where only a small amount of clean validation data is available.
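The widely cited one-step approximation of this method sets each example's weight in proportion to the alignment between its training gradient and the mean gradient of a clean validation batch, clipping negative alignments to zero. A NumPy sketch for logistic regression, with all helper names ours:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_grads(theta, X, y):
    """Row i is the gradient of example i's logistic loss w.r.t. theta."""
    p = sigmoid(X @ theta)
    return (p - y)[:, None] * X            # shape (n, d)

def reweight_by_alignment(theta, X_tr, y_tr, X_val, y_val):
    """Weights ~ max(0, <grad_i, grad_val>), normalized to sum to 1."""
    g_tr = per_example_grads(theta, X_tr, y_tr)
    g_val = per_example_grads(theta, X_val, y_val).mean(axis=0)
    w = np.maximum(0.0, g_tr @ g_val)      # clip negative alignments
    s = w.sum()
    return w / s if s > 0 else np.full(len(w), 1.0 / len(w))
```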
What is the Effect of Importance Weighting in Deep Learning?
TLDR
This work presents the surprising finding that while importance weighting impacts models early in training, its effect diminishes over successive epochs.
Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss
TLDR
A theoretically principled label-distribution-aware margin (LDAM) loss, motivated by minimizing a margin-based generalization bound, is proposed; it replaces the standard cross-entropy objective during training and can be combined with prior strategies for class-imbalanced training such as re-weighting or re-sampling.
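Concretely, LDAM enforces a per-class margin Δ_j = C / n_j^{1/4} by subtracting Δ_y from the true class's logit before a scaled softmax cross-entropy. A NumPy sketch, with C and the scale s as tunable constants:

```python
import numpy as np

def ldam_loss(logits, labels, class_counts, C=0.5, s=30.0):
    """Label-distribution-aware margin loss (batch mean).

    Subtracts a margin C / n_j**0.25 from each sample's true-class logit,
    then applies scaled softmax cross-entropy.
    """
    margins = C / np.power(class_counts, 0.25)      # (num_classes,)
    adjusted = logits.copy()
    rows = np.arange(len(labels))
    adjusted[rows, labels] -= margins[labels]       # margin on true class
    z = s * adjusted
    z -= z.max(axis=1, keepdims=True)               # numerical stability
    log_prob = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_prob[rows, labels].mean()

logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.0, 0.3]])
loss = ldam_loss(logits, np.array([0, 2]), class_counts=np.array([100, 10, 5]))
```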
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
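A minimal sketch of the inner/outer loop on toy one-parameter regression tasks is below; for brevity it uses the first-order simplification (FOMAML), dropping MAML's second-order term, and all task and hyperparameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_grad(theta, a, x):
    """Gradient of mean squared error for the task y = a * x, model y = theta * x."""
    return 2.0 * np.mean(x * x) * (theta - a)

# First-order MAML: adapt with one inner step, then move the initialization
# using the post-adaptation gradient on fresh query data. This drops the
# second-order term of full MAML; it is a simplification, not the full method.
theta, alpha, beta = 0.0, 0.1, 0.05
for _ in range(1000):
    a = rng.uniform(-2.0, 2.0)           # sample a task (target slope)
    x = rng.normal(size=16)              # task support data
    theta_adapted = theta - alpha * task_grad(theta, a, x)   # inner step
    x_q = rng.normal(size=16)            # task query data
    theta -= beta * task_grad(theta_adapted, a, x_q)         # outer step
```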
Data Valuation using Reinforcement Learning
TLDR
The corrupted sample discovery performance of DVRL is close to optimal in many regimes, and for domain adaptation and robust learning DVRL significantly outperforms the state of the art by 14.6% and 10.8%, respectively.
Decoupling Representation and Classifier for Long-Tailed Recognition
TLDR
It is shown that it is possible to outperform carefully designed losses, sampling strategies, and even complex modules with memory by using a straightforward approach that decouples representation and classification.
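One common instantiation of this decoupling is classifier re-training (cRT): learn the backbone with ordinary instance-balanced sampling, then re-train only the classifier with class-balanced sampling. A sketch of class-balanced sampling probabilities, under that assumption:

```python
import numpy as np

def class_balanced_probs(labels):
    """Per-example sampling probabilities that equalize class frequency.

    Each example's probability is inversely proportional to its class size,
    so every class is drawn equally often when re-training the classifier.
    """
    labels = np.asarray(labels)
    counts = np.bincount(labels)
    p = 1.0 / counts[labels]
    return p / p.sum()

labels = [0, 0, 0, 0, 1, 2]              # long-tailed toy labels
p = class_balanced_probs(labels)         # class-0 examples get prob 1/12 each
idx = np.random.default_rng(0).choice(len(labels), size=4, p=p)
```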
On First-Order Meta-Learning Algorithms
TLDR
A family of algorithms is presented for learning a parameter initialization that can be fine-tuned quickly on a new task, using only first-order derivatives for the meta-learning updates; it includes Reptile, which works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task.
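Reptile's meta-update is simple enough to sketch directly: train on a sampled task, then move the initialization a fraction of the way toward the task-trained weights. A toy NumPy version on one-parameter regression tasks (all settings illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_sgd(theta, a, x, lr=0.1, steps=5):
    """A few SGD steps on one task's mean squared error (model y = theta * x)."""
    for _ in range(steps):
        theta = theta - lr * 2.0 * np.mean(x * x) * (theta - a)
    return theta

# Reptile meta-update: move the initialization a fraction epsilon of the
# way toward the weights obtained by training on the sampled task.
theta, epsilon = 0.0, 0.1
for _ in range(1000):
    a = rng.uniform(-2.0, 2.0)              # task: fit y = a * x
    x = rng.normal(size=16)
    phi = inner_sgd(theta, a, x)            # task-adapted weights
    theta += epsilon * (phi - theta)        # Reptile meta-update
```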
Learning to Learn From Noisy Labeled Data
TLDR
This work proposes a noise-tolerant training algorithm in which a meta-learning update is performed prior to the conventional gradient update, training the model such that after one gradient update using each set of synthetic noisy labels, it does not overfit to the specific noise.