Corpus ID: 232240244

Reweighting Augmented Samples by Minimizing the Maximal Expected Loss

@article{Yi2021ReweightingAS,
  title={Reweighting Augmented Samples by Minimizing the Maximal Expected Loss},
  author={Mingyang Yi and Lu Hou and Lifeng Shang and Xin Jiang and Qun Liu and Zhi-Ming Ma},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.08933}
}
Data augmentation is an effective technique for improving the generalization of deep neural networks. However, previous data augmentation methods usually treat the augmented samples equally, without considering their individual impacts on the model. To address this, we propose to assign different weights to the augmented samples generated from the same training example. We construct the maximal expected loss, which is the supremum over all reweighted losses on the augmented samples. Inspired by …
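The abstract is truncated, but the reweighting idea it describes can be illustrated. The following is a minimal, hypothetical sketch that assumes the inner maximization over sample weights is solved, as is common for entropy-regularized formulations, by a softmax over the per-augmentation losses; names such as `model`, `augment`, and `temperature` are illustrative and not taken from the paper, and the details may differ from the authors' exact method.

```python
# Hypothetical sketch: reweight augmented copies of each example by a softmax
# over their losses, so harder augmented views receive larger weight.
import torch
import torch.nn.functional as F

def reweighted_augmentation_loss(model, x, y, augment, num_aug=4, temperature=1.0):
    """Weighted loss over `num_aug` augmented copies of a batch (names are illustrative)."""
    per_aug_losses = []
    for _ in range(num_aug):
        logits = model(augment(x))                           # forward pass on one augmented view
        loss = F.cross_entropy(logits, y, reduction="none")  # per-example loss, shape [B]
        per_aug_losses.append(loss)
    losses = torch.stack(per_aug_losses, dim=0)              # [num_aug, B]

    # Inner maximization (assumed closed form): softmax over augmentations, per example.
    weights = torch.softmax(losses.detach() / temperature, dim=0)

    # Outer minimization: weighted loss, averaged over the batch.
    return (weights * losses).sum(dim=0).mean()
```

Detaching the weights treats the inner maximization as a fixed reweighting at each step; whether gradients should also flow through the weights is a design choice not settled by the sketch.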


Citations

Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax
This work introduces MiniMax-kNN, a sample-efficient data augmentation strategy tailored for knowledge distillation (KD), and exploits a semi-supervised approach based on KD to train a model on the augmented data.

References

Showing 1-10 of 48 references
A Bayesian Data Augmentation Approach for Learning Deep Models
A novel Bayesian formulation of data augmentation is provided, in which new annotated training points are treated as missing variables and generated from the distribution learned from the training set; this approach produces better classification results than comparable GAN models.
Learning to Reweight Examples for Robust Deep Learning
This work proposes a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions. The method can be implemented on any type of deep network, requires no additional hyperparameter tuning, and achieves impressive performance on class-imbalance and corrupted-label problems where only a small amount of clean validation data is available.
FreeLB: Enhanced Adversarial Training for Natural Language Understanding
A novel adversarial training algorithm, FreeLB, is proposed that promotes higher invariance in the embedding space by adding adversarial perturbations to word embeddings and minimizing the resulting adversarial risk inside different regions around input samples.
Not All Samples Are Created Equal: Deep Learning with Importance Sampling
A principled importance-sampling scheme is proposed that focuses computation on "informative" examples and reduces the variance of the stochastic gradients during training; a tractable upper bound on the per-sample gradient norm is also derived.
Unsupervised Data Augmentation for Consistency Training
A new perspective on how to effectively add noise to unlabeled examples is presented, and it is argued that the quality of the noise, specifically that produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
ADA: Adversarial Data Augmentation for Object Detection
This work aims to find optimal adversarial perturbations of the ground-truth data that force the bounding-box predictor to learn from the hardest distribution of perturbed examples for better test-time performance, and establishes that the game-theoretic solution (Nash equilibrium) provides both an optimal predictor and an optimal data augmentation distribution.
AutoAugment: Learning Augmentation Policies from Data
This paper describes a simple procedure called AutoAugment that automatically searches for improved data augmentation policies and achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data).
Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting
Synthetic and real experiments substantiate the method's ability to learn proper weighting functions in class-imbalance and noisy-label cases, fully complying with the common settings in traditional methods, as well as in more complicated scenarios beyond conventional cases.
Improved Regularization of Convolutional Neural Networks with Cutout
This paper shows that the simple regularization technique of randomly masking out square regions of the input during training, called cutout, can be used to improve the robustness and overall performance of convolutional neural networks (see the sketch after this list).
MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels
Experimental results demonstrate that learning an auxiliary neural network, called MentorNet, to supervise the training of the base deep network (StudentNet) can significantly improve the generalization performance of deep networks trained on corrupted training data.
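As a concrete illustration of the Cutout reference above, here is a minimal, hypothetical sketch of masking a random square region of each image in a batch; the function name and default mask size are illustrative and not taken from that paper.

```python
# Hypothetical cutout-style masking: zero out a random square region of each
# image in a batch of NCHW tensors.
import torch

def cutout(images, size=16):
    """Return a copy of `images` with a random `size` x `size` square zeroed per image."""
    n, _, h, w = images.shape
    out = images.clone()
    for i in range(n):
        cy = torch.randint(0, h, (1,)).item()  # random center row
        cx = torch.randint(0, w, (1,)).item()  # random center column
        y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
        x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
        out[i, :, y1:y2, x1:x2] = 0.0
    return out
```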