Corpus ID: 245827937

GenLabel: Mixup Relabeling using Generative Models

@article{Sohn2022GenLabelMR,
  title={GenLabel: Mixup Relabeling using Generative Models},
  author={Jy-yong Sohn and Liang Shang and Hongxu Chen and Jaekyun Moon and Dimitris Papailiopoulos and Kangwook Lee},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.02354}
}
Mixup is a data augmentation method that generates new data points by mixing pairs of input data. While mixup generally improves prediction performance, it sometimes degrades it. In this paper, we first identify the main causes of this phenomenon by theoretically and empirically analyzing the mixup algorithm. To resolve this, we propose GenLabel, a simple yet effective relabeling algorithm designed for mixup. In particular, GenLabel helps the mixup algorithm correctly label…
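Below is a minimal sketch of the relabeling idea stated in the abstract, assuming a class-conditional generative model is fit per class and a mixed point is re-labeled with a soft label proportional to its likelihood under each class model. The use of scikit-learn Gaussian mixtures, the uniform class prior, and the helper names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_generators(X, y, n_components=1, seed=0):
    """Fit one generative model per class (here a GMM; an illustrative choice)."""
    models = {}
    for c in np.unique(y):
        gm = GaussianMixture(n_components=n_components, random_state=seed)
        gm.fit(X[y == c])
        models[c] = gm
    return models

def genlabel_soft_label(x_mix, models):
    """Assign a soft label to a mixed point from per-class log-likelihoods (hypothetical helper)."""
    classes = sorted(models)
    log_liks = np.array([models[c].score_samples(x_mix[None, :])[0] for c in classes])
    # softmax over log-likelihoods -> normalized class weights (uniform prior assumed)
    w = np.exp(log_liks - log_liks.max())
    return w / w.sum()

# Usage: relabel a mixup point x_mix = lam * x_i + (1 - lam) * x_j
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
models = fit_class_generators(X, y)
lam = rng.beta(1.0, 1.0)
x_mix = lam * X[0] + (1 - lam) * X[60]
print(genlabel_soft_label(x_mix, models))  # soft label over the two classes
```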
LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks
TLDR
The proposed Language-Interfaced Fine-Tuning (LIFT) makes no changes to the model architecture or loss function; it relies solely on the natural language interface, enabling “no-code machine learning with LMs,” and performs relatively well across a wide range of low-dimensional classification and regression tasks.
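A minimal sketch of the language-interfacing step described above, assuming tabular features are serialized into a natural-language prompt whose completion carries the label; the prompt template and feature names are illustrative assumptions rather than the paper's exact format.

```python
def row_to_prompt(features, label=None):
    """Serialize one tabular example into a natural-language prompt (illustrative template)."""
    parts = [f"{name} is {value}" for name, value in features.items()]
    prompt = "Given that " + ", ".join(parts) + ", what is the class?"
    if label is not None:  # training example: append the target as the completion
        prompt += f" Answer: {label}"
    return prompt

# Usage with a hypothetical two-feature example
print(row_to_prompt({"sepal length": 5.1, "sepal width": 3.5}, label="setosa"))
```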

References

SHOWING 1-10 OF 67 REFERENCES
How Does Mixup Help With Robustness and Generalization?
TLDR
It is shown that minimizing the Mixup loss corresponds to approximately minimizing an upper bound of the adversarial loss, which explains why models obtained by Mixup training exhibit robustness to several kinds of adversarial attacks, such as the Fast Gradient Sign Method.
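For reference, the Fast Gradient Sign Method mentioned above perturbs an input by one signed gradient step of the loss. A minimal PyTorch sketch; the toy linear model in the usage lines is only for illustration.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps=0.03):
    """Fast Gradient Sign Method: one gradient-sign step on the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Usage with a toy linear classifier
model = torch.nn.Linear(4, 2)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
x_adv = fgsm_attack(model, torch.nn.functional.cross_entropy, x, y)
```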
k-Mixup Regularization for Deep Learning via Optimal Transport
TLDR
This work demonstrates theoretically and in simulations that k-mixup preserves cluster and manifold structures, extends the theory studying the efficacy of standard mixup to k-mixup, and shows that training with k-mixup further improves generalization and robustness on benchmark datasets.
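A heavily hedged sketch of the batch-matching idea: two sampled batches of k points are matched before mixing, here by approximating the optimal-transport coupling with a minimum-cost assignment on pairwise squared distances. The use of scipy's linear_sum_assignment and the soft-label assumption are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def k_mixup_batch(X1, y1, X2, y2, alpha=1.0, rng=None):
    """Mix two batches of k points after matching them with a min-cost assignment."""
    rng = rng or np.random.default_rng()
    cost = cdist(X1, X2, metric="sqeuclidean")    # pairwise squared distances
    rows, cols = linear_sum_assignment(cost)      # one-to-one matching minimizing total cost
    lam = rng.beta(alpha, alpha)
    X_mix = lam * X1[rows] + (1 - lam) * X2[cols]
    y_mix = lam * y1[rows] + (1 - lam) * y2[cols]  # assumes one-hot / soft labels
    return X_mix, y_mix
```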
mixup: Beyond Empirical Risk Minimization
TLDR
This work proposes mixup, a simple learning principle that trains a neural network on convex combinations of pairs of examples and their labels, which improves the generalization of state-of-the-art neural network architectures.
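The mixup recipe summarized above, in a few lines: draw a mixing weight from a Beta distribution (the standard choice in the mixup paper) and form convex combinations of two examples and their one-hot labels.

```python
import numpy as np

def mixup(x1, y1_onehot, x2, y2_onehot, alpha=0.2, rng=None):
    """Return a convex combination of two examples and of their one-hot labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1_onehot + (1 - lam) * y2_onehot
```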
Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup
TLDR
The experiments show that Puzzle Mix achieves state-of-the-art generalization and adversarial robustness results compared to other mixup methods on the CIFAR-100, Tiny-ImageNet, and ImageNet datasets.
MixUp as Locally Linear Out-Of-Manifold Regularization
TLDR
An understanding of MixUp is developed as a form of “out-of-manifold regularization,” which imposes certain “local linearity” constraints on the model’s input space beyond the data manifold; this understanding enables a novel adaptive version of MixUp, where the mixing policies are automatically learned from the data using an additional network and an objective function designed to avoid manifold intrusion.
Data Interpolating Prediction: Alternative Interpretation of Mixup
TLDR
This work derives a generalization bound and shows that DIP helps reduce the original Rademacher complexity, and it empirically demonstrates that DIP can outperform the existing Mixup.
Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity
TLDR
A new perspective on batch mixup is proposed, and the optimal construction of a batch of mixup data is formulated as maximizing the data saliency measure of each individual mixup datum while encouraging supermodular diversity among the constructed mixup data.
Inverting the Generator of a Generative Adversarial Network
TLDR
This paper introduces a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN, and demonstrates how the proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets.
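A minimal sketch of inversion by latent optimization, assuming a pretrained generator G that maps latent vectors to images; the reconstruction loss, optimizer, and hyperparameters are illustrative assumptions.

```python
import torch

def invert(G, x_target, latent_dim=100, steps=500, lr=0.05):
    """Project an image into a generator's latent space by gradient descent on z."""
    for p in G.parameters():          # freeze generator weights (assumes G is an nn.Module)
        p.requires_grad_(False)
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), x_target)
        loss.backward()
        opt.step()
    return z.detach()
```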
On Mixup Regularization
TLDR
It is shown that Mixup can be interpreted as a standard empirical risk minimization estimator subject to a combination of data transformation and random perturbation of the transformed data, and that these transformations and perturbations induce multiple known regularization schemes that interact synergistically, resulting in a self-calibrated and effective regularization effect that prevents overfitting and overconfident predictions.
Manifold Mixup: Better Representations by Interpolating Hidden States
TLDR
Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations, improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood.
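A sketch of hidden-state interpolation as described above, assuming a toy two-layer MLP and a randomly chosen mixing layer; the architecture, layer-selection rule, and Beta parameters are illustrative assumptions. The corresponding labels are mixed with the same weight, as in input mixup.

```python
import random
import torch

class TwoLayerNet(torch.nn.Module):
    """Toy MLP used to illustrate mixing at a hidden layer (illustrative architecture)."""
    def __init__(self, d_in=4, d_hidden=16, n_classes=2):
        super().__init__()
        self.f1 = torch.nn.Sequential(torch.nn.Linear(d_in, d_hidden), torch.nn.ReLU())
        self.f2 = torch.nn.Linear(d_hidden, n_classes)

    def forward_manifold_mixup(self, x1, x2, lam, mix_layer):
        if mix_layer == 0:                       # mix at the input (ordinary mixup)
            h = self.f1(lam * x1 + (1 - lam) * x2)
        else:                                    # mix the hidden representations
            h = lam * self.f1(x1) + (1 - lam) * self.f1(x2)
        return self.f2(h)

# Usage: labels are mixed with the same lam as the inputs/hidden states
net = TwoLayerNet()
x1, x2 = torch.randn(8, 4), torch.randn(8, 4)
lam = float(torch.distributions.Beta(2.0, 2.0).sample())
layer = random.choice([0, 1])
logits = net.forward_manifold_mixup(x1, x2, lam, layer)
```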