• Corpus ID: 220632874

Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup

@article{Kim2020PuzzleME,
  title={Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup},
  author={Jang-Hyun Kim and Wonho Choo and Hyun Oh Song},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.06962}
}
While deep neural networks achieve great performance on fitting the training distribution, the learned networks are prone to overfitting and are susceptible to adversarial attacks. In this regard, a number of mixup based augmentation methods have been recently proposed. However, these approaches mainly focus on creating previously unseen virtual examples and can sometimes provide misleading supervisory signal to the network. To this end, we propose Puzzle Mix, a mixup method for explicitly… 
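The full method jointly optimizes a mixing mask and a transport plan; as a rough illustration only, here is a minimal Python/NumPy sketch of the block-wise saliency-guided masking idea, with the mask chosen greedily rather than by the paper's optimization (the grid size, the Beta prior, and all helper names are assumptions of this sketch, not the paper's implementation):

```python
import numpy as np

def puzzle_mix_sketch(x1, x2, s1, s2, alpha=1.0, grid=4):
    """Very simplified saliency-guided mixup sketch (NOT the full Puzzle Mix
    optimization): block by block, take whichever source image is more
    salient there, targeting a Beta-sampled mixing ratio.
    x1, x2: (H, W, C) images; s1, s2: (H, W) saliency maps.
    Assumes H and W are divisible by `grid`."""
    lam = np.random.beta(alpha, alpha)          # target mixing ratio
    H, W = x1.shape[:2]
    bh, bw = H // grid, W // grid

    def block_scores(s):
        # Total saliency inside each grid cell.
        return np.array([[s[i*bh:(i+1)*bh, j*bw:(j+1)*bw].sum()
                          for j in range(grid)] for i in range(grid)])

    d = block_scores(s1) - block_scores(s2)     # prefer x1 where it is more salient
    k = int(round(lam * grid * grid))           # number of blocks taken from x1
    order = np.argsort(-d.ravel())
    mask = np.zeros(grid * grid, dtype=bool)
    mask[order[:k]] = True
    mask = mask.reshape(grid, grid)

    out = x2.copy()
    for i in range(grid):
        for j in range(grid):
            if mask[i, j]:
                out[i*bh:(i+1)*bh, j*bw:(j+1)*bw] = x1[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
    lam_eff = mask.mean()                        # realized pixel ratio
    return out, lam_eff
```

Mixing the labels with the realized ratio `lam_eff`, rather than the sampled `lam`, keeps the supervisory signal consistent with what actually ended up in the mixed image, which is the kind of misleading-label problem the paper is addressing.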
Citations

Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing
TLDR
The proposed Mixup variant outperforms current state-of-the-art augmentation strategies not only in classification accuracy but also under stress conditions such as data corruption and object occlusion.
Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity
TLDR
A new perspective on batch mixup is proposed, formulating the optimal construction of a batch of mixup data as maximizing the saliency measure of each individual mixup example while encouraging supermodular diversity among the constructed examples.
StyleMix: Separating Content and Style for Enhanced Data Augmentation
  • Minui Hong, Jinwoo Choi, Gunhee Kim
  • Computer Science
    2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
TLDR
StyleMix and StyleCutMix are proposed as the first mixup methods that manipulate the content and style information of input image pairs separately, along with an automatic scheme that sets the degree of style mixing according to the pair's class distance, preventing messy mixed images from pairs with overly different styles.
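Style mixing of this kind typically builds on adaptive instance normalization (AdaIN); a minimal sketch of that building block, assuming (C, H, W) feature tensors (the shapes and epsilon are choices of this sketch):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """AdaIN-style statistics swap: re-normalize content features to match
    the style features' channel-wise mean and standard deviation.
    content, style: (C, H, W) feature tensors."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / c_std + s_mean
```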
Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup
TLDR
This work hypothesizes and verifies that the core objective of mixup generation is optimizing the local smoothness between two classes subject to global discrimination from other classes, and proposes Scenario-Agnostic Mixup (SAMix), which consistently outperforms leading methods by a large margin.
k-Mixup Regularization for Deep Learning via Optimal Transport
TLDR
This work demonstrates theoretically and in simulations that k-mixup preserves cluster and manifold structures, extends the theory studying the efficacy of standard mixup to k-mixup, and shows that training with k-mixup further improves generalization and robustness on benchmark datasets.
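For equal-size batches with uniform weights, the optimal-transport coupling between two k-sample batches reduces to a minimum-cost assignment, so a minimal sketch can use SciPy's linear assignment solver (the squared-Euclidean cost and Beta parameter are assumed choices of this sketch):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def k_mixup(xa, ya, xb, yb, alpha=0.2):
    """k-mixup sketch: optimally match two k-sample batches, then apply one
    shared mixup ratio to the matched pairs.
    xa, xb: (k, ...) input batches; ya, yb: (k, num_classes) soft labels."""
    k = xa.shape[0]
    flat_a = xa.reshape(k, -1)
    flat_b = xb.reshape(k, -1)
    # Pairwise squared distances -> minimum-cost perfect matching.
    cost = ((flat_a[:, None, :] - flat_b[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    lam = np.random.beta(alpha, alpha)
    x = lam * xa[rows] + (1 - lam) * xb[cols]
    y = lam * ya[rows] + (1 - lam) * yb[cols]
    return x, y
```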
How Does Mixup Help With Robustness and Generalization?
TLDR
It is shown that minimizing the Mixup loss corresponds to approximately minimizing an upper bound on the adversarial loss, which explains why models trained with Mixup exhibit robustness to several kinds of adversarial attacks, such as the Fast Gradient Sign Method.
Observations on K-image Expansion of Image-Mixing Augmentation for Classification
TLDR
A new K-image mixing augmentation based on the stick-breaking process under a Dirichlet prior is derived; extensive experiments and analysis of classification accuracy, the shape of the loss landscape, and adversarial robustness show that it trains more robust and generalized classifiers than the usual two-image methods.
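The stick-breaking construction referenced here is standard; a minimal sketch for drawing K mixing weights that sum to one (the concentration parameter is an assumed choice, and the usage comment is hypothetical):

```python
import numpy as np

def stick_breaking_weights(K, alpha=1.0):
    """Stick-breaking construction for K mixing weights summing to 1:
    repeatedly break off a Beta(1, alpha) fraction of the remaining stick."""
    remaining, w = 1.0, []
    for _ in range(K - 1):
        v = np.random.beta(1.0, alpha)
        w.append(remaining * v)
        remaining *= (1.0 - v)
    w.append(remaining)                  # last piece takes what is left
    return np.array(w)

# Hypothetical usage: mix K images and their labels with the same weights, e.g.
# x = sum(w[i] * xs[i] for i in range(K)); y = sum(w[i] * ys[i] for i in range(K))
```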
Preventing Manifold Intrusion with Locality: Local Mixup
TLDR
In constrained settings it is demonstrated that Local Mixup can create a trade-off between bias and variance, with the extreme cases reducing to vanilla training and classical Mixup.
SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization
TLDR
This work proposes SaliencyMix, a data augmentation strategy that carefully selects a representative image patch with the help of a saliency map and mixes this indicative patch with the target image, leading the model to learn a more appropriate feature representation.
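A minimal sketch of the patch-selection idea, assuming a precomputed (H, W) saliency map `sal` for the source image (the paper obtains it from an off-the-shelf saliency detector; the patch size and names are assumptions here):

```python
import numpy as np

def saliency_mix(xs, ys, xt, yt, sal, size=32):
    """SaliencyMix sketch: cut a patch around the saliency peak of the
    source image, paste it into the target at the same location, and mix
    labels by patch area. xs, xt: (H, W, C); ys, yt: label vectors."""
    H, W = xs.shape[:2]
    cy, cx = np.unravel_index(np.argmax(sal), sal.shape)  # most salient pixel
    y0, y1 = np.clip([cy - size // 2, cy + size // 2], 0, H)
    x0, x1 = np.clip([cx - size // 2, cx + size // 2], 0, W)
    out = xt.copy()
    out[y0:y1, x0:x1] = xs[y0:y1, x0:x1]
    lam = 1 - (y1 - y0) * (x1 - x0) / (H * W)   # target's remaining share
    return out, lam * yt + (1 - lam) * ys
```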

References

Showing 1-10 of 43 references
mixup: Beyond Empirical Risk Minimization
TLDR
This work proposes mixup, a simple learning principle that trains a neural network on convex combinations of pairs of examples and their labels, which improves the generalization of state-of-the-art neural network architectures.
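The method itself fits in a few lines; a standard sketch (the Beta parameter and the one-hot label convention are the usual defaults, not taken from this page):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Vanilla mixup: convex combination of two inputs and their labels,
    with the ratio drawn from Beta(alpha, alpha)."""
    lam = np.random.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2   # y1, y2 are one-hot (or soft) label vectors
    return x, y
```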
Manifold Mixup: Better Representations by Interpolating Hidden States
TLDR
Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations, improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood.
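A minimal sketch of the idea, assuming the network is available as a list of per-layer callables (mixing at depth 0 recovers input-space mixup; these interfaces are assumptions of this sketch):

```python
import numpy as np

def manifold_mixup_forward(layers, x1, x2, y1, y2, alpha=2.0):
    """Manifold Mixup sketch: run two inputs through the same layer stack
    and interpolate their hidden states at one randomly chosen depth."""
    k = np.random.randint(0, len(layers) + 1)   # depth at which to mix
    h1, h2 = x1, x2
    for layer in layers[:k]:
        h1, h2 = layer(h1), layer(h2)
    lam = np.random.beta(alpha, alpha)
    h = lam * h1 + (1.0 - lam) * h2             # mix hidden representations
    for layer in layers[k:]:
        h = layer(h)
    y = lam * y1 + (1.0 - lam) * y2             # labels mixed with the same ratio
    return h, y
```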
MixUp as Locally Linear Out-Of-Manifold Regularization
TLDR
An understanding of MixUp is developed as a form of “out-of-manifold regularization”, which imposes certain “local linearity” constraints on the model’s input space beyond the data manifold. This view enables a novel adaptive version of MixUp, where the mixing policies are automatically learned from the data using an additional network and an objective function designed to avoid manifold intrusion.
CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features
TLDR
Patches are cut and pasted among training images, with the ground-truth labels mixed proportionally to the area of the patches; CutMix consistently outperforms state-of-the-art augmentation strategies on CIFAR and ImageNet classification, as well as on the ImageNet weakly-supervised localization task.
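A minimal sketch of the box sampling and area-proportional label mixing (the Beta parameter is the commonly used default, an assumption here):

```python
import numpy as np

def cutmix(x1, y1, x2, y2, alpha=1.0):
    """CutMix sketch: paste a random box from x2 into x1 and mix the labels
    in proportion to the pasted area. x1, x2: (H, W, C) images."""
    lam = np.random.beta(alpha, alpha)
    H, W = x1.shape[:2]
    # Box with area about (1 - lam) * H * W, centered at a random pixel.
    rh, rw = int(H * np.sqrt(1 - lam)), int(W * np.sqrt(1 - lam))
    cy, cx = np.random.randint(H), np.random.randint(W)
    yb0, yb1 = np.clip([cy - rh // 2, cy + rh // 2], 0, H)
    xb0, xb1 = np.clip([cx - rw // 2, cx + rw // 2], 0, W)
    out = x1.copy()
    out[yb0:yb1, xb0:xb1] = x2[yb0:yb1, xb0:xb1]
    lam_eff = 1 - (yb1 - yb0) * (xb1 - xb0) / (H * W)  # adjust for clipping
    y = lam_eff * y1 + (1 - lam_eff) * y2
    return out, y
```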
Improved Regularization of Convolutional Neural Networks with Cutout
TLDR
This paper shows that the simple regularization technique of randomly masking out square regions of the input during training, called cutout, can be used to improve the robustness and overall performance of convolutional neural networks.
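A minimal sketch, assuming images as (H, W, C) arrays and a fixed mask size (the size default is an assumption):

```python
import numpy as np

def cutout(x, size=16):
    """Cutout sketch: zero out a square region at a random location. The
    square may be clipped at the image border, as in the usual implementation."""
    H, W = x.shape[:2]
    cy, cx = np.random.randint(H), np.random.randint(W)
    y0, y1 = np.clip([cy - size // 2, cy + size // 2], 0, H)
    x0, x1 = np.clip([cx - size // 2, cx + size // 2], 0, W)
    out = x.copy()
    out[y0:y1, x0:x1] = 0.0
    return out
```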
AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
TLDR
AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half.
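A minimal sketch of the mixing scheme, assuming `augment` is any callable that applies one random primitive operation (a hypothetical helper); the paper's Jensen-Shannon consistency loss is omitted:

```python
import numpy as np

def augmix(x, augment, k=3, depth=3, alpha=1.0):
    """AugMix sketch: blend k randomly composed augmentation chains with
    Dirichlet weights, then mix the blend back with the original image
    using a Beta-sampled weight. x: float image array in [0, 1]."""
    w = np.random.dirichlet([alpha] * k)
    mixed = np.zeros_like(x, dtype=float)
    for i in range(k):
        xi = x.astype(float)
        for _ in range(np.random.randint(1, depth + 1)):  # chain of 1..depth ops
            xi = augment(xi)
        mixed += w[i] * xi
    m = np.random.beta(alpha, alpha)
    return m * x + (1 - m) * mixed
```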
Fast is better than free: Revisiting adversarial training
TLDR
This work makes the surprising discovery that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach previously believed to be ineffective, rendering the method no more costly than standard training in practice.
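The cheap adversary amounts to a single FGSM step from a random start inside the epsilon-ball; a minimal sketch, assuming `grad_fn(x)` returns the input gradient of the training loss (a hypothetical helper, as are the default epsilon and step size):

```python
import numpy as np

def fgsm_adversary(x, grad_fn, eps=8/255, step=1.25 * 8/255):
    """Single-step adversary from fast adversarial training: random start
    inside the eps-ball, then one signed-gradient step, projected back.
    x: float image array in [0, 1]."""
    delta = np.random.uniform(-eps, eps, size=x.shape)          # random start
    delta = np.clip(delta + step * np.sign(grad_fn(x + delta)), -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)
```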
Learning Deep Features for Discriminative Localization
In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability
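With global average pooling feeding a linear classifier, the localization map (class activation map) is just a classifier-weighted sum of the last convolutional feature maps; a minimal sketch (shapes and names are assumptions of this sketch):

```python
import numpy as np

def class_activation_map(features, fc_weights, cls):
    """CAM sketch: weight the final conv feature maps by the classifier
    weights of the chosen class. features: (C, H, W); fc_weights:
    (num_classes, C). Returns an (H, W) localization heatmap."""
    return np.tensordot(fc_weights[cls], features, axes=1)
```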
Saliency detection by multi-context deep learning
TLDR
This paper proposes a multi-context deep learning framework for salient object detection that employs deep Convolutional Neural Networks to model saliency of objects in images and investigates different pre-training strategies to provide a better initialization for training the deep neural networks.
Understanding deep learning requires rethinking generalization
TLDR
These experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data, and confirm that simple depth-two neural networks already have perfect finite-sample expressivity.