Corpus ID: 226254079

Data Augmentation via Structured Adversarial Perturbations

@article{Luo2020DataAV,
  title={Data Augmentation via Structured Adversarial Perturbations},
  author={Calvin Luo and Hossein Mobahi and Samy Bengio},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.03010}
}
Data augmentation is a major component of many machine learning methods with state-of-the-art performance. Common augmentation strategies work by drawing random samples from a space of transformations. Unfortunately, such sampling approaches are limited in expressivity, as they are unable to scale to rich transformations that depend on numerous parameters due to the curse of dimensionality. Adversarial examples can be considered as an alternative scheme for data augmentation. By being trained… 
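
To make the contrast concrete, the sketch below illustrates the generic baseline the abstract alludes to: using adversarial examples (here, a simple FGSM-style pixel-space perturbation, not the paper's structured/semantic perturbations) as extra training data. This is a minimal illustrative sketch; the names `fgsm_augment`, `train_step`, `model`, `optimizer`, and the budget `epsilon` are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: adversarial examples as data augmentation (FGSM-style).
# Assumes a classifier `model`, an `optimizer`, and batches (x, y) with
# pixel values in [0, 1]; epsilon is the perturbation budget.
import torch
import torch.nn.functional as F

def fgsm_augment(model, x, y, epsilon=0.03):
    """Return adversarially perturbed copies of x via the fast gradient sign method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    return (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

def train_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on the original batch plus its adversarial augmentations."""
    model.train()
    x_aug = fgsm_augment(model, x, y, epsilon)
    batch_x = torch.cat([x, x_aug])
    batch_y = torch.cat([y, y])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch_x), batch_y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the perturbation is optimized per example rather than sampled from a fixed transformation family, this kind of augmentation sidesteps the curse of dimensionality noted above; the paper's contribution is to constrain such perturbations to structured transformations rather than raw pixel noise.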

Citations

Semantic Perturbations with Normalizing Flows for Improved Generalization
TLDR
It is found that latent adversarial perturbations that adapt to the classifier throughout its training are the most effective, yielding the first test-accuracy improvements from latent-space perturbation on the real-world datasets CIFAR-10/100.
Using self-supervision and augmentations to build insights into neural coding
TLDR
Recent progress in the application of self-supervised learning to data analysis in neuroscience is highlighted, the implications of these results are discussed, and ways in which SSL might be applied to reveal interesting properties of neural computation are suggested.

References

Showing 1-10 of 54 references
Generating Natural Adversarial Examples
TLDR
This paper proposes a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching the semantic space of a dense and continuous data representation, using recent advances in generative adversarial networks.
Constructing Unrestricted Adversarial Examples with Generative Models
TLDR
The empirical results on the MNIST, SVHN, and CelebA datasets show that unrestricted adversarial examples can bypass strong adversarial training and certified defense methods designed for traditional adversarial attacks.
Adversarial Feature Augmentation for Unsupervised Domain Adaptation
TLDR
This work forces the learned feature extractor to be domain-invariant and trains it through data augmentation in feature space, performed by a feature generator trained to play the GAN minimax game against source features.
Adversarial AutoAugment
TLDR
An adversarial, computationally affordable method called Adversarial AutoAugment is proposed, which simultaneously optimizes the target-related objective and the augmentation policy search loss, demonstrating significant performance improvements over the state of the art.
Learning to Compose Domain-Specific Transformations for Data Augmentation
TLDR
The proposed method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, is trained on unlabeled data, and can be used to perform data augmentation for any end discriminative model.
Big but Imperceptible Adversarial Perturbations via Semantic Manipulation
TLDR
Two novel methods, tAdv and cAdv, are proposed, which leverage texture transfer and colorization to generate natural perturbations with large $\mathcal{L}_p$ norm and are general enough to attack both image classification and image captioning on the ImageNet and MSCOCO datasets.
Semantic Adversarial Examples
TLDR
This paper introduces a new class of adversarial examples, namely "Semantic Adversarial Examples," as images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original image.
Unrestricted Adversarial Examples via Semantic Manipulation
TLDR
This paper introduces "unrestricted" perturbations that manipulate semantically meaningful image-based visual descriptors -- color and texture -- in order to generate effective and photorealistic adversarial examples.
MA3: Model Agnostic Adversarial Augmentation for Few Shot learning
TLDR
This paper explores few-shot learning with a novel augmentation technique that learns the probability distribution over image transformation parameters, a distribution that is easier and quicker to learn.
Explaining and Harnessing Adversarial Examples
TLDR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
...