Corpus ID: 226254079

# Data Augmentation via Structured Adversarial Perturbations

```bibtex
@article{Luo2020DataAV,
  title={Data Augmentation via Structured Adversarial Perturbations},
  author={Calvin Luo and Hossein Mobahi and Samy Bengio},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.03010}
}
```
• Published 5 November 2020 • Computer Science • ArXiv
Data augmentation is a major component of many machine learning methods with state-of-the-art performance. Common augmentation strategies work by drawing random samples from a space of transformations. Unfortunately, such sampling approaches are limited in expressivity, as they are unable to scale to rich transformations that depend on numerous parameters due to the curse of dimensionality. Adversarial examples can be considered as an alternative scheme for data augmentation. By being trained…
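The abstract frames adversarial examples as an alternative augmentation scheme: perturb each input in the direction that increases the training loss, then train on the perturbed copies as well. A minimal sketch for a linear classifier, using a standard fast-gradient-sign step (the `fgsm_augment` helper, the `eps` budget, and the logistic model are illustrative assumptions, not the paper's structured perturbations):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_augment(X, y, w, eps=0.1):
    """Double the dataset with fast-gradient-sign adversarial copies.
    X: (n, d) inputs, y: (n,) labels in {-1, +1}, w: (d,) linear weights.
    """
    # Gradient of the logistic loss -log(sigmoid(y * x@w)) with respect to x
    margins = y * (X @ w)                               # (n,)
    grad = (-y * sigmoid(-margins))[:, None] * w[None, :]
    # Loss-increasing step at the corner of the L_inf ball of radius eps
    X_adv = X + eps * np.sign(grad)
    return np.vstack([X, X_adv]), np.concatenate([y, y])

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
y = np.where(rng.random(8) < 0.5, -1.0, 1.0)
w = rng.normal(size=4)
X_aug, y_aug = fgsm_augment(X, y, w)   # 16 examples: 8 clean + 8 adversarial
```

For a linear model the sign step is guaranteed to shrink the margin, so every adversarial copy is strictly harder than its clean counterpart; richer, structured transformations replace the raw pixel step with a search over transformation parameters.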
## 2 Citations

Semantic Perturbations with Normalizing Flows for Improved Generalization
• Computer Science • 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
Latent adversarial perturbations that adapt to the classifier throughout its training are found to be most effective, yielding the first test-accuracy improvements on real-world datasets (CIFAR-10/100) obtained via latent-space perturbation.
Using self-supervision and augmentations to build insights into neural coding
This work highlights recent progress in applying self-supervised learning (SSL) to data analysis in neuroscience, discusses the implications of these results, and suggests ways in which SSL might reveal interesting properties of neural computation.

## References

Showing 1–10 of 54 references
• Computer Science • ICLR 2018
This paper proposes a framework to generate natural and legible adversarial examples that lie on the data manifold by searching the semantic space of a dense, continuous data representation, utilizing recent advances in generative adversarial networks.
Constructing Unrestricted Adversarial Examples with Generative Models
• Computer Science • NeurIPS 2018
The empirical results on the MNIST, SVHN, and CelebA datasets show that unrestricted adversarial examples can bypass strong adversarial training and certified defense methods designed for traditional adversarial attacks.
• Computer Science • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
This work forces the learned feature extractor to be domain-invariant and trains it through data augmentation in the feature space (feature augmentation), using a feature generator trained by playing the GAN minimax game against source features.
Adversarial AutoAugment
• Computer Science • ICLR 2020
An adversarial method is proposed to arrive at a computationally affordable solution, called Adversarial AutoAugment, which can simultaneously optimize the target-related objective and the augmentation policy search loss, demonstrating significant performance improvements over the state of the art.
Learning to Compose Domain-Specific Transformations for Data Augmentation
• Computer Science • NIPS 2017
The proposed method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, and is trained on unlabeled data; it can perform data augmentation for any end discriminative model.
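A compositional augmenter of this kind can be sketched by chaining randomly chosen transformation functions (TFs); here the TFs `swap_adjacent` and `drop_token` and the uniform-random sequence are hypothetical stand-ins for the learned, domain-specific composition policy:

```python
import random

# Illustrative non-deterministic TFs operating on a token list; the
# actual method learns which TF sequences to apply, rather than
# sampling them uniformly as this sketch does.
def swap_adjacent(tokens, rng):
    if len(tokens) < 2:
        return tokens
    i = rng.randrange(len(tokens) - 1)
    out = list(tokens)
    out[i], out[i + 1] = out[i + 1], out[i]
    return out

def drop_token(tokens, rng):
    if len(tokens) < 2:
        return tokens
    out = list(tokens)
    del out[rng.randrange(len(out))]
    return out

def compose_tfs(tokens, tfs, length, rng):
    """Apply a randomly chosen sequence of `length` TFs."""
    for _ in range(length):
        tokens = rng.choice(tfs)(tokens, rng)
    return tokens

rng = random.Random(0)
augmented = compose_tfs(["a", "b", "c", "d"], [swap_adjacent, drop_token], 3, rng)
```

Because each TF is a black-box function, the same composition machinery works for images, text, or any other modality, which is what makes the approach model-agnostic.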
Big but Imperceptible Adversarial Perturbations via Semantic Manipulation
• Computer Science • ArXiv 2019
Two novel methods, tAdv and cAdv, are proposed, which leverage texture transfer and colorization to generate natural perturbations with a large $\mathcal{L}_p$ norm; they are general enough to attack both image classification and image captioning on the ImageNet and MSCOCO datasets.
Semantic Adversarial Examples
• Computer Science • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
This paper introduces a new class of adversarial examples, "Semantic Adversarial Examples": images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original.
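A toy illustration of such a semantics-preserving perturbation is a global hue shift: pixel values move arbitrarily far in RGB space while the depicted object stays recognizably the same (the `hue_shift` helper below is an illustrative assumption, not the paper's attack procedure):

```python
import colorsys

def hue_shift(rgb_pixels, shift):
    """Shift the hue of each (r, g, b) pixel in [0, 1] by `shift`
    (a fraction of the color wheel). Saturation, brightness, and
    spatial structure are untouched, so semantic content is preserved
    even though the RGB-space change can be large."""
    out = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        out.append(colorsys.hsv_to_rgb((h + shift) % 1.0, s, v))
    return out

pixels = [(1.0, 0.0, 0.0), (0.0, 0.5, 0.5)]
shifted = hue_shift(pixels, 1 / 3)  # pure red becomes (approximately) pure green
```

The point of the reference is that such large-norm but semantics-preserving changes fall outside the small-$\mathcal{L}_p$ threat model that most defenses assume.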
Unrestricted Adversarial Examples via Semantic Manipulation
• Computer Science • ICLR 2020
This paper introduces "unrestricted" perturbations that manipulate semantically meaningful image-based visual descriptors -- color and texture -- in order to generate effective and photorealistic adversarial examples.
MA3: Model Agnostic Adversarial Augmentation for Few Shot learning
• Computer Science • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
This paper explores few-shot learning with a novel augmentation technique that learns the probability distribution over image transformation parameters, which is easier and quicker to learn.