Corpus ID: 211020636

A Differentiable Color Filter for Generating Unrestricted Adversarial Images

@article{Zhao2020ADC,
  title={A Differentiable Color Filter for Generating Unrestricted Adversarial Images},
  author={Zhengyu Zhao and Zhuoran Liu and Martha Larson},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.01008}
}
We propose Adversarial Color Filtering (AdvCF), an approach that uses a differentiable color filter to create adversarial images. The color filter allows us to introduce large perturbations into images, while still maintaining or enhancing their photographic quality and appeal. AdvCF is motivated by properties that are necessary if adversarial images are to be used to protect the content of images shared online from unethical machine learning classifiers: First, perturbations must be… 
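The abstract describes the filter only at a high level, so the following is a minimal sketch of the general idea rather than the authors' implementation: a per-channel, monotonic piecewise-linear color curve whose parameters are optimized by gradient descent against a classifier. The names (`apply_color_filter`, `advcf_style_attack`), the curve parameterization, and all hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def apply_color_filter(img, params):
    """Differentiable per-channel monotonic piecewise-linear color curve.

    img:    (B, 3, H, W) tensor with values in [0, 1]
    params: (3, K) unconstrained parameters, one row per RGB channel
    """
    k = params.shape[1]
    # Positive slopes that average to 1 keep each curve a monotonic map
    # from [0, 1] onto [0, 1], so the filtered output is a valid image.
    slopes = k * F.softmax(params, dim=1)              # (3, K)
    out = torch.zeros_like(img)
    for i in range(k):
        seg = (img - i / k).clamp(0.0, 1.0 / k)        # i-th curve segment
        out = out + slopes[:, i].view(1, 3, 1, 1) * seg
    return out

def advcf_style_attack(model, img, label, k=8, steps=100, lr=0.05):
    """Optimize the color curve so the filtered image is misclassified."""
    params = torch.zeros(3, k, requires_grad=True)     # identity curve at init
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        loss = -F.cross_entropy(model(apply_color_filter(img, params)), label)
        opt.zero_grad()
        loss.backward()                                # untargeted attack
        opt.step()
    return apply_color_filter(img, params).detach()
```

Because the perturbation lives in the low-dimensional space of color-curve parameters rather than in pixel space, it can be large in $\ell_p$ terms while still resembling an ordinary photographic filter.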

Citations

Adversarial Training against Location-Optimized Adversarial Patches

This work first devises a practical approach to obtain adversarial patches while actively optimizing their location within the image, then applies adversarial training on these location-optimized adversarial patches and demonstrates significantly improved robustness on CIFAR10 and GTSRB.

References

Showing 1-10 of 22 references

Semantic Adversarial Examples

This paper introduces a new class of adversarial examples, "Semantic Adversarial Examples": images that are arbitrarily perturbed to fool the model, but in such a way that the modified image still semantically represents the same object as the original.

ColorFool: Semantic Adversarial Colorization

This paper proposes a content-based black-box adversarial attack that generates unrestricted perturbations by exploiting image semantics to selectively modify colors within ranges that humans perceive as natural; it outperforms existing attacks in terms of success rate, robustness to defense frameworks, and transferability.

Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers

This paper proposes a novel approach to generate semantic adversarial examples by optimizing a particular adversarial loss over the range-space of a parametric conditional generative model, and demonstrates implementations of this approach on binary classifiers trained on face images.

A General Framework for Adversarial Examples with Objectives

This article proposes adversarial generative nets (AGNs), a general methodology to train a generator neural network to emit adversarial examples satisfying desired objectives, and demonstrates the ability of AGNs to accommodate a wide range of objectives, including imprecise ones difficult to model, in two application domains.

ADef: an Iterative Algorithm to Construct Adversarial Deformations

The ADef algorithm constructs a different kind of adversarial attack by iteratively applying small deformations to the image, each found through a gradient descent step.

Spatially Transformed Adversarial Examples

Perturbations generated through spatial transformation can result in large $\mathcal{L}_p$ distances, but extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.
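The core mechanism, paraphrased here as a hedged sketch rather than that paper's exact formulation, is that bilinear resampling is differentiable, so a per-pixel flow field can be optimized end to end against a classifier loss; the function name and tensor layout below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Differentiably resample img along a per-pixel flow field.

    img:  (B, C, H, W);  flow: (B, H, W, 2) offsets in normalized units
    """
    b, _, h, w = img.shape
    # Identity sampling grid in normalized [-1, 1] (x, y) coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=img.device),
        torch.linspace(-1, 1, w, device=img.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Bilinear interpolation keeps the warp differentiable w.r.t. flow, so
    # flow can be optimized with an adversarial loss (typically plus a
    # smoothness penalty on flow to keep the deformation imperceptible).
    return F.grid_sample(img, grid + flow, align_corners=True)
```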

Functional Adversarial Attacks

It is shown that functional threat models can be combined with existing additive ($\ell_p$) threat models to generate stronger attacks that allow both small individual perturbations and large uniform changes to an input.

SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing

An algorithm is proposed which leverages disentangled semantic factors to generate adversarial perturbation by altering controlled semantic attributes to fool the learner towards various "adversarial" targets.

On the Suitability of Lp-Norms for Creating and Preventing Adversarial Examples

It is demonstrated that nearness of inputs as measured by Lp-norms is neither necessary nor sufficient for perceptual similarity, which has implications for both creating and defending against adversarial examples.
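A small constructed illustration of the "not necessary" direction (this example is ours, not taken from the paper): shifting an image by one pixel is perceptually negligible for a photograph, yet it produces a much larger L2 distance than faint additive noise does.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                      # stand-in grayscale image

shifted = np.roll(img, 1, axis=1)               # 1-pixel translation
noisy = img + rng.normal(0.0, 0.01, img.shape)  # barely visible noise

# The translation, though visually negligible on a natural photo, is far
# "larger" under L2 than the noise, so small Lp distance is not necessary
# for perceptual similarity (nor, symmetrically, sufficient for it).
print(np.linalg.norm(shifted - img))            # large (tens)
print(np.linalg.norm(noisy - img))              # small (well under 1)
```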

Pixel Privacy: Increasing Image Appeal while Blocking Automatic Inference of Sensitive Scene Information

This paper proposes a new privacy task focused on images that users share online, concentrating on a set of 60 scene categories, selected from the Places365-Standard dataset, that can be considered privacy-sensitive.