Corpus ID: 92983361

Adversarial camera stickers: A physical camera-based attack on deep learning systems

@article{Li2019AdversarialCS,
  title={Adversarial camera stickers: A physical camera-based attack on deep learning systems},
  author={Juncheng Li and Frank R. Schmidt and J. Zico Kolter},
  journal={ArXiv},
  year={2019},
  volume={abs/1904.00759}
}
Recent work has documented the susceptibility of deep learning systems to adversarial examples, but most such attacks directly manipulate the digital input to a classifier. Although a smaller line of work considers physical adversarial attacks, in all cases these involve manipulating the object of interest, e.g., putting a physical sticker on an object to misclassify it, or manufacturing an object specifically intended to be misclassified. In this work, we consider an alternative question: is…
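
The attack idea in the abstract, perturbing what the camera sees rather than the object itself, can be illustrated with a rough optimization loop. The PyTorch sketch below is a minimal illustration, assuming the sticker is modeled as a handful of translucent, Gaussian-shaped dots alpha-blended onto every image, with dot colors and opacities optimized to steer a pretrained classifier toward a target class. The dot parameterization, the fixed geometry, the target class index, and the random stand-in images are all assumptions for illustration, not the paper's exact formulation.

import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen pretrained classifier (inputs here are unnormalized; a real pipeline
# would apply the usual ImageNet normalization before the forward pass).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

H = W = 224
NUM_DOTS = 6   # assumed number of sticker dots
TARGET = 610   # hypothetical target class index

# Fixed dot geometry (random centers, constant radius); only the colors and
# opacities are optimized in this sketch.
centers = torch.rand(NUM_DOTS, 2, device=device) * H
sigma = torch.full((NUM_DOTS,), 25.0, device=device)

# Learnable per-dot RGB color and peak opacity, squashed into [0, 1] below.
color_logits = torch.zeros(NUM_DOTS, 3, device=device, requires_grad=True)
alpha_logits = torch.zeros(NUM_DOTS, device=device, requires_grad=True)

ys = torch.arange(H, device=device).float().view(H, 1)
xs = torch.arange(W, device=device).float().view(1, W)

def apply_sticker(img):
    """Alpha-composite the translucent dots onto images of shape (N, 3, H, W)."""
    out = img
    for k in range(NUM_DOTS):
        d2 = (ys - centers[k, 0]) ** 2 + (xs - centers[k, 1]) ** 2
        mask = torch.exp(-d2 / (2 * sigma[k] ** 2))       # soft circular dot
        a = torch.sigmoid(alpha_logits[k]) * mask          # per-pixel opacity
        c = torch.sigmoid(color_logits[k]).view(3, 1, 1)   # dot color
        out = (1 - a) * out + a * c
    return out

opt = torch.optim.Adam([color_logits, alpha_logits], lr=0.05)
images = torch.rand(8, 3, H, W, device=device)  # stand-in for real photos
labels = torch.full((8,), TARGET, device=device)

for step in range(200):
    logits = model(apply_sticker(images))
    loss = F.cross_entropy(logits, labels)  # pull every image toward TARGET
    opt.zero_grad()
    loss.backward()
    opt.step()

Because the same dots overlay every image, the resulting perturbation is universal across inputs, which is what makes a camera-level attack different from per-image digital attacks. A real physical attack would additionally need the pattern to be printable and robust to lens blur and viewpoint, constraints this sketch ignores.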


    Citations

    Publications citing this paper (a selection of the 17 citing publications):

    • Evading Real-Time Person Detectors by Adversarial T-shirt
    • Countering Adversarial Examples by Means of Steganographic Attacks
    • Defending Against Universal Perturbations With Shared Adversarial Training
    • PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning

    References

    Publications referenced by this paper (a selection of the 23 references):

    • Adversarial Attacks Beyond the Image Space
    • Adversarial examples in the physical world (highly influential)
    • One Pixel Attack for Fooling Deep Neural Networks
    • Synthesizing Robust Adversarial Examples (highly influential)
    • Universal Adversarial Perturbations