Corpus ID: 54440977

SentiNet: Detecting Physical Attacks Against Deep Learning Systems

@article{Chou2018SentiNetDP,
  title={SentiNet: Detecting Physical Attacks Against Deep Learning Systems},
  author={Edward Chou and Florian Tram{\`e}r and Giancarlo Pellegrino and Dan Boneh},
  journal={ArXiv},
  year={2018},
  volume={abs/1812.00292}
}
  • Edward Chou, Florian Tramèr, Giancarlo Pellegrino, Dan Boneh
  • Published 2018
  • Computer Science
  • ArXiv
  • SentiNet is a novel detection framework for physical attacks on neural networks, a class of attacks that constrains an adversarial region to a visible portion of an image. [...] We demonstrate the effectiveness of SentiNet on three different attacks - i.e., adversarial examples, data poisoning attacks, and trojaned networks - that have large variations in deployment mechanisms, and show that our defense is able to achieve very competitive performance metrics for all three threats, even against strong…

    Citations

    Publications citing this paper.
    SHOWING 1-10 OF 36 CITATIONS

    STRIP: a defence against trojan attacks on deep neural networks


    Defending Against Physically Realizable Attacks on Image Classification



    Detecting Patch Adversarial Attacks with Image Residuals


    AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning


    CITATION STATISTICS

    • 6 Highly Influenced Citations

    • Averaged 12 Citations per year from 2018 through 2020
