Corpus ID: 3458858

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

@inproceedings{Samangouei2018DefenseGANPC,
  title={Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models},
  author={Pouya Samangouei and Maya Kabkab and Rama Chellappa},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2018},
  note={arXiv:1805.06605}
}
  • Pouya Samangouei, Maya Kabkab, Rama Chellappa
  • Published in ICLR 2018
  • Computer Science, Mathematics
  • In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of…
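
    The abstract outlines the defense mechanism: before classification, the input image x is projected onto the range of a trained GAN generator G by solving z* = argmin_z ||G(z) - x||_2^2 with gradient descent from several random initializations, and the classifier is then applied to the reconstruction G(z*) instead of x. Below is a minimal PyTorch-style sketch of that projection step; the generator G, the classifier, and all hyperparameter names and values here are illustrative assumptions, not the authors' released implementation.

        import torch

        def defense_gan_reconstruct(G, x, latent_dim=128, n_restarts=10,
                                    n_steps=200, lr=0.05):
            # Approximate z* = argmin_z ||G(z) - x||^2 by gradient descent,
            # restarted from several random draws of z. (Hyperparameters are
            # illustrative; the paper tunes the number of restarts and steps.)
            best_z, best_loss = None, float("inf")
            for _ in range(n_restarts):
                z = torch.randn(1, latent_dim, requires_grad=True)
                opt = torch.optim.Adam([z], lr=lr)
                for _ in range(n_steps):
                    opt.zero_grad()
                    loss = ((G(z) - x) ** 2).sum()  # squared L2 reconstruction error
                    loss.backward()
                    opt.step()
                with torch.no_grad():
                    final = ((G(z) - x) ** 2).sum().item()
                if final < best_loss:
                    best_loss, best_z = final, z.detach()
            return G(best_z)  # "purified" image, fed to the classifier in place of x

        # Usage (hypothetical generator/classifier objects):
        #   x_clean = defense_gan_reconstruct(generator, x_adversarial)
        #   logits = classifier(x_clean)

    Because the classifier only ever sees outputs of G, the defense makes no assumption about how the adversarial examples were generated and requires no change to the classifier's architecture or training, which is the property the paper emphasizes.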

    Citations

    Publications citing this paper.
    SHOWING 1-10 OF 311 CITATIONS

    Minimax Defense against Gradient-based Adversarial Attacks

    CITES METHODS, BACKGROUND & RESULTS
    HIGHLY INFLUENCED

    Adversarial Examples in Modern Machine Learning: A Review

    CITES BACKGROUND & RESULTS
    HIGHLY INFLUENCED

    Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss

    CITES BACKGROUND & METHODS
    HIGHLY INFLUENCED

    MimicGAN: Robust Projection onto Image Manifolds with Corruption Mimicking

    CITES BACKGROUND, METHODS & RESULTS
    HIGHLY INFLUENCED

    A Study of Black Box Adversarial Attacks in Computer Vision

    CITES BACKGROUND
    HIGHLY INFLUENCED

    APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection

    CITES METHODS & BACKGROUND
    HIGHLY INFLUENCED

    Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN

    CITES BACKGROUND, METHODS & RESULTS
    HIGHLY INFLUENCED

    HAD-GAN: A Human-perception Auxiliary Defense GAN model to Defend Adversarial Examples

    CITES METHODS
    HIGHLY INFLUENCED

    Hilbert-Based Generative Defense for Adversarial Examples

    CITES BACKGROUND
    HIGHLY INFLUENCED

    CITATION STATISTICS

    • 38 Highly Influenced Citations

    • Averaged 90 citations per year from 2017 through 2019

    • 95% increase in citations per year in 2019 over 2018

    References

    Publications referenced by this paper.
    SHOWING 1-10 OF 22 REFERENCES

    Towards Evaluating the Robustness of Neural Networks

    HIGHLY INFLUENTIAL

    Improved Training of Wasserstein GANs

    HIGHLY INFLUENTIAL

    MagNet: A Two-Pronged Defense against Adversarial Examples

    HIGHLY INFLUENTIAL

    Practical Black-Box Attacks against Machine Learning

    HIGHLY INFLUENTIAL

    Generative Adversarial Nets

    HIGHLY INFLUENTIAL

    Explaining and Harnessing Adversarial Examples

    HIGHLY INFLUENTIAL

    cleverhans v1.0.0: an adversarial machine learning library

    • Nicolas Papernot, Ian Goodfellow, Ryan Sheatsley, Reuben Feinman, Patrick McDaniel
    • arXiv preprint arXiv:1610.00768, 2016
    HIGHLY INFLUENTIAL

    Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks

    HIGHLY INFLUENTIAL