NAG: Network for Adversary Generation

@article{Mopuri2018NAGNF,
  title={NAG: Network for Adversary Generation},
  author={Konda Reddy Mopuri and Utkarsh Ojha and Utsav Garg and R. Venkatesh Babu},
  journal={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2018},
  pages={742-751}
}

Adversarial perturbations can pose a serious threat to the deployment of machine learning systems. Recent works have shown the existence of image-agnostic perturbations that can fool classifiers over most natural images. Existing methods present optimization approaches that solve for a fooling objective with an imperceptibility constraint to craft the perturbations. However, for a given classifier, they generate one perturbation at a time, which is a single instance from the manifold of adversarial perturbations.
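
The abstract names two training signals for a GAN-inspired generator that maps latent vectors to image-agnostic perturbations: a fooling objective and a diversity objective. The following is a minimal PyTorch-style sketch of how such a pair of losses could look; the generator and classifier modules, the eps budget, and the exact loss forms are illustrative assumptions, not the authors' reference implementation.

import torch
import torch.nn.functional as F

def nag_style_losses(generator, classifier, images, z1, z2, eps=10.0 / 255):
    """Sketch of NAG-style fooling + diversity losses (illustrative, not the paper's code).

    generator:  assumed module mapping a latent vector to an image-shaped tensor.
    classifier: frozen target model returning logits; its weights are not updated.
    """
    # Two perturbations from independent latents, squashed into an assumed
    # L_inf-style imperceptibility budget of eps.
    v1 = eps * torch.tanh(generator(z1))
    v2 = eps * torch.tanh(generator(z2))

    logits_clean = classifier(images)
    logits_adv1 = classifier((images + v1).clamp(0.0, 1.0))
    logits_adv2 = classifier((images + v2).clamp(0.0, 1.0))

    # Fooling objective: suppress the confidence the classifier assigns,
    # on the perturbed input, to its own clean-image prediction.
    clean_pred = logits_clean.argmax(dim=1, keepdim=True)
    p_clean_label = F.softmax(logits_adv1, dim=1).gather(1, clean_pred)
    fooling_loss = -torch.log(1.0 - p_clean_label + 1e-8).mean()

    # Diversity objective: push the two perturbed batches apart so the
    # generator covers many perturbations instead of collapsing to one.
    diversity_loss = -F.pairwise_distance(logits_adv1, logits_adv2).mean()

    # A relative weighting between the two terms is another free choice.
    return fooling_loss + diversity_loss

In the paper the diversity term is measured in a feature space of the target classifier; the sketch uses output logits only to keep the example self-contained.
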
    43 Citations

    Transferable Universal Adversarial Perturbations Using Generative Models
    Universal Adversarial Perturbations: A Survey (2 citations; Highly Influenced)
    Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations (55 citations)
    GAP++: Learning to generate target-conditioned adversarial examples (1 citation)
    Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers (20 citations)
    Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks (10 citations; Highly Influenced)
    Universal Adversarial Training (31 citations)
    Adversarial Defense via Learning to Generate Diverse Attacks (11 citations)
    One Sparse Perturbation to Fool them All, almost Always!
