Understanding Adversarial Examples From the Mutual Influence of Images and Perturbations

@inproceedings{Zhang2020UnderstandingAE,
  title={Understanding Adversarial Examples From the Mutual Influence of Images and Perturbations},
  author={C. Zhang and Philipp Benz and T. Imtiaz and In-So Kweon},
  booktitle={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={14509--14518}
}
  • C. Zhang, Philipp Benz, T. Imtiaz, In-So Kweon
  • Published 2020
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • Many works have explored why adversarial examples exist, but there is no consensus on the explanation. We propose to treat the DNN logits as a vector for feature representation, and exploit them to analyze the mutual influence of two independent inputs based on the Pearson correlation coefficient (PCC). We utilize this vector representation to understand adversarial examples by disentangling the clean images and adversarial perturbations, and analyze their…
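The core measurement described in the abstract can be sketched in a few lines. Below is a minimal, illustrative Python sketch, not the authors' released code: the model choice (a torchvision ResNet-50) and the random tensors standing in for a clean image and a perturbation are assumptions for demonstration only. It treats the network's logits as feature vectors and computes the PCC between the logit responses to a clean image, a perturbation fed alone, and their sum.

# Minimal sketch (assumed setup, not the authors' implementation):
# treat DNN logits as feature vectors and measure the Pearson
# correlation coefficient (PCC) between the logit vectors of a clean
# image x, a perturbation v fed alone, and the adversarial input x + v.
import numpy as np
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()  # any classifier works here

def logits(x):
    # Forward a single CHW tensor through the model; return the logit
    # vector as a flat numpy array.
    with torch.no_grad():
        return model(x.unsqueeze(0)).squeeze(0).numpy()

def pcc(a, b):
    # Pearson correlation coefficient between two logit vectors.
    return float(np.corrcoef(a, b)[0, 1])

x = torch.rand(3, 224, 224)          # stand-in for a preprocessed clean image
v = 0.1 * torch.randn(3, 224, 224)   # stand-in for an adversarial perturbation

# Disentangle the adversarial example x + v into its two components and
# ask which one dominates the network's response in logit space.
print("PCC(logits(x), logits(x+v)):", pcc(logits(x), logits(x + v)))
print("PCC(logits(v), logits(x+v)):", pcc(logits(v), logits(x + v)))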
    8 Citations

    • Double Targeted Universal Adversarial Perturbations (4 citations)
    • Batch Normalization Increases Adversarial Vulnerability: Disentangling Usefulness and Robustness of Model Features (3 citations)
    • Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy (1 citation)
    • On Success and Simplicity: A Second Look at Transferable Targeted Attacks
    • Revisiting Batch Normalization for Improving Corruption Robustness (3 citations)
    • ResNet or DenseNet? Introducing Dense Shortcuts to ResNet (3 citations)
    • Data from Model: Extracting Data from Non-robust and Robust Models (4 citations)
