Corpus ID: 211069303

ABBA: Saliency-Regularized Motion-Based Adversarial Blur Attack

@article{guo2020abba,
  title={ABBA: Saliency-Regularized Motion-Based Adversarial Blur Attack},
  author={Qing Guo and Felix Juefei-Xu and Xiaofei Xie and Lei Ma and Jia-xiang Wang and Wei Feng and Yang Liu},
  journal={ArXiv},
  year={2020}
}
  • Published in ArXiv 2020
  • Computer Science
  • Deep neural networks are vulnerable to noise-based adversarial examples, which can mislead the networks by adding random-like noise. However, such examples are hardly found in the real world and easily perceived when thumping noises are used to keep their high transferability across different models. In this paper, we identify a new attacking method termed motion-based adversarial blur attack (ABBA) that can generate visually natural motion-blurred adversarial examples even with relatively high…
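To make the abstract's core idea concrete: a motion-blurred image is produced by convolving the input with a linear motion kernel. The sketch below is purely illustrative and is not the paper's attack (ABBA additionally optimizes the blur adversarially and regularizes it with saliency); the function names and parameters here are assumptions for demonstration.

```python
import numpy as np

def motion_blur_kernel(length, angle_deg):
    """Build a normalized linear motion-blur kernel of size length x length.

    The kernel is a straight line through the center at the given angle;
    convolving with it simulates camera/object motion during exposure.
    """
    k = np.zeros((length, length))
    c = length // 2
    rad = np.deg2rad(angle_deg)
    dx, dy = np.cos(rad), np.sin(rad)
    # Rasterize the line by sampling points along the motion direction.
    for t in np.linspace(-c, c, 2 * length):
        x = int(round(c + t * dx))
        y = int(round(c + t * dy))
        if 0 <= x < length and 0 <= y < length:
            k[y, x] = 1.0
    return k / k.sum()  # normalize so brightness is preserved

def apply_motion_blur(img, kernel):
    """Convolve each channel of an HxWxC image with the kernel (edge padding)."""
    L = kernel.shape[0]
    pad = L // 2
    padded = np.pad(img.astype(float), ((pad, pad), (pad, pad), (0, 0)),
                    mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + L, j:j + L, :]
            # Weighted sum of the neighborhood under the kernel, per channel.
            out[i, j, :] = np.tensordot(kernel, patch, axes=([0, 1], [0, 1]))
    return out
```

In an attack setting, the kernel's length and angle (and, in ABBA, per-region blur guided by saliency) would be the parameters optimized to mislead the network while keeping the image visually natural.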



    Publications referenced by this paper.

    DiffChaser: Detecting Disagreements for Deep Neural Networks

    • Xie et al. (+4 authors, X. Li)
    • In IJCAI
    • 2019

    Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty

    • Zhang et al. (+6 authors, M. Sun)
    • In ICSE
    • 2020

    A Simple Pooling-Based Design for Real-Time Salient Object Detection

    Amora: Black-box Adversarial Morphing Attack

    Brain-inspired reverse adversarial examples

    DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better