
Random Directional Attack for Fooling Deep Neural Networks

@article{Luo2019RandomDA,
  title={Random Directional Attack for Fooling Deep Neural Networks},
  author={W. Luo and C. Wu and N. Zhou and L. Ni},
  journal={ArXiv},
  year={2019},
  volume={abs/1908.02658}
}
Deep neural networks (DNNs) have been widely used in many fields such as image processing and speech recognition; however, they are vulnerable to adversarial examples, which is a security issue worthy of attention. [...] Rather than limiting the attack to the gradient direction, RDA searches for an attack direction based on hill climbing and uses multiple strategies to avoid local optima that cause attack failure. Compared with state-of-the-art gradient-based methods, the attack performance of…
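The abstract only sketches the key idea at a high level. The snippet below is not the authors' RDA algorithm; it is a minimal illustrative sketch of the general idea it describes, a random-direction hill climb with random restarts to escape local optima. The placeholder linear `model_logits`, the margin-based objective, the L∞ budget `eps`, the step size, and the restart count are all assumptions made for the example.

```python
import numpy as np

# Placeholder model: a fixed random linear classifier standing in for the
# target DNN (the paper attacks real networks; this is only for illustration).
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))

def model_logits(x):
    return W @ x

def margin_loss(x, true_label):
    """Attack objective: positive value means the input is misclassified."""
    logits = model_logits(x)
    other = np.max(np.delete(logits, true_label))
    return other - logits[true_label]

def random_directional_attack(x, true_label, eps=0.3, step=0.05,
                              iters=200, restarts=5):
    """Hill-climbing sketch: try random directions, keep a step only if it
    increases the attack objective, and restart from the clean input several
    times so a single unlucky climb does not get stuck in a local optimum."""
    best_adv, best_loss = x, margin_loss(x, true_label)
    for _ in range(restarts):
        adv = x.copy()
        loss = margin_loss(adv, true_label)
        for _ in range(iters):
            direction = rng.normal(size=x.shape)
            direction /= np.linalg.norm(direction)
            # Stay inside an L-infinity ball of radius eps around the input.
            candidate = np.clip(adv + step * direction, x - eps, x + eps)
            cand_loss = margin_loss(candidate, true_label)
            if cand_loss > loss:          # hill-climbing acceptance rule
                adv, loss = candidate, cand_loss
            if loss > 0:                  # misclassified: attack succeeded
                return adv
        if loss > best_loss:
            best_adv, best_loss = adv, loss
    return best_adv

# Usage example on a random "image" vector.
x = rng.uniform(size=784)
adv = random_directional_attack(x, true_label=3)
print("attack objective:", margin_loss(adv, true_label=3))
```

Because the search direction is sampled rather than derived from gradients, a sketch like this needs only black-box access to the model's outputs; the restarts are one simple stand-in for the multiple local-optima-avoidance strategies the abstract mentions.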
