• Corpus ID: 239009707

Adversarial Attacks on ML Defense Models Competition

@article{Dong2021AdversarialAO,
  title={Adversarial Attacks on ML Defense Models Competition},
  author={Yinpeng Dong and Qi-An Fu and Xiao Yang and Wenzhao Xiang and Tianyu Pang and Hang Su and Jun Zhu and Jiayu Tang and Yuefeng Chen and Xiaofeng Mao and Yuan He and Hui Xue and Chao Li and Ye Liu and Qilong Zhang and Lianli Gao and Yunrui Yu and Xitong Gao and Zhe Zhao and Daquan Lin and Jiadong Lin and Chuanbiao Song and Zihao Wang and Zhennan Wu and Yang Guo and Jiequan Cui and Xiaogang Xu and Pengguang Chen},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.08042}
}
  • Yinpeng Dong, Qi-An Fu, +25 authors Pengguang Chen
  • Published 15 October 2021
  • Computer Science
  • ArXiv
Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed in recent years to alleviate this problem. However, progress toward more robust models is often hampered by incomplete or incorrect robustness evaluations. To accelerate research on the reliable evaluation of adversarial robustness of current image classification defense models, the TSAIL group at Tsinghua University and the Alibaba… 


References

Showing 1-10 of 57 references
Benchmarking Adversarial Robustness on Image Classification
  • Yinpeng Dong, Qi-An Fu, +4 authors Jun Zhu
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR: A comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness on image classification tasks is established, and several important findings are drawn that can provide insights for future research.
Boosting Adversarial Attacks with Momentum
TLDR: A broad class of momentum-based iterative algorithms is proposed to boost adversarial attacks: integrating a momentum term into the iterative attack process stabilizes update directions and helps escape poor local maxima, resulting in more transferable adversarial examples.
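As an illustration of the momentum idea described above, here is a minimal MI-FGSM-style sketch in PyTorch; the model, epsilon, step size, and decay factor are placeholder assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8/255, alpha=2/255, steps=10, mu=1.0):
    """Momentum iterative FGSM (sketch, not the authors' reference code):
    accumulate the L1-normalized gradient into a momentum buffer and step
    in the sign of that buffer, then project back into the eps-ball."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # momentum buffer
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # normalize the gradient before accumulating it into the momentum
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        # project into the eps-ball around x and the valid pixel range
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv
```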
Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser
TLDR: High-level representation guided denoiser (HGD) is proposed as a defense for image classification, using a loss function defined as the difference between the target model's high-level outputs on the clean image and on the denoised image.
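A minimal sketch of the kind of high-level guidance loss the summary describes, assuming a denoiser network and a feature extractor taken from the target model at some high-level layer (both names are illustrative, not the paper's API):

```python
import torch

def hgd_loss(denoiser, feature_extractor, x_clean, x_adv):
    """High-level guidance loss (sketch): train the denoiser so that the
    target model's high-level features on the denoised adversarial image
    match its features on the clean image. `feature_extractor` stands in
    for the target model truncated at a high-level layer (an assumption)."""
    denoised = denoiser(x_adv)
    with torch.no_grad():
        target_feat = feature_extractor(x_clean)  # guidance from the clean image
    return (feature_extractor(denoised) - target_feat).abs().mean()
```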
Attack as defense: characterizing adversarial examples using robustness
TLDR: This work proposes a novel defense framework, named attack as defense (A2D), which detects adversarial examples by evaluating an example’s robustness, and shows that A2D is more effective than recent promising approaches.
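One plausible reading of the attack-as-defense idea, sketched below: score an input by how cheaply a reference attack can flip its prediction, and flag inputs that flip too easily. The PGD-style probe, its hyperparameters, and the threshold rule are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def attack_cost(model, x, eps=8/255, alpha=2/255, max_steps=20):
    """Return the number of PGD-style steps needed to change the model's
    prediction on x (a rough robustness score; illustrative only)."""
    y0 = model(x).argmax(dim=1)
    x_adv = x.clone().detach()
    for step in range(1, max_steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y0)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
        if (model(x_adv).argmax(dim=1) != y0).all():
            return step
    return max_steps

def is_adversarial(model, x, threshold=5):
    """Flag x as adversarial if it is unusually easy to attack (assumed rule)."""
    return attack_cost(model, x) <= threshold
```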
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR: This work studies the adversarial robustness of neural networks through the lens of robust optimization and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
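For reference, the robust-optimization (saddle-point) view mentioned in the summary is commonly written as the following min-max objective; the L-infinity ball and the loss are generic placeholders:

```latex
% Adversarial training as a saddle-point problem (generic form)
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[ \max_{\|\delta\|_{\infty} \le \epsilon}
        \mathcal{L}\big(f_{\theta}(x+\delta),\, y\big) \Big]
```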
Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks
  • Jianyu Wang
  • Computer Science
    2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2019
TLDR: Experiments on the (computationally) very challenging ImageNet dataset further demonstrate the effectiveness of the fast method, showing that random start and the most-confusing-target attack effectively prevent the label-leaking and gradient-masking problems.
Towards Evaluating the Robustness of Neural Networks
TLDR: It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Feature Denoising for Improving Adversarial Robustness
TLDR: It is suggested that adversarial perturbations on images lead to noise in the features constructed by these networks, and new network architectures are developed that increase adversarial robustness by performing feature denoising.
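To make the idea of a denoising block concrete, here is a simplified stand-in in PyTorch: smooth the feature map, project it with a 1x1 convolution, and add it back residually. The paper explores non-local means and other filters; this mean-filter block only illustrates the wrapper structure and is not the exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDenoiseBlock(nn.Module):
    """Simplified feature-denoising block (sketch): smooth the feature map,
    pass it through a 1x1 conv, and add it back residually. The actual work
    uses non-local means among other filters; this is only a stand-in."""
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        smoothed = F.avg_pool2d(feat, kernel_size=3, stride=1, padding=1)
        return feat + self.proj(smoothed)  # residual connection keeps the signal
```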
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
TLDR: A translation-invariant attack method is proposed to generate more transferable adversarial examples against defense models; it fools eight state-of-the-art defenses at an 82% success rate on average based only on transferability, demonstrating the insecurity of current defense techniques.
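A minimal sketch of the translation-invariant trick as it is commonly implemented: convolve the input gradient with a smoothing kernel before the sign step, which approximates attacking an ensemble of translated inputs. The Gaussian kernel size and sigma below are assumptions, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(channels, size=15, sigma=3.0):
    """Depthwise Gaussian kernel used to smooth gradients (assumed settings)."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords**2 / (2 * sigma**2))
    k2d = g[:, None] * g[None, :]
    k2d = k2d / k2d.sum()
    return k2d.view(1, 1, size, size).repeat(channels, 1, 1, 1)

def ti_fgsm_step(model, x, y, eps=8/255, kernel=None):
    """One translation-invariant FGSM step (sketch): convolve the gradient
    with a smoothing kernel before the sign step, approximating an ensemble
    of translated copies of the input."""
    channels = x.shape[1]
    if kernel is None:
        kernel = gaussian_kernel(channels).to(x.device)
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    grad = F.conv2d(grad, kernel, padding=kernel.shape[-1] // 2, groups=channels)
    return torch.clamp(x.detach() + eps * grad.sign(), 0, 1)
```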
Towards Robust Detection of Adversarial Examples
TLDR: This paper presents a novel training procedure and a thresholding test strategy towards robust detection of adversarial examples, and proposes to minimize the reverse cross-entropy (RCE), which encourages a deep network to learn latent representations that better distinguish adversarial examples from normal ones.
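For concreteness, a sketch of the reverse cross-entropy as described: cross-entropy against a "reversed" label distribution that puts zero mass on the true class and uniform mass on all other classes. Treat this definition as an assumption and verify it against the original paper before relying on it.

```python
import torch
import torch.nn.functional as F

def reverse_cross_entropy(logits, labels):
    """Reverse cross-entropy (sketch): cross-entropy against a reversed label
    distribution with zero mass on the true class and uniform mass 1/(K-1)
    on every other class. Definition assumed from the paper's description."""
    num_classes = logits.shape[1]
    log_probs = F.log_softmax(logits, dim=1)
    reversed_labels = torch.full_like(log_probs, 1.0 / (num_classes - 1))
    reversed_labels.scatter_(1, labels.unsqueeze(1), 0.0)  # zero out true class
    return -(reversed_labels * log_probs).sum(dim=1).mean()
```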