Corpus ID: 52285766

Robust Adversarial Perturbation on Deep Proposal-based Models

@article{Li2018RobustAP,
  title={Robust Adversarial Perturbation on Deep Proposal-based Models},
  author={Yuezun Li and Daniel Tian and Ming-Ching Chang and Xiao Bian and Siwei Lyu},
  journal={ArXiv},
  year={2018},
  volume={abs/1809.05962}
}
Adversarial noises are useful tools to probe the weakness of deep learning based computer vision algorithms. In this paper, we describe a robust adversarial perturbation (R-AP) method to attack deep proposal-based object detectors and instance segmentation algorithms. Our method focuses on attacking the common component in these algorithms, namely Region Proposal Network (RPN), to universally degrade their performance in a black-box fashion. To do so, we design a loss function that combines a…
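As a rough, illustrative sketch of the idea described in the abstract (not the authors' R-AP implementation, whose full loss also disturbs bounding-box regression), the Python/PyTorch snippet below perturbs an input image so as to suppress the objectness scores of a toy RPN-style head. The toy model, loss, step size, and perturbation budget are all assumptions made for illustration.

# Illustrative sketch only: a generic sign-gradient attack that suppresses the
# objectness scores of a toy RPN-style head. The toy model and hyperparameters
# are assumptions, not the paper's released code.
import torch
import torch.nn as nn

class ToyRPNHead(nn.Module):
    """Stand-in for a Region Proposal Network head: per-location objectness logits."""
    def __init__(self, in_ch=3, num_anchors=9):
        super().__init__()
        self.body = nn.Conv2d(in_ch, 64, kernel_size=3, padding=1)
        self.obj = nn.Conv2d(64, num_anchors, kernel_size=1)

    def forward(self, x):
        return self.obj(torch.relu(self.body(x)))  # objectness logits per anchor/location

def suppress_objectness(model, image, eps=8 / 255, step=1 / 255, iters=40):
    """Iterative sign-gradient attack that pushes all objectness scores toward background."""
    model.eval()
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        # Mean objectness probability; minimizing it starves the downstream
        # detector or instance-segmentation head of region proposals.
        loss = torch.sigmoid(model(adv)).mean()
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - step * grad.sign()                # descend on objectness
            adv = image + (adv - image).clamp(-eps, eps)  # stay within the L_inf budget
            adv = adv.clamp(0.0, 1.0)                     # keep a valid image
    return adv.detach()

if __name__ == "__main__":
    rpn = ToyRPNHead()
    img = torch.rand(1, 3, 224, 224)
    adv = suppress_objectness(rpn, img)
    print("max objectness before:", torch.sigmoid(rpn(img)).max().item())
    print("max objectness after: ", torch.sigmoid(rpn(adv)).max().item())

Because the RPN is the component shared by proposal-based pipelines such as Faster R-CNN, R-FCN, and Mask R-CNN, suppressing proposal objectness in this way reflects the abstract's intuition of attacking the common component rather than any single detector head.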
Contextual Adversarial Attacks For Object Detection
Recent advances in adversarial attack techniques have demonstrated successful attacks on high-quality CNN-based object detectors. However, in the literature, the adversarial attack algorithms on…
Fast Local Attack: Generating Local Adversarial Examples for Object Detectors
TLDR
This work leverages higher-level semantic information to generate highly aggressive local perturbations for anchor-free object detectors, achieving better black-box and transfer attack performance.
Transferable Adversarial Attacks for Image and Video Object Detection
TLDR
The proposed method is based on the Generative Adversarial Network (GAN) framework, combining a high-level class loss and a low-level feature loss to jointly train the adversarial example generator; it can efficiently generate image and video adversarial examples with better transferability.
MI-FGSM on Faster R-CNN Object Detector
The adversarial examples show the vulnerability of deep neural networks, which has made adversarial attacks a widespread concern. However, most attack methods are designed for image classification models.
G-UAP: Generic Universal Adversarial Perturbation that Fools RPN-based Detectors
TLDR
This paper presents a novel and effective approach called G-UAP to craft universal adversarial perturbations, which can explicitly degrade the detection accuracy of a detector on a wide range of image samples.
Using Feature Alignment Can Improve Clean Average Precision and Adversarial Robustness in Object Detection
TLDR
The detector's clean AP and robustness can be improved by aligning the features of the middle layer of the network; two feature alignment methods are proposed, namely Knowledge-Distilled Feature Alignment (KDFA) and Self-Supervised Feature Alignment (SSFA).
Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection
TLDR
Surprisingly, the generated adversarial examples are not only able to effectively attack the targeted anchor-free object detector but can also be transferred to attack other object detectors, even anchor-based detectors such as Faster R-CNN.
Class-Aware Robust Adversarial Training for Object Detection
TLDR
A novel class-aware robust adversarial training paradigm for the object detection task that effectively and evenly improves the adversarial robustness of trained models for all object classes, compared with previous defense methods.
Towards Adversarially Robust Object Detection
  • Haichao Zhang, Jianyu Wang
  • Computer Science, Engineering
  • 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2019
TLDR
This work revisits and systematically analyzes object detectors and many recently developed attacks from the perspective of model robustness, and develops an adversarial training approach that can leverage multiple sources of attacks to improve the robustness of detection models.
Adversarial Attacks for Object Detection
TLDR
This paper comprehensively analyzes several popular adversarial attacks for object detection, mainly against Faster R-CNN and YOLO models, and investigates the resistance of these adversarial examples against some common defense methods.

References

Showing 1-10 of 25 references
Adversarial Attacks Beyond the Image Space
TLDR
Though image-space adversaries can be interpreted as per-pixel albedo change, it is verified that they cannot be well explained along these physically meaningful dimensions, which often have a non-local effect.
Adversarial Examples for Semantic Segmentation and Object Detection
TLDR
This paper proposes a novel algorithm named Dense Adversary Generation (DAG), which applies to the state-of-the-art networks for segmentation and detection, and finds that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks.
Universal Adversarial Perturbations
TLDR
The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundaries of classifiers and outlines potential security breaches: single directions in the input space that adversaries can exploit to break a classifier on most natural images.
Robust Physical-World Attacks on Deep Learning Models
TLDR
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions, and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.
The Limitations of Deep Learning in Adversarial Settings
TLDR
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
TLDR
The DeepFool algorithm is proposed to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers; it outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.
Explaining and Harnessing Adversarial Examples
TLDR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
Adversarial examples in the physical world
TLDR
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through a camera, which shows that even in physical-world scenarios, machine learning systems are vulnerable to adversarial examples.
R-FCN: Object Detection via Region-based Fully Convolutional Networks
TLDR
This work presents region-based, fully convolutional networks for accurate and efficient object detection, and proposes position-sensitive score maps to address a dilemma between translation invariance in image classification and translation variance in object detection.
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
TLDR
This work introduces a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals, and further merges RPN and Fast R-CNN into a single network by sharing their convolutional features.