Robust and Accurate Object Detection via Adversarial Learning

@inproceedings{Chen2021RobustAA,
  title={Robust and Accurate Object Detection via Adversarial Learning},
  author={Xiangning Chen and Cihang Xie and Mingxing Tan and Li Zhang and Cho-Jui Hsieh and Boqing Gong},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={16617-16626}
}
Data augmentation has become a de facto component for training high-performance deep image classifiers, but its potential is under-explored for object detection. Noting that most state-of-the-art object detectors benefit from fine-tuning a pre-trained classifier, we first study how the classifiers’ gains from various data augmentations transfer to object detection. The results are discouraging: the gains, whether measured by accuracy or robustness, diminish after fine-tuning. This work instead…

Adv-Cut Paste: Semantic adversarial class specific data augmentation technique for object detection

This work introduces a framework that generates harder examples for a specific object class, together with an adversarial attack for the object detection task, and demonstrates a substantial improvement in average precision (AP) for a single class of the COCO dataset.
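
To make the cut-and-paste idea concrete (the paper's contribution is selecting adversarially hard pastes, which is not modeled here), a minimal PyTorch sketch; the function name, tensor layout, and (x1, y1, x2, y2) box format are all assumptions:

    import torch

    def cut_paste(src_img, src_box, dst_img):
        # Copy the object patch from the source image into the destination
        # image; a real pipeline would also transform the patch and update
        # the destination's annotation list.
        x1, y1, x2, y2 = src_box
        out = dst_img.clone()
        out[:, y1:y2, x1:x2] = src_img[:, y1:y2, x1:x2]
        return out, src_box  # pasted image plus the new box label

    # Example: paste a 40x40 region between two random 3x128x128 images.
    aug, box = cut_paste(torch.rand(3, 128, 128), (10, 10, 50, 50), torch.rand(3, 128, 128))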

Robust and Accurate Object Detection via Self-Knowledge Distillation

UDFA surpasses standard training and state-of-the-art adversarial training methods for object detection, and explores self-knowledge distillation from a new angle by decoupling the original branch into a self-supervised learning branch and a new self-knowledge distillation branch.
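
A minimal sketch of feature-level self-knowledge distillation, assuming a frozen snapshot of the student backbone as the teacher; the actual UDFA branch design differs, and `backbone` and `alpha` are placeholders:

    import copy
    import torch
    import torch.nn.functional as F

    class SelfDistiller(torch.nn.Module):
        def __init__(self, backbone, alpha=1.0):
            super().__init__()
            self.student = backbone
            # The "teacher" is a frozen copy of the model itself.
            self.teacher = copy.deepcopy(backbone).eval()
            for p in self.teacher.parameters():
                p.requires_grad_(False)
            self.alpha = alpha

        def forward(self, images):
            s = self.student(images)          # trainable branch
            with torch.no_grad():
                t = self.teacher(images)      # distillation branch
            return self.alpha * F.mse_loss(s, t)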

Adversarially Trained Object Detector for Unsupervised Domain Adaptation

This study establishes that adversarially trained detectors achieve improved detection performance in target domains that are significantly shifted from the source domains, and proposes a method that combines adversarial training and feature alignment to better align the robust features with the target domain.
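
The feature-alignment half of such a method is commonly realized with a domain discriminator trained through a gradient-reversal layer; the sketch below shows that generic construction, not this paper's exact architecture:

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        # Identity in the forward pass, negated gradient in the backward
        # pass, so the feature extractor learns to fool the discriminator.
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad):
            return -ctx.lam * grad, None

    class DomainDiscriminator(nn.Module):
        # Predicts source vs. target domain from backbone features.
        def __init__(self, dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

        def forward(self, feat, lam=1.0):
            return self.net(GradReverse.apply(feat, lam))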

Enhance the Visual Representation via Discrete Adversarial Training

This work proposes Discrete Adversarial Training (DAT), a plug-and-play technique for enhancing visual representations that achieves significant improvements on multiple tasks, including image classification, object detection, and self-supervised learning.

Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios

This paper identifies a serious objectness-related adversarial vulnerability in YOLO detectors, presents an effective attack strategy targeting the objectness aspect of visual detection in autonomous vehicles, and proposes a new objectness-aware adversarial training approach for visual detection.
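
A hedged sketch of what an objectness-targeted attack can look like: PGD-style steps that push the detector's objectness scores down so that objects vanish. The `model` interface returning raw per-anchor objectness logits is an assumption, not YOLO's actual API:

    import torch

    def objectness_attack(model, images, eps=8/255, steps=10, lr=2/255):
        x = images.clone().detach()
        adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(steps):
            adv.requires_grad_(True)
            obj_logits = model(adv)                # assumed shape: (N, num_anchors)
            loss = obj_logits.sigmoid().sum()      # total objectness confidence
            grad = torch.autograd.grad(loss, adv)[0]
            adv = adv.detach() - lr * grad.sign()  # descend: suppress objectness
            adv = (x + (adv - x).clamp(-eps, eps)).clamp(0, 1)
        return adv.detach()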

Towards Domain Generalization in Object Detection

This paper formulates the DGOD problem, proposes a comprehensive evaluation benchmark, and introduces a novel method named Region Aware Proposal reweighTing (RAPT) to eliminate dependence within RoI features, demonstrating that it outperforms other state-of-the-art counterparts.

Advancing Deep Metric Learning Through Multiple Batch Norms And Multi-Targeted Adversarial Examples

MDProp is a framework that simultaneously improves the performance of DML models on clean data and on inputs following multiple distributions. It generates multi-targeted adversarial examples (AXs) in feature space to apply targeted regularization to the denser regions of the training model’s embedding space, and the improved embedding-space densities contribute to better generalization in the trained models.
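
The multiple-batch-norm idea can be sketched as routing each input distribution (clean images, single-targeted AXs, multi-targeted AXs) through its own normalization statistics; the module below is illustrative rather than the MDProp code:

    import torch.nn as nn

    class MultiBN(nn.Module):
        # One BatchNorm per input distribution, so mixing clean and
        # adversarial batches does not corrupt the running statistics.
        def __init__(self, num_features, num_domains=3):
            super().__init__()
            self.bns = nn.ModuleList(nn.BatchNorm2d(num_features) for _ in range(num_domains))

        def forward(self, x, domain=0):
            return self.bns[domain](x)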

Fast AdvProp

Fast AdvProp is introduced; it aggressively revamps AdvProp’s costly training components, rendering the method nearly as cheap as vanilla training, while further improving model performance on a spectrum of visual benchmarks without incurring extra training cost.

Advanced Data Augmentation Approaches: A Comprehensive Survey and Future Directions

A novel and comprehensive taxonomy of the reviewed data augmentation techniques is provided, together with the strengths and weaknesses (wherever possible) of each technique, as well as comprehensive results on the effect of data augmentation in three popular computer vision tasks.

When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations

By promoting smoothness with a recently proposed sharpness-aware optimizer, this paper substantially improves the accuracy and robustness of ViTs and MLP-Mixers on various tasks spanning supervised, adversarial, contrastive, and transfer learning.
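
The sharpness-aware optimizer in question (SAM) performs a two-step update: climb to the worst-case weights within a small ball of radius rho, then descend using the gradient measured there. A sketch, assuming `loss_fn` recomputes the current batch's loss on demand:

    import torch

    def sam_step(model, loss_fn, optimizer, rho=0.05):
        optimizer.zero_grad()
        loss_fn(model).backward()                       # gradients at w

        grads = [p.grad for p in model.parameters() if p.grad is not None]
        norm = torch.norm(torch.stack([g.norm() for g in grads]))
        eps = []
        with torch.no_grad():
            for p in model.parameters():
                if p.grad is None:
                    eps.append(None)
                    continue
                e = rho * p.grad / (norm + 1e-12)       # ascent direction
                p.add_(e)                               # move to w + e
                eps.append(e)

        optimizer.zero_grad()
        loss_fn(model).backward()                       # gradients at w + e
        with torch.no_grad():
            for p, e in zip(model.parameters(), eps):
                if e is not None:
                    p.sub_(e)                           # restore w
        optimizer.step()                                # descend with sharp gradients
        optimizer.zero_grad()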

References

Showing 1-10 of 43 references

Learning Data Augmentation Strategies for Object Detection

This work investigates how learned, specialized data augmentation policies improve generalization performance for detection models, and reveals that a learned augmentation policy is superior to state-of-the-art architecture regularization methods for object detection, even when considering strong baselines.

Towards Adversarially Robust Object Detection

  • Haichao Zhang, Jianyu Wang
  • 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019
This work revisits and systematically analyzes object detectors and many recently developed attacks from the perspective of model robustness, and develops an adversarial training approach that can leverage multiple sources of attacks to improve the robustness of detection models.
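
One way to leverage multiple attack sources, sketched under a hypothetical detector interface that returns a dict of per-task losses in training mode (the `loss_cls` and `loss_loc` keys are assumptions): craft one FGSM example per task loss and keep whichever hurts the detector more:

    import torch

    def multi_source_adv_example(model, images, targets, eps=8/255):
        candidates = []
        for task in ("loss_cls", "loss_loc"):            # assumed loss keys
            x = images.clone().detach().requires_grad_(True)
            loss = model(x, targets)[task]
            grad = torch.autograd.grad(loss, x)[0]
            candidates.append((x + eps * grad.sign()).clamp(0, 1).detach())

        # Keep the candidate with the larger total detection loss.
        with torch.no_grad():
            totals = [sum(model(c, targets).values()) for c in candidates]
        return candidates[0] if totals[0] >= totals[1] else candidates[1]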

Adversarial Examples for Semantic Segmentation and Object Detection

This paper proposes a novel algorithm named Dense Adversary Generation (DAG), which applies to the state-of-the-art networks for segmentation and detection, and finds that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks.

Robust Physical Adversarial Attack on Faster R-CNN Object Detector

This work can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.

Adversarial AutoAugment

An adversarial method to arrive at a computationally affordable solution called Adversarial AutoAugment, which can simultaneously optimize the target-related objective and the augmentation policy search loss, demonstrating significant performance improvements over the state of the art.

Adversarial Examples Improve Image Recognition

This work proposes AdvProp, an enhanced adversarial training scheme that treats adversarial examples as additional training examples to prevent overfitting, and shows that AdvProp improves a wide range of models on various image recognition tasks, with larger models benefiting more.
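
In outline, an AdvProp step runs each batch twice, routing clean and adversarial inputs through separate batch-norm statistics; `make_adv` and the `bn_mode` switch below are hypothetical stand-ins for the paper's auxiliary-BN machinery:

    import torch

    def advprop_step(model, images, labels, loss_fn, optimizer, make_adv):
        adv = make_adv(model, images, labels)    # e.g. a PGD attack

        model.bn_mode = "main"                   # clean statistics
        loss = loss_fn(model(images), labels)

        model.bn_mode = "aux"                    # adversarial statistics
        loss = loss + loss_fn(model(adv), labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()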

Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming

It is shown that a range of standard object detection models suffer a severe performance loss on corrupted images (down to 30-60% of the original performance); however, a simple data augmentation trick, stylizing the training images, leads to a substantial increase in robustness across corruption types, severities, and datasets.

Focal Loss for Dense Object Detection

This paper proposes to address the extreme foreground-background class imbalance encountered during training of dense detectors by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples, and develops a novel Focal Loss, which focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training.
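
The loss itself is compact. A sigmoid-based implementation following the paper's formulation FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t), with the defaults reported for RetinaNet (alpha=0.25, gamma=2); normalization by the number of assigned anchors is left to the caller:

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        # targets are 0/1 per anchor-class entry; ce is the usual BCE term.
        ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p = torch.sigmoid(logits)
        p_t = p * targets + (1 - p) * (1 - targets)          # prob of the true class
        alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
        return (alpha_t * (1 - p_t) ** gamma * ce).sum()     # down-weights easy examples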

AutoAugment: Learning Augmentation Strategies From Data

This paper describes a simple procedure called AutoAugment to automatically search for improved data augmentation policies, which achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data).
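
The learned ImageNet policy now ships with torchvision, so applying it is a one-liner inside a transform pipeline:

    from torchvision import transforms

    train_tf = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.AutoAugment(policy=transforms.AutoAugmentPolicy.IMAGENET),
        transforms.ToTensor(),
    ])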

Adversarial Machine Learning at Scale

This research applies adversarial training to ImageNet, finds that single-step attacks are best for mounting black-box attacks, and resolves a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples.
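
The single-step attack referred to here is FGSM, x_adv = x + eps * sign(grad_x L); a minimal version for a classifier:

    import torch

    def fgsm(model, x, y, loss_fn, eps=8/255):
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        # One gradient-sign step, clamped back to the valid pixel range.
        return (x + eps * grad.sign()).clamp(0, 1).detach()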