Corpus ID: 243848218

Natural Adversarial Objects

@article{Lau2021NaturalAO,
  title={Natural Adversarial Objects},
  author={Felix Lau and Nishant Subramani and Sasha Harrison and Aerin Kim and Elliot Branson and Rosanne Liu},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.04204}
}
Although state-of-the-art object detection methods have shown compelling performance, these models are often not robust to adversarial attacks and out-of-distribution data. We introduce a new dataset, Natural Adversarial Objects (NAO), to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence. The mean average…
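As a concrete illustration of the evaluation described in the abstract, the following is a minimal sketch of running a pretrained torchvision detector on a single image and inspecting its highest-confidence predictions. The filename, the choice of Faster R-CNN, and the preprocessing are illustrative assumptions, not the paper's exact protocol.

# Minimal sketch (assumes torchvision >= 0.13; "nao_image.jpg" is a
# hypothetical local file, not part of the NAO release).
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("nao_image.jpg")  # hypothetical NAO-style image
with torch.no_grad():
    pred = model([preprocess(img)])[0]

# On NAO-style images the labels are often wrong even though the scores
# are high, which is the failure mode the dataset is built to expose.
for label, score in zip(pred["labels"][:5], pred["scores"][:5]):
    print(weights.meta["categories"][label], float(score))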

Trapped in texture bias? A large scale comparison of deep instance segmentation

YOLACT++, SOTR, and SOLOv2 are significantly more robust to out-of-distribution texture than other frameworks, and it is shown that deeper and dynamic architectures improve robustness, whereas training schedules, data augmentation, and pre-training have only a minor impact.

Evaluating Out-of-Distribution Performance on Document Image Classifiers

This paper curates and releases a new out-of-distribution benchmark for evaluating out-of-distribution performance of document classifiers, providing researchers with a valuable new resource for analyzing out-of-distribution performance on document classifiers.

References

Showing 1-10 of 31 references

Natural Adversarial Examples

This work introduces two challenging datasets that reliably cause machine learning model performance to substantially degrade, and curates an adversarial out-of-distribution detection dataset called IMAGENET-O, the first out-of-distribution detection dataset created for ImageNet models.
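A common baseline reported on out-of-distribution detection benchmarks such as IMAGENET-O scores each input by its negated maximum softmax probability. The sketch below shows that baseline, not the paper's full evaluation pipeline; `model` and `loader` are generic placeholders.

# Maximum-softmax-probability (MSP) anomaly scoring.
import torch
import torch.nn.functional as F

def msp_anomaly_scores(model, loader, device="cpu"):
    """Return one score per example; higher = more likely out-of-distribution."""
    model.eval().to(device)
    scores = []
    with torch.no_grad():
        for images, _ in loader:
            logits = model(images.to(device))
            msp = F.softmax(logits, dim=1).max(dim=1).values
            scores.append(-msp.cpu())  # low confidence => more anomalous
    return torch.cat(scores)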

PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples

Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of…

Measuring the tendency of CNNs to Learn Surface Statistical Regularities

Deep CNNs are known to exhibit the following peculiarity: on the one hand they generalize extremely well to a test set, while on the other hand they are extremely sensitive to so-called adversarial…

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

The proposed Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against adversarial perturbations, is empirically shown to be consistently effective against different attack methods and to improve on existing defense strategies.
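Defense-GAN's core idea is to project an incoming image onto the range of a trained generator before classifying it. The sketch below shows that projection step under simplifying assumptions: a single random restart and plain gradient descent rather than the paper's exact optimizer settings, with `generator` as a placeholder module.

# Projection step: find a latent z whose generated image is closest to x,
# then classify generator(z) instead of x. Single restart, plain SGD.
import torch

def project_onto_generator(x, generator, latent_dim, steps=200, lr=0.05):
    z = torch.randn(x.shape[0], latent_dim, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((generator(z) - x) ** 2).flatten(1).sum(dim=1).mean()
        loss.backward()
        opt.step()
    # The projected image lies on the generator's (clean) manifold,
    # ideally stripping off adversarial perturbations.
    return generator(z).detach()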

Do ImageNet Classifiers Generalize to ImageNet?

The results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly "harder" images than those found in the original test sets.

Towards Evaluating the Robustness of Neural Networks

It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
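The attacks referenced here are optimization-based. As a rough illustration, the sketch below implements a simplified margin-based L2 attack in their spirit, with a single fixed trade-off constant and no binary search or tanh reparameterization, so it should be read as an illustration rather than the paper's exact algorithm.

# Simplified margin-based L2 attack (untargeted); c, steps, lr, and kappa
# are illustrative values, not the paper's settings.
import torch

def margin_attack(model, x, y, c=1.0, steps=100, lr=0.01, kappa=0.0):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model((x + delta).clamp(0, 1))
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        other_logit = logits.scatter(1, y.unsqueeze(1), float("-inf")).max(dim=1).values
        # Margin term: positive while the true class still wins.
        margin = torch.clamp(true_logit - other_logit, min=-kappa)
        loss = (delta ** 2).flatten(1).sum(dim=1).mean() + c * margin.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).clamp(0, 1).detach()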

Explaining and Harnessing Adversarial Examples

It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
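The linearity argument leads directly to the fast gradient sign method, x_adv = x + eps * sign(grad_x loss). A minimal sketch, assuming a generic PyTorch classifier and inputs scaled to [0, 1]:

# Fast gradient sign method (FGSM); eps is illustrative.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Single step in the direction that locally increases the loss fastest
    # under an L-infinity constraint.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()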

Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet

A high-performance DNN architecture on ImageNet whose decisions are considerably easier to explain is introduced; it behaves similarly to state-of-the-art deep neural networks such as VGG-16, ResNet-152, or DenseNet-169 in terms of feature sensitivity, error distribution, and interactions between image parts.

Ensemble Adversarial Training: Attacks and Defenses

This work finds that adversarial training remains vulnerable to black-box attacks, in which perturbations computed on undefended models are transferred to the defended model, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step.
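That single-step attack prepends a small random step to the usual gradient-sign step (often written R+FGSM). A minimal sketch, with eps and alpha chosen purely for illustration:

# Random step followed by a gradient-sign step; eps/alpha are illustrative.
import torch
import torch.nn.functional as F

def rand_fgsm(model, x, y, eps=0.06, alpha=0.03):
    # Small random step first, to escape the non-smooth vicinity of x.
    x_prime = (x + alpha * torch.sign(torch.randn_like(x))).clone().detach()
    x_prime.requires_grad_(True)
    loss = F.cross_entropy(model(x_prime), y)
    loss.backward()
    return (x_prime + (eps - alpha) * x_prime.grad.sign()).clamp(0, 1).detach()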

Stochastic Activation Pruning for Robust Adversarial Defense

Stochastic Activation Pruning (SAP) is proposed, a mixed strategy for adversarial defense that prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate.
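As a rough sketch of the pruning step described above, the function below samples activation indices with probability proportional to magnitude, keeps only the sampled activations, and rescales the survivors by their inverse keep-probability; the shape convention and sample count are illustrative assumptions, not the paper's exact settings.

# Stochastic activation pruning for one layer's activations h of shape
# (batch, features); num_samples is an illustrative hyperparameter.
import torch

def sap(h, num_samples):
    p = h.abs() / h.abs().sum(dim=1, keepdim=True).clamp_min(1e-12)
    # Sample indices with replacement, proportional to activation magnitude.
    idx = torch.multinomial(p, num_samples, replacement=True)
    keep = torch.zeros_like(h).scatter_(1, idx, 1.0).bool()
    # Rescale survivors by 1 / P(kept) so the layer is unbiased in expectation.
    scale = 1.0 / (1.0 - (1.0 - p) ** num_samples).clamp_min(1e-12)
    return torch.where(keep, h * scale, torch.zeros_like(h))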