Corpus ID: 235390889

Deep Neural Network Loses Attention to Adversarial Images

@article{Kotyan2021DeepNN,
  title={Deep Neural Network Loses Attention to Adversarial Images},
  author={Shashank Kotyan and Danilo Vasconcellos Vargas},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.05657}
}
Adversarial algorithms have been shown to be effective against neural networks on a variety of tasks. For image classification, some adversarial algorithms perturb all the pixels in the image minimally, while others perturb only a few pixels strongly. However, very little is known about why such diverse adversarial samples exist. Recently, Vargas et al. showed that the existence of these adversarial samples might be… 
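The abstract contrasts two broad families of attacks: ones that spread a tiny change over every pixel (a small L-inf perturbation) and ones that change very few pixels by a large amount (a small L0 perturbation). The toy PyTorch sketch below is purely my own illustration, not material from the paper; it just makes the contrast concrete by measuring both kinds of perturbation.

```python
# Toy illustration (not from the paper): the two perturbation styles from the
# abstract, compared by their maximum change and by how many pixels they touch.
import torch

image = torch.rand(3, 32, 32)                 # hypothetical CIFAR-10-sized input

# "Perturb all the pixels minimally": noise bounded by eps in the L-inf norm.
eps = 8 / 255
delta_linf = torch.empty_like(image).uniform_(-eps, eps)

# "Perturb few pixels strongly": a single pixel changed by a large amount.
delta_l0 = torch.zeros_like(image)
delta_l0[:, 16, 16] = 1.0

for name, delta in [("L-inf style", delta_linf), ("L0 style", delta_l0)]:
    changed = (delta.abs().amax(dim=0) > 0).sum().item()   # pixels touched
    print(f"{name}: max|delta| = {delta.abs().max():.3f}, pixels changed = {changed}")
```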

References

Showing 1–10 of 23 references

Understanding the One Pixel Attack: Propagation Maps and Locality Analysis

Propagation Maps reveal that even in extremely deep networks such as ResNet, a modification to a single pixel easily propagates all the way to the last layer; this initially local perturbation is shown to spread into a global one, reaching absolute difference values close to the maximum value of the original feature maps in a given layer.
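As a rough, self-contained sketch of the idea (my own illustration, not the paper's code), one can track how the effect of a one-pixel change grows layer by layer by comparing the feature maps of a clean and a perturbed input; the toy CNN, input size, and threshold below are assumptions.

```python
# Crude "propagation map" sketch: layer-wise difference between the feature
# maps of a clean image and a one-pixel-perturbed copy. Toy model, assumed sizes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)

x = torch.rand(1, 3, 32, 32)
x_adv = x.clone()
x_adv[0, :, 16, 16] = 1.0                    # one-pixel perturbation

clean, adv = x, x_adv
with torch.no_grad():
    for i, layer in enumerate(model):
        clean, adv = layer(clean), layer(adv)
        diff = (clean - adv).abs()
        spread = (diff.amax(dim=1) > 1e-6).float().mean().item()
        print(f"layer {i}: max |diff| = {diff.max():.3f}, "
              f"fraction of positions affected = {spread:.2f}")
```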

Towards Deep Learning Models Resistant to Adversarial Attacks

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
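Read as robust optimization, the inner problem is to find the worst-case perturbation inside a small L-inf ball, typically with projected gradient descent (PGD). The sketch below shows that inner step; eps, step size, and iteration count are common defaults I assume, not values taken from the paper.

```python
# Hedged sketch of the PGD inner maximization used in robust optimization.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Return an adversarial example inside an eps L-inf ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project to the ball
            x_adv = x_adv.clamp(0, 1)                               # keep a valid image
    return x_adv.detach()

# Adversarial training then minimizes the loss on pgd_attack(model, x, y)
# instead of on the clean batch.
```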

Explaining and Harnessing Adversarial Examples

It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
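The linearity argument motivates the fast gradient sign method (FGSM): a single step of size eps in the direction of the sign of the input gradient. A minimal sketch, with an assumed eps:

```python
# Minimal FGSM sketch: one step along the sign of the input gradient.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """One-step L-inf attack: x + eps * sign(grad_x loss)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```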

Empirical Study of the Topology and Geometry of Deep Networks

It is shown that state-of-the-art deep nets learn connected classification regions, and that the decision boundary in the vicinity of datapoints is flat along most directions, and an essential connection is drawn between two seemingly unrelated properties of deep networks: their sensitivity to additive perturbations of the inputs, and the curvature of their decision boundary.
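One simple way to probe such flatness, sketched below as my own illustration rather than the paper's protocol, is to walk along a direction from a datapoint and record how far one can go before the predicted class changes; comparing random directions with adversarial ones then hints at where the decision boundary is curved.

```python
# Rough probe of local boundary geometry: distance along a direction until the
# prediction flips. Step range and resolution are assumptions.
import torch

@torch.no_grad()
def distance_to_flip(model, x, direction, max_dist=2.0, steps=50):
    """Smallest step along `direction` that changes the predicted class (or None)."""
    label = model(x).argmax(dim=1)
    direction = direction / direction.norm()
    for t in torch.linspace(0, max_dist, steps):
        if model(x + t * direction).argmax(dim=1) != label:
            return t.item()
    return None

# Example comparison (hypothetical usage):
# d_random = distance_to_flip(model, x, torch.randn_like(x))
# d_adv    = distance_to_flip(model, x, x_adv - x)
```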

Ensemble Adversarial Training: Attacks and Defenses

This work finds that adversarial training remains vulnerable to black-box attacks, in which perturbations computed on undefended models are transferred, and introduces a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step.
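The single-step attack referred to here is commonly written as a small random step followed by one gradient-sign step, so the gradient is not taken exactly at the (non-smooth) data point. A hedged sketch, with assumed step sizes rather than the paper's exact parameters:

```python
# Sketch of a random-step single-step attack (often called R+FGSM).
import torch
import torch.nn.functional as F

def rand_fgsm(model, x, y, eps=8/255, alpha=4/255):
    x_rand = (x + alpha * torch.randn_like(x).sign()).clamp(0, 1)  # escape the non-smooth vicinity
    x_rand.requires_grad_(True)
    loss = F.cross_entropy(model(x_rand), y)
    loss.backward()
    step = (eps - alpha) * x_rand.grad.sign()                      # spend the remaining L-inf budget
    return (x_rand + step).clamp(0, 1).detach()
```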

Adversarial Patch

A method to create universal, robust, targeted adversarial image patches in the real world, which can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class.
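A compact sketch of how such a patch can be optimized (patch size, placement strategy, and optimizer here are my assumptions, not the paper's exact procedure): paste a trainable patch at random locations and push every image toward the chosen target class.

```python
# Hedged sketch of optimizing a universal, targeted patch. Assumed sizes/optimizer.
import torch
import torch.nn.functional as F

def train_patch(model, loader, target_class, size=8, epochs=5, lr=0.05):
    patch = torch.rand(3, size, size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            h, w = x.shape[-2], x.shape[-1]
            i = torch.randint(0, h - size + 1, (1,)).item()    # random placement
            j = torch.randint(0, w - size + 1, (1,)).item()
            pad = (j, w - size - j, i, h - size - i)            # (left, right, top, bottom)
            patch_full = F.pad(patch.clamp(0, 1), pad)
            mask = F.pad(torch.ones(3, size, size), pad)
            x_patched = x * (1 - mask) + patch_full * mask      # paste the patch
            target = torch.full((x.shape[0],), target_class, dtype=torch.long)
            loss = F.cross_entropy(model(x_patched), target)    # push everything to the target class
            opt.zero_grad(); loss.backward(); opt.step()
    return patch.detach().clamp(0, 1)
```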

Deep neural networks are easily fooled: High confidence predictions for unrecognizable images

This work takes convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and, using evolutionary algorithms or gradient ascent, produces fooling images that the DNNs label with high confidence as belonging to each dataset class; these images are then used to raise questions about the generality of DNN computer vision.
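The gradient-ascent variant is easy to reproduce in outline: start from noise and climb the score of one class until the classifier is confident, even though the image resembles nothing recognizable. The sketch below uses assumed shapes and hyperparameters.

```python
# Sketch of generating a "fooling image" by gradient ascent on a class score.
import torch
import torch.nn.functional as F

def fooling_image(model, target_class, shape=(1, 3, 224, 224), steps=200, lr=0.5):
    x = torch.rand(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        logits = model(x.clamp(0, 1))
        loss = -logits[0, target_class]            # ascend the target-class score
        opt.zero_grad(); loss.backward(); opt.step()
    x = x.detach().clamp(0, 1)
    conf = F.softmax(model(x), dim=1)[0, target_class].item()
    print(f"confidence for class {target_class}: {conf:.2%}")
    return x
```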

Striving for Simplicity: The All Convolutional Net

It is found that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks.
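The substitution is mechanical: a 2x2 max-pooling layer is replaced by a convolution with stride 2, so the downsampling itself becomes learnable. A minimal sketch of two stacks that downsample equally (channel counts are assumptions):

```python
# Max-pool downsampling vs. the all-convolutional replacement with stride 2.
import torch.nn as nn

pooled   = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                         nn.MaxPool2d(kernel_size=2, stride=2))

all_conv = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU())
```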

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
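The core building block is the residual block: a couple of convolutions learn a residual F(x) that is added back onto the identity shortcut, which is what makes very deep stacks easier to optimize. A minimal sketch (layer sizes are assumptions):

```python
# Minimal residual block: output = ReLU(F(x) + x) with an identity shortcut.
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1   = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2   = nn.BatchNorm2d(channels)
        self.relu  = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)        # identity shortcut eases optimization
```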