Art of Singular Vectors and Universal Adversarial Perturbations

@inproceedings{Khrulkov2018ArtOS,
  title={Art of Singular Vectors and Universal Adversarial Perturbations},
  author={Valentin Khrulkov and I. Oseledets},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2018},
  pages={8562-8570}
}
The vulnerability of Deep Neural Networks (DNNs) to adversarial attacks has been attracting a lot of attention in recent studies. It has been shown that for many state-of-the-art DNNs performing image classification there exist universal adversarial perturbations: image-agnostic perturbations whose mere addition to natural images leads, with high probability, to their misclassification. In this work we propose a new algorithm for constructing such universal perturbations. Our approach is based on…
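The truncated abstract refers to the construction named in the title: universal perturbations are obtained as (p, q)-singular vectors of Jacobian matrices of hidden-layer feature maps, computed with a generalized power method. The snippet below is a minimal, hypothetical NumPy sketch of such a (p, q) power iteration for a generic matrix J standing in for a stacked Jacobian; the function names, the random stand-in matrix, and the choice p = inf, q = 10 are illustrative assumptions, not the authors' implementation.

import numpy as np

def psi(x, r):
    """Elementwise psi_r(x) = sign(x) * |x|**(r - 1), the map used in the (p, q) power iteration."""
    return np.sign(x) * np.abs(x) ** (r - 1)

def pq_singular_vector(J, p=np.inf, q=10.0, n_iter=50, seed=0):
    """Approximate the leading (p, q)-singular vector of J, i.e. argmax over ||v||_p = 1 of ||J v||_q."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(J.shape[1])
    v /= np.linalg.norm(v, p)
    p_dual = 1.0 if np.isinf(p) else p / (p - 1.0)   # Hoelder conjugate exponent of p
    for _ in range(n_iter):
        u = J @ v                     # "forward" product J v
        z = J.T @ psi(u, q)           # "backward" product J^T psi_q(J v)
        v = psi(z, p_dual)            # dual map back towards the unit p-norm ball
        v /= np.linalg.norm(v, p)     # renormalize so that ||v||_p = 1
    return v, np.linalg.norm(J @ v, q)

if __name__ == "__main__":
    # Random matrix standing in for a Jacobian stacked over a batch of images (illustrative only).
    J = np.random.default_rng(1).standard_normal((512, 256))
    v, s = pq_singular_vector(J, p=np.inf, q=10.0)
    print("approximate (inf, 10)-singular value:", s)

With p = inf the update reduces to v <- sign(J^T psi_q(J v)), so each iteration needs only one Jacobian-vector and one transposed-Jacobian-vector product, which is what makes a construction of this kind feasible for large networks.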

Citations

A Method for Computing Class-wise Universal Adversarial Perturbations
TLDR
An algorithm is presented for computing class-specific universal adversarial perturbations for deep neural networks; it uses a perturbation that is a linear function of the network's weights and can hence be computed much faster.
Universal Adversarial Perturbations: A Survey
TLDR
This paper attempts to provide a detailed discussion of the various data-driven and data-independent methods for generating universal perturbations, along with measures to defend against such perturbations in various deep learning tasks.
Understanding Adversarial Examples From the Mutual Influence of Images and Perturbations
TLDR
This work uses the DNN logits as a feature-representation vector and utilizes this representation to understand adversarial examples by disentangling clean images from adversarial perturbations and analyzing their influence on each other.
Universal Adversarial Perturbation Generated by Attacking Layer-wise Relevance Propagation
TLDR
This approach is the first to generate universal perturbations by attacking attention heat maps with the interpretation method Layer-wise Relevance Propagation, and it achieves high fooling ratios on image-classification DNNs pre-trained on the ImageNet dataset.
Geometry-Inspired Top-k Adversarial Perturbations
TLDR
Top-k Universal Adversarial Perturbations, image-agnostic tiny perturbations that cause the true class to be absent from the top-k predictions for the majority of natural images, are proposed.
Double Targeted Universal Adversarial Perturbations
TLDR
Double targeted universal adversarial perturbations (DT-UAPs) are introduced to bridge the gap between instance-discriminative, image-dependent perturbations and the generic universal perturbation, providing an attacker with the freedom to perform precise attacks on a DNN model while raising little suspicion.
On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions
  • Yusuke Tsuzuku, Issei Sato
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
TLDR
The primary finding is that convolutional networks are sensitive to the directions of Fourier basis functions, and an algorithm is proposed to create shift-invariant universal adversarial perturbations that are applicable in black-box settings.
Adversarial Examples on Object Recognition: A Comprehensive Survey
TLDR
The hypotheses behind their existence, the methods used to construct or protect against them, and the capacity to transfer adversarial examples between different machine learning models are introduced to provide a comprehensive and self-contained survey of this growing field of research.
Defending Against Universal Perturbations With Shared Adversarial Training
TLDR
This work shows that adversarial training is more effective at preventing universal perturbations, where the same perturbation needs to fool a classifier on many inputs, and investigates the trade-off between robustness against universally perturbed data and performance on unperturbed data.
...

References

Showing 1-10 of 33 references
Universal Adversarial Perturbations
TLDR
The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers and outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.
Adversarial Perturbations Against Deep Neural Networks for Malware Classification
TLDR
This paper shows how to construct highly effective adversarial sample crafting attacks for neural networks used as malware classifiers, and evaluates the extent to which potential defensive mechanisms against adversarial crafting can be leveraged in the setting of malware classification.
DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
TLDR
The DeepFool algorithm is proposed to efficiently compute perturbations that fool deep networks and thus reliably quantify the robustness of these classifiers; it outperforms recent methods at computing adversarial perturbations and making classifiers more robust.
Explaining and Harnessing Adversarial Examples
TLDR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature; this view is supported by new quantitative results and gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
Parseval Networks: Improving Robustness to Adversarial Examples
TLDR
It is shown that Parseval networks match the state-of-the-art in terms of accuracy on CIFAR-10/100 and Street View House Numbers while being more robust than their vanilla counterpart against adversarial examples.
Robustness of classifiers: from adversarial to random noise
TLDR
This paper proposes the first quantitative analysis of the robustness of nonlinear classifiers in this general noise regime, and establishes precise theoretical bounds on their robustness, which depend on the curvature of the classifier's decision boundary.
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
TLDR
The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
Delving into Transferable Adversarial Examples and Black-box Attacks
TLDR
This work is the first to conduct an extensive study of transferability over large models and a large-scale dataset, and it is also the first to study the transferability of targeted adversarial examples with their target labels.
Adversarial examples in the physical world
TLDR
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through a camera, which shows that even in physical-world scenarios, machine learning systems are vulnerable to adversarial examples.
Intriguing properties of neural networks
TLDR
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
...