Corpus ID: 226289916

Efficient and Transferable Adversarial Examples from Bayesian Neural Networks

@article{Gubri2020EfficientAT,
  title={Efficient and Transferable Adversarial Examples from Bayesian Neural Networks},
  author={Martin Gubri and Maxime Cordy and Mike Papadakis and Y. L. Traon},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.05074}
}
Deep neural networks are vulnerable to evasion attacks, i.e., carefully crafted examples designed to fool a model at test time. Attacks that successfully evade an ensemble of models can transfer to other independently trained models, which proves useful in black-box settings. Unfortunately, these methods involve heavy computation costs to train the models forming the ensemble. To overcome this, we propose a new method to generate transferable adversarial examples efficiently. Inspired by…
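The abstract refers to the standard ensemble-based transfer attack that the proposed method aims to make cheaper. Below is a minimal sketch of that baseline only, not the paper's Bayesian approach (which the truncated abstract does not detail): an L-infinity PGD attack on the average cross-entropy loss of several surrogate models. The function name `ensemble_pgd`, the epsilon and step-size values, and the PyTorch framing are illustrative assumptions, not taken from the paper.

```python
# Sketch of a baseline ensemble transfer attack (assumed setup, not the paper's method):
# L_inf PGD that ascends the average cross-entropy loss over several surrogate models,
# so the perturbation tends to fool all of them and, hopefully, an unseen target model.
import torch
import torch.nn.functional as F

def ensemble_pgd(models, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft adversarial examples against an ensemble of surrogate models.

    models: iterable of differentiable classifiers (should be in eval mode)
    x, y:   clean inputs in [0, 1] and their true labels
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Average the loss across the ensemble so no single surrogate dominates.
        loss = sum(F.cross_entropy(m(x_adv), y) for m in models) / len(models)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                       # gradient ascent step
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)             # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                             # keep a valid image
        x_adv = x_adv.detach()
    return x_adv
```

In a black-box transfer setting, the `x_adv` returned by such a routine is then evaluated on an independently trained target model. The abstract's point is that training enough surrogate `models` for this to transfer reliably is computationally heavy, which is the cost the proposed approach, per the title drawing on Bayesian neural networks, aims to avoid.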

