Corpus ID: 225039996

Learning Black-Box Attackers with Transferable Priors and Query Feedback

@article{Yang2020LearningBA,
  title={Learning Black-Box Attackers with Transferable Priors and Query Feedback},
  author={Jiancheng Yang and Yangzhou Jiang and Xiaoyang Huang and Bingbing Ni and Chenglong Zhao},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.11742}
}
This paper addresses the challenging black-box adversarial attack problem, where only the classification confidence of a victim model is available. Inspired by the consistency of visual saliency across different vision models, a surrogate model is expected to improve the attack performance via transferability. By combining transferability-based and query-based black-box attacks, we propose a surprisingly simple baseline approach (named SimBA++) using the surrogate model, which significantly outperforms…
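
The combination the abstract describes can be illustrated with a minimal sketch: interleave white-box gradient steps on the surrogate (the transferable prior) with SimBA-style two-sided single-coordinate queries that are kept only when the victim's confidence in the true class drops. This is an illustrative PyTorch sketch, not the authors' released code; `victim_scores`, `surrogate`, and the step sizes are assumed placeholders, and a full implementation would also project each step onto the perturbation budget.

```python
import torch
import torch.nn.functional as F

def simba_pp_step(x, y, victim_scores, surrogate, eps=2/255, q_step=0.2):
    """One hybrid attack step: a surrogate-gradient (transfer) move, then
    a SimBA-style two-sided query move using only the victim's scores.
    x: (1, C, H, W) image in [0, 1]; y: integer true label.
    Names and hyperparameters are illustrative assumptions."""
    # Transfer move: FGSM-like step on the white-box surrogate.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(surrogate(x_adv), torch.tensor([y])).backward()
    x_new = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

    # Query move: perturb one random coordinate; keep the change only
    # if the victim's confidence in the true class drops (SimBA rule).
    best = victim_scores(x_new)[0, y]
    q = torch.zeros_like(x_new)
    q.view(-1)[torch.randint(0, x_new.numel(), (1,))] = q_step
    for cand in ((x_new + q).clamp(0, 1), (x_new - q).clamp(0, 1)):
        if victim_scores(cand)[0, y] < best:
            return cand
    return x_new
```

Iterating this step until the victim's top-1 prediction changes gives the flavor of the hybrid attack; per the title, the full LeBA method additionally learns the surrogate itself from the query feedback.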

Citations

QueryNet: An Efficient Attack Framework with Surrogates Carrying Multiple Identities
TLDR
QueryNet is developed, an efficient attack framework that exploits both surrogates' parameters and their architectures, enhancing both the GS and the FS, and significantly reduces queries.
CG-ATTACK: Modeling the Conditional Distribution of Adversarial Perturbations to Boost Black-Box Attack.
TLDR
This work proposes a novel score-based black-box adversarial attack method by designing a transfer mechanism based on a c-Glow model pretrained on surrogate models, to take advantage of both adversarial transferability and queries to the target model.
QueryNet: An Attack Framework with Surrogates Carrying Multiple Identities
  • Sizhe Chen, Zhehao Huang, Qinghua Tao, Xiaolin Huang
  • Computer Science
  • 2021
Deep Neural Networks (DNNs) are acknowledged as vulnerable to adversarial attacks, while the existing black-box attacks require extensive queries on the victim DNN to achieve high success rates. …
Advances in adversarial attacks and defenses in computer vision: A survey
Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural …
Threat of Adversarial Attacks on Deep Learning in Computer Vision: Survey II
TLDR
A literature review complementing the first survey, which covered the computer vision community's contributions to adversarial attacks on deep learning until 2018, by focusing on the advances in this area since 2018.
Measuring ℓ∞ Attacks by the ℓ2 Norm
Deep Neural Networks (DNNs) could be easily fooled by Adversarial Examples (AEs) with an imperceptible difference to original samples in human eyes. To keep the difference imperceptible, the …
Going Far Boosts Attack Transferability, but Do Not Do It
TLDR
This paper investigates the impacts of optimization on attack transferability by comprehensive experiments concerning 7 optimization algorithms, 4 surrogates, and 9 black-box models and surprisingly finds that the varied transferability of AEs from optimization algorithms is strongly related to the corresponding Root Mean Square Error (RMSE) from their original samples.
Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks
TLDR
It is found that most adversarial ML researchers at NeurIPS hold two fundamental assumptions that will make it difficult for them to consider socially beneficial uses of attacks: it is desirable to make systems robust, independent of context, and attackers of systems are normatively bad while defenders of systems are normatively good.
Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack
TLDR
This work proposes a novel hard-label model stealing method termed black-box dissector, which consists of a CAM-driven erasing strategy designed to increase the information capacity hidden in hard labels from the victim model, and a random-erasing-based self-knowledge distillation module that utilizes soft labels from the substitute model to mitigate overfitting.
QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval
TLDR
This paper makes the first attempt at Query-based Attack against Image Retrieval (QAIR), which aims to completely subvert the top-k retrieval results by measuring the set similarity on the top-k retrieval results before and after attacks to guide the gradient optimization.

References

Showing 1-10 of 51 references
Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks
TLDR
Experimental results show that the method can gain up to 2x and 4x reductions in the requisite mean and median numbers of queries with much lower failure rates, even if the reference models are trained on a small and inadequate dataset disjoint from the one used to train the victim model.
Improving Black-box Adversarial Attacks with a Transfer-based Prior
TLDR
A prior-guided random gradient-free (P-RGF) method to improve black-box adversarial attacks, which takes advantage of a transfer-based prior and query information simultaneously; the prior is appropriately integrated into the algorithm via an optimal coefficient derived from a theoretical analysis.
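
As a rough illustration of the prior-guided idea (not the paper's exact estimator, whose mixing coefficient is derived analytically rather than fixed), one can bias random gradient-free finite-difference probe directions toward a normalized surrogate gradient; `loss_fn` and `lam` here are assumptions:

```python
import torch

def prior_guided_grad(x, loss_fn, prior, n_queries=10, sigma=1e-3, lam=0.5):
    """Finite-difference gradient estimate whose random probe directions
    are biased toward a transfer-based prior (surrogate gradient).
    `lam` stands in for P-RGF's analytically derived coefficient."""
    prior = prior / prior.norm()
    base = loss_fn(x)                       # one query for the base point
    g = torch.zeros_like(x)
    for _ in range(n_queries):              # one query per probe direction
        u = torch.randn_like(x)
        u = lam * prior + (1 - lam) * u / u.norm()
        u = u / u.norm()
        g += (loss_fn(x + sigma * u) - base) / sigma * u
    return g / n_queries
```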
Black-box Adversarial Attacks with Limited Queries and Information
TLDR
This work defines three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partial-information setting, and the label-only setting, and develops new attacks that fool classifiers under these more restrictive threat models.
Simple Black-box Adversarial Attacks
TLDR
It is argued that the proposed algorithm should serve as a strong baseline for future black-box attacks, in particular because it is extremely fast and its implementation requires less than 20 lines of PyTorch code.
Improving Transferability of Adversarial Examples With Input Diversity
TLDR
This work proposes to improve the transferability of adversarial examples by creating diverse input patterns, applying random transformations to the input images at each iteration, and shows that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines.
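
The diverse-input idea reduces to a random resize-and-pad applied to the attack input at each iteration. A minimal sketch; the output size and probability are common choices for this family of attacks, not values taken from this page:

```python
import torch
import torch.nn.functional as F

def input_diversity(x, out_size=331, p=0.5):
    """With probability p, resize the batch (N, C, H, W) to a random
    smaller resolution and zero-pad back to out_size x out_size;
    otherwise pass the input through unchanged."""
    if torch.rand(1).item() >= p:
        return x
    rnd = torch.randint(x.shape[-1], out_size, (1,)).item()
    x = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad = out_size - rnd
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(x, (left, pad - left, top, pad - top))  # L, R, T, B
```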
Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors
TLDR
A framework that conceptually unifies much of the existing work on black-box attacks is introduced, and it is demonstrated that the current state-of-the-art methods are optimal in a natural sense.
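
A compressed view of the bandit idea (simplified from the paper's formulation; step sizes are assumptions): keep a running gradient prior, probe two nearby points per round, and move the prior along the better exploration direction:

```python
import torch

def bandit_prior_step(x, loss_fn, g, eta=100.0, delta=0.01, sigma=1e-3):
    """Two-query update of a latent gradient prior g; the updated g
    doubles as the gradient estimate for the next attack step."""
    u = torch.randn_like(g)
    # Finite-difference estimate of how the loss changes as the prior
    # is tilted toward +u versus -u.
    d = (loss_fn(x + sigma * (g + delta * u))
         - loss_fn(x + sigma * (g - delta * u))) / (2 * delta * sigma)
    return g + eta * d * u
```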
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
TLDR
An effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
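
The zeroth-order core is a symmetric finite difference on the victim's confidence score, estimated one coordinate at a time. A minimal sketch that omits ZOO's coordinate-descent solver and dimensionality-reduction tricks; `loss_fn` is an assumed score-based attack loss:

```python
import torch

def zoo_coordinate_grad(x, loss_fn, idx, h=1e-4):
    """Estimate one coordinate of the gradient with two score queries."""
    e = torch.zeros_like(x)
    e.view(-1)[idx] = h
    return (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
```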
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
TLDR
A translation-invariant attack method to generate more transferable adversarial examples against the defense models, which fools eight state-of-the-art defenses at an 82% success rate on average based only on the transferability, demonstrating the insecurity of the current defense techniques.
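
The translation-invariant method boils down to smoothing the surrogate gradient with a kernel before the sign step, which approximates attacking an ensemble of translated inputs. A sketch with an assumed Gaussian kernel and assumed settings:

```python
import torch
import torch.nn.functional as F

def ti_smooth(grad, kernel_size=15, sigma=3.0):
    """Depthwise-convolve an image gradient (N, C, H, W) with a
    normalized Gaussian kernel before taking its sign."""
    ax = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
    g1d = torch.exp(-ax.pow(2) / (2 * sigma ** 2))
    k2d = torch.outer(g1d, g1d)
    k = (k2d / k2d.sum()).repeat(grad.shape[1], 1, 1, 1)  # (C, 1, k, k)
    return F.conv2d(grad, k, padding=kernel_size // 2,
                    groups=grad.shape[1])
```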
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
TLDR
This paper proposes a black-box adversarial attack algorithm that can defeat both vanilla DNNs and those generated by various defense techniques developed recently, and outperforms state-of-the-art black-box or white-box attack methods for most test cases.
Delving into Transferable Adversarial Examples and Black-box Attacks
TLDR
This work is the first to conduct an extensive study of transferability over large models and a large-scale dataset, and the first to study the transferability of targeted adversarial examples with their target labels.