Corpus ID: 236428572

Benign Adversarial Attack: Tricking Algorithm for Goodness

@article{Zhao2021BenignAA,
  title={Benign Adversarial Attack: Tricking Algorithm for Goodness},
  author={Xian Zhao and Jiaming Zhang and Zhiyu Lin and Jitao Sang},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.11986}
}
In spite of the successful application in many fields, machine learning algorithms today suffer from notorious problems like vulnerability to adversarial examples. Beyond falling into the cat-and-mouse game between adversarial attack and defense, this paper provides an alternative perspective on adversarial examples and explores whether we can exploit them in benign applications. We first propose a novel taxonomy of visual information along task-relevance and semantic-orientation. The…
Reversible adversarial examples against local visual perturbation
  • Zhaoxia Yin, Li Chen, Shaowei Zhu
  • Computer Science
  • ArXiv
  • 2021
TLDR
This article generates reversible adversarial examples for local visual adversarial perturbation, using reversible data embedding technology to embed the information needed to restore the original image into the adversarial examples, so that the resulting examples are both adversarial and reversible.
Trustworthy Multimedia Analysis
  • Xiaowen Huang, Jiaming Zhang, Yi Zhang, Xian Zhao, J. Sang
  • Computer Science
  • ACM Multimedia
  • 2021
TLDR
This tutorial discusses the trustworthiness issue in multimedia analysis by partitioning the (visual) feature space along two dimensions of task-relevance and semantic-orientation and introducing two types of spurious correlations.

References

SHOWING 1-10 OF 51 REFERENCES
Adversarial Attacks on Neural Network Policies
TLDR
This work shows existing adversarial example crafting techniques can be used to significantly degrade test-time performance of trained policies, even with small adversarial perturbations that do not interfere with human perception.
Boosting Adversarial Attacks with Momentum
TLDR
A broad class of momentum-based iterative algorithms is proposed to boost adversarial attacks by integrating a momentum term into the iterative attack process, which stabilizes update directions and helps escape poor local maxima during the iterations, resulting in more transferable adversarial examples.
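As a rough illustration of the momentum idea summarized above, the following PyTorch-style sketch accumulates the L1-normalized gradient with a decay factor before taking a sign step; the function name, step size, and decay value are assumptions, not the paper's code.

    import torch

    def mi_fgsm_step(model, loss_fn, x_adv, y, grad_accum, mu=1.0, alpha=2/255):
        # One momentum-iterative FGSM update (a sketch of the idea, not the authors' code).
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize the current gradient by its L1 norm, then accumulate with decay factor mu.
        grad = grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        grad_accum = mu * grad_accum + grad
        # Take a sign step in the accumulated direction and keep pixels in [0, 1].
        x_next = (x_adv + alpha * grad_accum.sign()).clamp(0, 1).detach()
        return x_next, grad_accum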
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
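The "first-order adversary" referenced here is typically instantiated as projected gradient descent on the training loss. A minimal sketch, assuming an L-infinity threat model and PyTorch conventions; epsilon, step size, and step count are illustrative choices.

    import torch

    def pgd_attack(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
        # Inner maximization of the robust-optimization objective via projected gradient ascent.
        delta = torch.empty_like(x).uniform_(-eps, eps)   # random start inside the epsilon ball
        for _ in range(steps):
            delta.requires_grad_(True)
            loss = loss_fn(model((x + delta).clamp(0, 1)), y)
            grad = torch.autograd.grad(loss, delta)[0]
            # Ascend the loss, then project back onto the L-infinity ball of radius eps.
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        return (x + delta).clamp(0, 1)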
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
TLDR
The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
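Defensive distillation trains the deployed network on temperature-softened labels produced by a teacher network. A minimal sketch of that training loss, assuming both networks share the same temperature T (the value 20 is an illustrative choice, not the paper's setting):

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=20.0):
        # Cross-entropy of the student against the teacher's temperature-softened probabilities.
        soft_targets = F.softmax(teacher_logits / T, dim=1)
        log_probs = F.log_softmax(student_logits / T, dim=1)
        return -(soft_targets * log_probs).sum(dim=1).mean()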
Improving Transferability of Adversarial Examples With Input Diversity
TLDR
This work proposes to improve the transferability of adversarial examples by creating diverse input patterns, applying random transformations to the input images at each iteration, and shows that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines.
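The random transformation in question is typically a resize-and-pad applied with some probability before each gradient step. A sketch under that reading; the output size, probability, and padding scheme here are assumptions:

    import torch
    import torch.nn.functional as F

    def diverse_input(x, out_size=330, prob=0.5):
        # With probability prob, resize to a random intermediate size and zero-pad to out_size.
        # Assumes x is an NCHW batch whose spatial size is smaller than out_size.
        if torch.rand(1).item() > prob:
            return x
        rnd = torch.randint(x.shape[-1], out_size, (1,)).item()
        resized = F.interpolate(x, size=(rnd, rnd), mode="nearest")
        pad = out_size - rnd
        left = torch.randint(0, pad + 1, (1,)).item()
        top = torch.randint(0, pad + 1, (1,)).item()
        return F.pad(resized, (left, pad - left, top, pad - top), value=0)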
Adversarial Transformation Networks: Learning to Generate Adversarial Examples
TLDR
This work efficiently trains feed-forward neural networks in a self-supervised manner to generate adversarial examples against a target network or set of networks, and calls such a network an Adversarial Transformation Network (ATN).
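A generator trained this way typically balances staying close to the input against fooling the target classifier. A minimal sketch of such a training loss, assuming a targeted setting; the weighting beta and the plain cross-entropy re-ranking are illustrative assumptions rather than the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def atn_loss(generator, classifier, x, target_class, beta=0.1):
        # Keep the generated image close to the input while pushing the classifier toward a target class.
        x_adv = generator(x).clamp(0, 1)
        recon = F.mse_loss(x_adv, x)
        target = torch.full((x.shape[0],), target_class, dtype=torch.long, device=x.device)
        fool = F.cross_entropy(classifier(x_adv), target)
        return beta * recon + fool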
Towards Evaluating the Robustness of Neural Networks
TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Robust CAPTCHAs Towards Malicious OCR
TLDR
This study attempts to exploit the limitations of algorithms to design robust CAPTCHA questions that are easily solvable by humans, and finds that adversarial perturbation significantly hinders algorithms yet remains friendly to humans.
Practical Black-Box Attacks against Machine Learning
TLDR
This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
One Pixel Attack for Fooling Deep Neural Networks
TLDR
This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
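A black-box sketch of the one-pixel search using SciPy's differential evolution; predict_proba (an image-to-class-probabilities callable) and the search bounds are assumptions standing in for the victim model and pixel range, not the paper's implementation.

    from scipy.optimize import differential_evolution

    def one_pixel_attack(predict_proba, image, true_label, maxiter=30, popsize=20):
        # image is assumed to be an H x W x 3 NumPy array with values in [0, 255].
        # Search a single (row, col, r, g, b) perturbation that minimizes the true-class probability.
        h, w, _ = image.shape

        def apply(candidate):
            r, c, *rgb = candidate
            perturbed = image.copy()
            perturbed[int(r), int(c)] = rgb
            return perturbed

        def fitness(candidate):
            # Lower probability of the true class means a more adversarial pixel.
            return predict_proba(apply(candidate))[true_label]

        bounds = [(0, h - 1), (0, w - 1), (0, 255), (0, 255), (0, 255)]
        result = differential_evolution(fitness, bounds, maxiter=maxiter, popsize=popsize)
        return apply(result.x)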