Real-World Adversarial Examples involving Makeup Application
@article{Lin2021RealWorldAE,
  title   = {Real-World Adversarial Examples involving Makeup Application},
  author  = {Changwei Lin and Chia-Yi Hsu and Pin-Yu Chen and Chia-Mu Yu},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2109.03329}
}
Deep neural networks have developed rapidly and achieved outstanding performance in several tasks, such as image classification and natural language processing. However, recent studies have shown that both digital and physical adversarial examples can fool neural networks. Face-recognition systems are deployed in many security-sensitive applications and are therefore exposed to threats from physical adversarial examples. Herein, we propose a physical adversarial attack based on full-face makeup. The…
References
Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition
- IJCAI, 2021
Proposes Adv-Makeup, a unified adversarial face generation method that achieves imperceptible and transferable attacks under the black-box setting, using a fine-grained meta-learning-based adversarial attack strategy to learn more vulnerable or sensitive features across various models.
Towards Transferable Adversarial Attack Against Deep Face Recognition
- IEEE Transactions on Information Forensics and Security, 2021
Proposes DFANet, a dropout-based method applied in convolutional layers that increases the diversity of surrogate models and obtains ensemble-like effects in face recognition, and shows that it significantly enhances the transferability of existing attack methods (a loose sketch of the idea follows below).
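For illustration only, here is a minimal PyTorch sketch of the general idea of injecting dropout into a surrogate model's convolutional feature maps while computing attack gradients, so that each backward pass sees a slightly different model. This is not the authors' code; the hook placement and drop probability are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

def add_feature_dropout(model, p=0.1):
    """Register hooks that apply dropout to every Conv2d output.

    Keeping dropout active while computing attack gradients makes each
    forward/backward pass behave like a slightly different surrogate,
    giving an ensemble-like effect (hypothetical sketch, not DFANet itself).
    """
    handles = []
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            # training=True keeps the dropout stochastic even in eval mode.
            h = module.register_forward_hook(
                lambda m, inp, out: F.dropout(out, p=p, training=True)
            )
            handles.append(h)
    return handles  # call h.remove() on each handle to restore the model
```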
Generating Adversarial Examples By Makeup Attacks on Face Recognition
- IEEE International Conference on Image Processing (ICIP), 2019
Proposes adversarial examples that attack well-trained face recognition models by applying makeup effects to face images; experiments show that the method generates high-quality face-makeup images and achieves higher error rates on various face recognition models than existing attack methods.
On Adversarial Patches: Real-World Attack on ArcFace-100 Face Recognition System
- International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON), 2019
Examines the security of one of the best public face recognition systems, LResNet100E-IR with ArcFace loss, and proposes a simple physical-world attack: an adversarial patch that can be printed, added as a face attribute, and photographed.
Attacks on state-of-the-art face recognition using attentional adversarial attack generative network
- Multimedia Tools and Applications, 2021
Introduces a novel GAN, the Attentional Adversarial Attack Generative Network, to generate adversarial examples that mislead the network into identifying someone as a target person rather than merely causing inconspicuous misclassification, adding a conditional variational autoencoder and attention modules to learn instance-level correspondences between faces.
Towards Deep Learning Models Resistant to Adversarial Attacks
- ICLR, 2018
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
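As a point of reference, the first-order adversary central to this robust-optimization view is typically instantiated as projected gradient descent (PGD) under an L-infinity constraint. A minimal PyTorch sketch follows; the epsilon, step size, and iteration count are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: iterated signed-gradient ascent with projection."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()                        # ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)      # project onto the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```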
Boosting Adversarial Attacks with Momentum
- IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018
Proposes a broad class of momentum-based iterative algorithms that boost adversarial attacks by integrating a momentum term into the iterative attack process, which stabilizes update directions and helps escape poor local maxima, yielding more transferable adversarial examples (see the sketch below).
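A minimal sketch of the momentum iterative scheme (MI-FGSM-style) in PyTorch, assuming an untargeted attack and illustrative hyperparameters; the per-step gradient is normalized before being accumulated, which is a simplification of the L1 normalization used in the paper.

```python
import torch
import torch.nn.functional as F

def mi_fgsm_attack(model, x, y, eps=8/255, steps=10, mu=1.0):
    """Momentum iterative FGSM: accumulate normalized gradients across steps."""
    alpha = eps / steps
    g = torch.zeros_like(x)                      # momentum buffer
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize the gradient before adding it to the momentum term,
        # which stabilizes the update direction across iterations.
        g = mu * g + grad / grad.abs().mean()
        x_adv = x_adv + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```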
Towards Evaluating the Robustness of Neural Networks
- IEEE Symposium on Security and Privacy (SP), 2017
Demonstrates that defensive distillation does not significantly increase the robustness of neural networks and introduces three new attack algorithms that succeed on both distilled and undistilled networks with 100% probability (a simplified sketch of the L2 variant follows below).
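A simplified, untargeted sketch of the Carlini-Wagner L2 formulation in PyTorch: minimize the L2 distortion plus a constant times a logit-margin loss, with the tanh change of variables keeping pixels in [0, 1]. The constant c, step count, and learning rate are illustrative, and the paper's binary search over c is omitted.

```python
import torch

def cw_l2_attack(model, x, y, c=1.0, steps=100, lr=0.01, kappa=0.0):
    """Simplified untargeted C&W L2 attack (illustrative sketch)."""
    # Change of variables: x_adv = 0.5 * (tanh(w) + 1) always lies in [0, 1].
    w = torch.atanh((x * 2 - 1).clamp(-0.999, 0.999)).detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        other_logit = logits.scatter(1, y.unsqueeze(1), float('-inf')).max(dim=1).values
        # Margin loss: push the true-class logit below the best other logit.
        margin = torch.clamp(true_logit - other_logit, min=-kappa)
        loss = ((x_adv - x) ** 2).flatten(1).sum(1) + c * margin
        opt.zero_grad()
        loss.sum().backward()
        opt.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()
```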
Adversarial T-Shirt! Evading Person Detectors in a Physical World
- ECCV, 2020
The first work to model the effect of deformation when designing physical adversarial examples for non-rigid objects such as T-shirts; the proposed method achieves 74% and 57% attack success rates in the digital and physical worlds, respectively, against YOLOv2 and Faster R-CNN.
Explaining and Harnessing Adversarial Examples
- ICLR, 2015
Argues that the primary cause of neural networks' vulnerability to adversarial perturbations is their linear nature, supporting this with new quantitative results and giving the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets (a minimal FGSM sketch follows below).
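The linearity argument motivates the one-step fast gradient sign method (FGSM). A minimal PyTorch sketch, assuming a differentiable classifier returning logits and an illustrative epsilon:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8/255):
    """One-step FGSM: move each pixel by eps along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()  # single signed-gradient step
    return x_adv.clamp(0, 1).detach()
```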