Corpus ID: 237440180

Real-World Adversarial Examples involving Makeup Application

Changwei Lin, Chia-Yi Hsu, Pin-Yu Chen, Chia-Mu Yu
Deep neural networks have developed rapidly and have achieved outstanding performance in several tasks, such as image classification and natural language processing. However, recent studies have indicated that both digital and physical adversarial examples can fool neural networks. Face-recognition systems are deployed in many security-sensitive applications, which exposes them to physical adversarial examples. Herein, we propose a physical adversarial attack with the use of full-face makeup. The…


Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition

A unified adversarial face generation method, Adv-Makeup, which can realize imperceptible and transferable attacks under the black-box setting, and which implements a fine-grained meta-learning-based adversarial attack strategy to learn more vulnerable or sensitive features across various models.

Towards Transferable Adversarial Attack Against Deep Face Recognition

This work proposes DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models and obtain ensemble-like effects in face recognition, and shows that the proposed method can significantly enhance the transferability of existing attack methods.
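The diversity idea behind DFANet can be illustrated with a minimal sketch. The function below is an assumed illustration, not code from the paper: it shows how randomly zeroing units in a surrogate model's convolutional feature map (with inverted-dropout scaling, my choice) produces ensemble-like variation from a single model.

```python
import numpy as np

def feature_dropout(fmap, p=0.1, rng=None):
    """Sketch of feature-level dropout for surrogate diversity (assumed
    mechanics): randomly zero units in a convolutional feature map at
    attack time, so repeated forward passes behave like an ensemble."""
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(fmap.shape) >= p       # keep each unit with prob 1 - p
    return fmap * mask / (1.0 - p)           # inverted-dropout scaling keeps the expectation
```

Each call with a fresh random mask yields a slightly different surrogate, which is the ensemble-like effect the summary above refers to.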

Generating Adversarial Examples By Makeup Attacks on Face Recognition

The experimental results demonstrate that the proposed adversarial examples, which attack well-trained face recognition models by applying makeup effects to face images, can generate high-quality face-makeup images and achieve higher error rates on various face recognition models than existing attack methods.

On Adversarial Patches: Real-World Attack on ArcFace-100 Face Recognition System

This paper examines the security of one of the best public face recognition systems, LResNet100E-IR with ArcFace loss, proposes a simple method to attack it in the physical world, and suggests creating an adversarial patch that can be printed, worn as a face attribute, and photographed.

Attacks on state-of-the-art face recognition using attentional adversarial attack generative network

A novel GAN, the Attentional Adversarial Attack Generative Network, is introduced to generate adversarial examples that mislead the network into identifying someone as the target person rather than merely causing an inconspicuous misclassification; it adds a conditional variational autoencoder and attention modules to learn instance-level correspondences between faces.

Towards Deep Learning Models Resistant to Adversarial Attacks

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
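The robust-optimization view pairs an inner maximization (finding a worst-case perturbation) with an outer minimization over model weights. A minimal sketch of the inner step, projected gradient descent with a random start, is shown below; the `grad_fn` interface (returning the loss gradient with respect to the input) is an assumption for illustration.

```python
import numpy as np

def pgd(x, grad_fn, eps=0.3, alpha=0.01, steps=40, rng=None):
    """Sketch of the PGD inner maximization under an L-infinity budget.
    grad_fn is an assumed interface: it returns the loss gradient w.r.t.
    the (perturbed) input."""
    rng = np.random.default_rng(0) if rng is None else rng
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start in the eps-ball
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))  # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)         # project back onto the ball
    return x_adv
```

Adversarial training then minimizes the loss on these worst-case inputs instead of the clean ones, which is the "first-order adversary" security notion referred to above.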

Boosting Adversarial Attacks with Momentum

A broad class of momentum-based iterative algorithms to boost adversarial attacks by integrating the momentum term into the iterative process for attacks, which can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples.
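The momentum update described above can be sketched as follows, again under an assumed `grad_fn` interface; accumulating an L1-normalized gradient into a velocity buffer and stepping in its sign follows the method's description.

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.3, steps=10, mu=1.0):
    """Sketch of a momentum iterative gradient-sign attack. grad_fn is an
    assumed interface returning the loss gradient w.r.t. the input."""
    alpha = eps / steps                 # per-step size so the total stays within eps
    g = np.zeros_like(x)                # momentum buffer
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # accumulate normalized gradient
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)          # stay in the eps-ball
    return x_adv
```

The momentum term `mu * g` is what stabilizes the update direction across iterations and helps the attack escape poor local maxima.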

Towards Evaluating the Robustness of Neural Networks

It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.

Adversarial T-Shirt! Evading Person Detectors in a Physical World

This is the first work that models the effect of deformation when designing physical adversarial examples for non-rigid objects such as T-shirts, and it shows that the proposed method achieves 74% and 57% attack success rates in the digital and physical worlds, respectively, against YOLOv2 and Faster R-CNN.

Explaining and Harnessing Adversarial Examples

It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
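Under this linearity hypothesis, a single gradient-sign step is enough to construct an adversarial example. A minimal sketch of the fast gradient sign method, assuming the loss gradient has already been computed:

```python
import numpy as np

def fgsm(x, grad, eps=0.1):
    """Fast gradient sign method sketch: perturb x by eps in the sign of
    the loss gradient, the perturbation that maximizes a linearized loss
    under an L-infinity budget of eps."""
    return x + eps * np.sign(grad)
```

Because the step depends only on the gradient's sign, the same perturbation direction often transfers across architectures and training sets, consistent with the generalization observation above.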