Real-world Attack on MTCNN Face Detection System

  • Edgar Kaziakhmedov, Klim Kireev, Grigorii Melnikov, Mikhail Aleksandrovich Pautov, Aleksandr Petiushko
  • 2019 International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON)
  • 2019
Recent studies have demonstrated that deep learning approaches achieve remarkable results on the face detection task. On the other hand, these advances have given rise to a new problem concerning the security of deep convolutional neural network (DCNN) models, unveiling the potential risks of DCNN-based applications. Even minor input changes in the digital domain can result in the network being fooled. It was later shown that some deep learning-based face detectors are prone to adversarial attacks not only in a digital…
Learning Optimization-based Adversarial Perturbations for Attacking Sequential Recognition Models
Extensive experiments show that the proposed adversarial attack on the general and popular CNN+RNN structure can effectively fool several state-of-the-art systems, including four STR models and two IC models, with a higher success rate and less time consumption than three recent attack methods.
What Machines See Is Not What They Get: Fooling Scene Text Recognition Models With Adversarial Text Images
This paper proposes a novel and efficient optimization-based method that can be naturally integrated into different sequential prediction schemes, i.e., connectionist temporal classification (CTC) and attention mechanisms, and applies it to five state-of-the-art STR models in both targeted and untargeted attack modes.
The Drowsy Testing System According to Deep-Learning
  • Jiasheng Pan
  • Computer Science
  • 2020 3rd International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE)
  • 2020
This paper optimizes the YOLO v3 algorithm to detect whether a driver is in a drowsy driving condition, achieving a noticeable improvement in detection precision and speed.
A Systematical Solution for Face De-identification
  • Songlin Yang, Wei Wang, Yuehua Cheng, Jing Dong
  • Computer Science
  • CCBR
  • 2021
This work proposes a systematic solution for face de-identification (De-ID) and, unlike previous adversarial methods, adds an adversarial vector mapping network that perturbs the latent code of the face image.
Adversarial Metric Attack and Defense for Person Re-Identification
This work proposes Adversarial Metric Attack, a methodology parallel to adversarial classification attacks, and presents an early attempt at training a metric-preserving network, thereby defending the metric against adversarial attacks.
Fast Facial Landmark Detection and Applications: A Survey
In this paper we survey and analyze modern neural-network-based facial landmark detection algorithms. We focus on approaches that have led to a significant increase in quality over the past few years…
Dynamic Adversarial Patch for Evading Object Detection Models
This study presents an innovative attack method against object detectors applied in a real-world setup that addresses some of the limitations of existing attacks and uses dynamic adversarial patches which are placed at multiple predetermined locations on a target object.
FA: A Fast Method to Attack Real-time Object Detection Systems
This study proposes a new method named FA to generate adversarial examples of object detection models based on the generative adversarial network (GAN), and combines the classification and location information to make the generated image look as real as possible.
Research on Face Detection Based on FeatherNet
Face detection is more and more widely used in academic research and industrial fields. In both fields, it is expected that the performance of face detection be improved as much as possible on the…
Robust Attacks on Deep Learning Face Recognition in the Physical World
FaceAdv, a physical-world attack that crafts adversarial stickers to deceive FR systems, is proposed; it can significantly improve the success rate of both dodging and impersonation attacks.


Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection
The goal is to generate a patch that is able to successfully hide a person from a person detector, and this work is the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.
Physical Adversarial Examples for Object Detectors
This work improves upon a previous physical attack on image classifiers, and creates perturbed physical objects that are either ignored or mislabeled by object detection models, and implements a Disappearance Attack, which causes a Stop sign to "disappear" according to the detector.
From Facial Parts Responses to Face Detection: A Deep Learning Approach
A novel deep convolutional network (DCN) is proposed that achieves outstanding performance on FDDB, PASCAL Face, and AFW, and it is shown that, despite the use of a DCN, the network can achieve practical runtime speed.
Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks
A deep cascaded multitask framework that exploits the inherent correlation between detection and alignment to boost their performance is proposed; it achieves superior accuracy over state-of-the-art techniques on challenging face detection datasets and benchmarks.
Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples
An adversarial example attack that triggers malfunctioning of NMS in OD models is proposed that effectively stops NMS from filtering redundant bounding boxes and can be practically launched against real-world OD systems via printed posters.
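For context, Daedalus targets the standard IoU-based non-maximum suppression step that detectors use to discard redundant boxes. A minimal sketch of that baseline NMS follows; the box format and threshold are illustrative assumptions, not details from the paper:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union of two boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: keep the highest-scoring box, drop boxes overlapping it
    # above iou_thresh, repeat on the remainder.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep
```

The attack works by making candidate boxes dense and low-overlap so this filtering step fails to collapse them.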
A Fast and Accurate Unconstrained Face Detector
A new image feature called the Normalized Pixel Difference (NPD) is proposed, computed as the difference-to-sum ratio between two pixel values and inspired by the Weber fraction in experimental psychology; it is scale invariant, bounded, and able to reconstruct the original image.
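The difference-to-sum ratio the abstract describes can be sketched directly; the convention of returning 0 when both pixels are 0 is an assumption made here to avoid division by zero:

```python
def npd(x, y):
    # Normalized Pixel Difference: (x - y) / (x + y).
    # Defined as 0 when both pixels are 0 (assumed convention).
    if x == 0 and y == 0:
        return 0.0
    return (x - y) / (x + y)
```

The ratio form makes the feature bounded in [-1, 1] and invariant to scaling both pixels by the same factor, which is what the abstract means by "scale invariant".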
WIDER FACE: A Face Detection Benchmark
There is a gap between current face detection performance and real-world requirements; the WIDER FACE dataset, which is 10 times larger than existing datasets, is introduced, containing rich annotations including occlusions, poses, event categories, and face bounding boxes.
FaceNet: A unified embedding for face recognition and clustering
A system that directly learns a mapping from face images to a compact Euclidean space, where distances directly correspond to a measure of face similarity, achieving state-of-the-art face recognition performance using only 128 bytes per face.
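The embedding-space comparison the abstract describes amounts to a Euclidean distance check between face vectors; in the sketch below, the threshold value is illustrative, not taken from the paper:

```python
import math

def l2_distance(emb_a, emb_b):
    # Euclidean distance between two face embedding vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))

def same_identity(emb_a, emb_b, threshold=1.1):
    # Two faces match if their embeddings are closer than the threshold
    # (threshold=1.1 is an assumed illustrative value).
    return l2_distance(emb_a, emb_b) < threshold
```

Because distances directly encode similarity, the same embeddings support verification, recognition (nearest neighbor), and clustering without retraining.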
RetinaFace: Single-stage Dense Face Localisation in the Wild
A robust single-stage face detector, named RetinaFace, which performs pixel-wise face localisation on various scales of faces by taking advantage of joint extra-supervised and self-supervised multi-task learning.
Adversarial examples in the physical world
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through a camera, which shows that machine learning systems are vulnerable to adversarial examples even in physical-world scenarios.
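Attacks in this family build on the fast gradient sign method: perturb the input by a small step in the direction of the sign of the loss gradient. A toy sketch on a logistic model follows; the model and all values are illustrative, not from the paper:

```python
import math

def fgsm_step(x, w, b, y, eps):
    # Toy FGSM on a logistic model p = sigmoid(w·x + b):
    # move x by eps * sign(d loss / d x) to increase the loss for true label y.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    # Gradient of cross-entropy loss w.r.t. x is (p - y) * w.
    grad = [(p - y) * wi for wi in w]
    def sign(g):
        return (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

The paper's observation is that such perturbations often survive printing and re-photographing, so the attack transfers from the digital domain to the physical one.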