Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors

  Nyee Thoang Lim, Meng Yi Kuan, Muxin Pu, Mei Kuan Lim, Chun Yong Chong
Deepfakes utilise Artificial Intelligence (AI) techniques to create synthetic media in which the likeness of one person is replaced with another. There are growing concerns that deepfakes can be maliciously used to create misleading and harmful digital content. As deepfakes become more common, there is a dire need for deepfake detection technology to help spot deepfake media. Present deepfake detection models are able to achieve outstanding accuracy (>90%). However, most of them are limited…


Adversarial Threats to DeepFake Detection: A Practical Perspective
This work studies the extent to which adversarial perturbations transfer across different models and proposes techniques to improve the transferability of adversarial examples, and creates more accessible attacks using Universal Adversarial Perturbations which pose a very feasible attack scenario since they can be easily shared amongst attackers.
One Pixel Attack for Fooling Deep Neural Networks
This paper proposes a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE), which requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE.
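The one-pixel attack summarised above can be sketched as a black-box search with differential evolution: the optimiser only queries the model's output score, never its gradients. In this minimal sketch, `real_score` is a hypothetical stand-in for a detector's "real" confidence, not the paper's actual model:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in "classifier": scores how strongly an image is judged "real".
# In a real attack this would be the deepfake detector's softmax output.
def real_score(img):
    return float(img.mean())  # higher mean brightness -> more "real"

rng = np.random.default_rng(0)
image = rng.uniform(0.6, 1.0, size=(8, 8))  # starts confidently "real"

# A candidate is (row, col, new_value); DE searches for the single pixel
# change that most reduces the "real" score, using only model queries.
def objective(z):
    r, c, v = int(z[0]), int(z[1]), z[2]
    perturbed = image.copy()
    perturbed[r, c] = v
    return real_score(perturbed)

result = differential_evolution(
    objective,
    bounds=[(0, 7.99), (0, 7.99), (0.0, 1.0)],
    seed=0, maxiter=20,
)
```

After the search, `result.x` holds the pixel coordinates and value that most lowered the score, illustrating why a single-pixel change can suffice when only the model's output is observable.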
Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples
This work demonstrates that it is possible to bypass Deepfake detection methods by adversarially modifying fake videos synthesized using existing Deepfake generation methods, and presents pipelines in both white-box and black-box attack scenarios that can fool DNN based Deepfake detectors into classifying fake videos as real.
Adversarial Attacks on Deep-learning Models in Natural Language Processing
A systematic survey covering preliminary knowledge of NLP and related seminal works in computer vision, which collects related academic works since their first appearance in 2017 and comprehensively analyzes 40 representative works.
Metamorphic Detection of Adversarial Examples in Deep Learning Models with Affine Transformations
The proposed approach can determine with a high degree of accuracy whether or not an input image is adversarial, and can detect image manipulations so small that they are impossible for a human to spot through visual inspection.
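The metamorphic idea above can be sketched as a stability check: apply a small affine transformation to the input and flag it if the model's prediction changes, since clean inputs should be stable under such transforms while adversarial ones often are not. Everything here is illustrative, assuming a toy `predict` model and wrap-around translations (via `np.roll`) as the affine transform:

```python
import numpy as np

# Toy stand-in model: class 0 if the top-left quadrant is brighter than
# the bottom-right, else class 1. A real setup would query the model
# under test instead.
def predict(img):
    return 0 if img[:4, :4].mean() > img[4:, 4:].mean() else 1

def metamorphic_check(img, shifts=((0, 1), (1, 0), (0, -1), (-1, 0))):
    """Flag an input if 1-pixel translations change the prediction."""
    base = predict(img)
    for dr, dc in shifts:
        if predict(np.roll(img, (dr, dc), axis=(0, 1))) != base:
            return True  # prediction unstable -> possibly adversarial
    return False

clean = np.full((8, 8), 0.1)
clean[:4, :4] = 1.0   # unambiguous class-0 image: stable under shifts
adv = np.full((8, 8), 0.5)
adv[3, 3] = 1.0       # borderline image: one pixel decides the class
```

Here `metamorphic_check(clean)` stays `False` while `metamorphic_check(adv)` returns `True`, mirroring the intuition that adversarially fragile inputs sit close to the decision boundary.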
Robustness Evaluation of Stacked Generative Adversarial Networks using Metamorphic Testing
The proposed metamorphic relations can be applied to other text-to-image synthesis models, not only to verify their robustness but also to help researchers understand and interpret the results produced by the machine learning models.
A Study on Combating Emerging Threat of Deepfake Weaponization
  • R. Katarya, Anushka Lal
  • Computer Science
    2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)
  • 2020
SSTNet is presented as the best model to date, using spatial, temporal, and steganalysis features for the detection of deepfakes, and the threat posed by document and signature forgery is highlighted.
Deep Fake Image Detection Based on Pairwise Learning
This paper proposes a deep learning-based approach for detecting fake images using a contrastive loss, and demonstrates that the proposed method significantly outperforms other state-of-the-art fake image detectors.
Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving
This paper showcases practical susceptibilities of multi-sensor detection by inserting an adversarial object on a host vehicle, focuses on physically realizable and input-agnostic attacks that are feasible to execute in practice, and shows that a single universal adversary can hide different host vehicles from state-of-the-art multi-modal detectors.