The Creation and Detection of Deepfakes

@article{Mirsky2021TheCA,
  title={The Creation and Detection of Deepfakes},
  author={Yisroel Mirsky and Wenke Lee},
  journal={ACM Computing Surveys (CSUR)},
  year={2021},
  volume={54},
  pages={1--41}
}
Generative deep learning algorithms have progressed to a point where it is difficult to tell the difference between what is real and what is fake. In 2018, it was discovered how easy it is to use this technology for unethical and malicious applications, such as the spread of misinformation, impersonation of political leaders, and the defamation of innocent individuals. Since then, these “deepfakes” have advanced significantly. In this article, we explore the creation and detection of deepfakes… 

Deepfakes Generation and Detection: State-of-the-art, open challenges, countermeasures, and way forward

TLDR
This paper provides a comprehensive review and detailed analysis of existing tools and machine learning-based approaches for deepfake generation, and of the methodologies used to detect such manipulations for both audio and visual deepfakes.

Leveraging edges and optical flow on faces for deepfake detection

TLDR
This paper builds on the XceptionNet-based deepfake detection technique that utilizes convolutional latent representations with recurrent structures, and explores how to leverage a combination of visual frames, edge maps, and dense optical flow maps as inputs to this architecture.
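
As a rough illustration of how such multi-stream inputs can be assembled (not the authors' exact pipeline), the sketch below pairs each RGB frame with a Canny edge map and a dense Farneback optical-flow field using OpenCV; the video path is a placeholder, and the downstream XceptionNet-plus-recurrent network is only referenced in a comment.

    # Sketch: build per-frame inputs that combine RGB, edge maps, and dense optical flow.
    # Requires OpenCV and NumPy; "suspect_video.mp4" is a placeholder path.
    import cv2
    import numpy as np

    def multi_stream_frames(video_path):
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 100, 200)                              # 1-channel edge map
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)  # 2-channel (dx, dy)
            # Stack RGB (3) + edges (1) + flow (2) into a 6-channel per-frame input.
            yield np.dstack([frame, edges[..., None], flow]).astype(np.float32)
            prev_gray = gray
        cap.release()

    # Each yielded array would be fed to an XceptionNet-style CNN, with the per-frame
    # embeddings then passed through a recurrent layer (e.g., an LSTM) for a video-level verdict.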

Countering Malicious DeepFakes: Survey, Battleground, and Horizon

TLDR
A comprehensive overview and detailed analysis of research on DeepFake generation, DeepFake detection, and evasion of DeepFake detection is provided, with more than 318 research papers carefully surveyed.

DeepTag: Robust Image Tagging for DeepFake Provenance.

TLDR
A deep learning-based approach with a simple yet effective encoder and decoder design that embeds a message in a facial image and recovers the embedded message with high confidence after various drastic GAN-based DeepFake transformations.
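
As a heavily simplified, hypothetical sketch of the embed-then-recover idea (not DeepTag's actual architecture, losses, or robustness training), the snippet below jointly trains a tiny encoder that hides a bit string in an image and a decoder that recovers it; all sizes, loss weights, and data are placeholders, and no GAN transformations are applied.

    # Sketch: a tiny encoder/decoder trained to hide and recover a bit string in a
    # face crop. Everything here is a placeholder; robustness to DeepFake transforms
    # is not modeled.
    import torch
    import torch.nn as nn

    MSG_BITS = 16

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(3 + MSG_BITS, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
        def forward(self, img, msg):
            # Broadcast the message bits into per-pixel channels, then add a small residual.
            msg_map = msg.view(-1, MSG_BITS, 1, 1).expand(-1, -1, *img.shape[2:])
            return (img + 0.05 * self.net(torch.cat([img, msg_map], dim=1))).clamp(0, 1)

    decoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, MSG_BITS))

    enc = Encoder()
    opt = torch.optim.Adam(list(enc.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(100):                                   # toy training loop on random data
        img = torch.rand(8, 3, 64, 64)
        msg = torch.randint(0, 2, (8, MSG_BITS)).float()
        tagged = enc(img, msg)
        loss = nn.functional.binary_cross_entropy_with_logits(decoder(tagged), msg) \
               + 10 * nn.functional.mse_loss(tagged, img)  # recover bits, keep image close
        opt.zero_grad()
        loss.backward()
        opt.step()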

Detecting Cross-Modal Inconsistency to Defend against Neural Fake News

TLDR
An approach based on detecting visual-semantic inconsistencies is shown to be relatively effective, serving as a first line of defense and a useful reference for future work in defending against machine-generated disinformation.

MagDR: Mask-guided Detection and Reconstruction for Defending Deepfakes

TLDR
MagDR, a mask-guided detection and reconstruction pipeline for defending deepfakes from adversarial attacks, shows promising performance in defending against both black-box and white-box attacks.

Adversarially robust deepfake media detection using fused convolutional neural network predictions

TLDR
It is shown that prediction fusion is more robust against adversarial attacks: if one model is compromised by an adversarial attack, the fusion prevents it from affecting the overall classification.
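
A minimal sketch of the general fusion idea (averaging per-model class probabilities so that no single compromised model dominates the decision); the tiny CNNs and random input below are placeholders for trained detectors, not the models used in the paper.

    # Sketch: fuse softmax predictions from several detectors by averaging.
    # The tiny CNNs and the random "face crop" are placeholders for trained models.
    import torch
    import torch.nn as nn

    def make_detector():
        return nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 2),                # logits: [real, fake]
        )

    detectors = [make_detector() for _ in range(3)]
    x = torch.randn(1, 3, 224, 224)         # placeholder face crop

    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=1) for m in detectors])
        fused = probs.mean(dim=0)           # averaging limits the influence of any one model

    print("per-model:", probs.squeeze(1))
    print("fused    :", fused)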

DeepSonar: Towards Effective and Robust Detection of AI-Synthesized Fake Voices

TLDR
This work proposes DeepSonar, a novel approach based on monitoring the neuron behaviors of a speaker recognition system, i.e., a deep neural network (DNN), to discern AI-synthesized fake voices, and offers a new insight into adopting neuron behaviors as an inside-out approach to effective and robust forensics of AI-aided multimedia fakes.
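
The core idea of treating internal layer activations of a speaker recognition network as features for a shallow real/fake classifier can be sketched with PyTorch forward hooks; the small MLP, random inputs, per-layer mean-activation features, and logistic regression head below are illustrative placeholders, not DeepSonar's actual setup.

    # Sketch: capture layer-wise neuron activations with forward hooks and feed
    # their statistics to a shallow classifier. All models and data are placeholders.
    import torch
    import torch.nn as nn
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    speaker_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                                nn.Linear(64, 32), nn.ReLU(),
                                nn.Linear(32, 10))
    activations = []
    for layer in speaker_net:
        if isinstance(layer, nn.ReLU):
            layer.register_forward_hook(lambda m, i, o: activations.append(o.detach()))

    def neuron_features(batch):
        activations.clear()
        with torch.no_grad():
            speaker_net(batch)
        # Per-layer mean activation as a compact "neuron behavior" descriptor.
        return torch.cat([a.mean(dim=0) for a in activations]).numpy()

    # Random tensors stand in for features of real vs. synthesized voice clips.
    X = np.stack([neuron_features(torch.randn(4, 128)) for _ in range(40)])
    y = np.array([0, 1] * 20)               # 0 = real, 1 = fake (dummy labels)
    clf = LogisticRegression(max_iter=1000).fit(X, y)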

Deepfake Video Detection Using Convolutional Vision Transformer

TLDR
This work proposes a Convolutional Vision Transformer for the detection of deepfakes, adding a CNN module to the ViT architecture, and achieves a competitive result on the DFDC dataset.
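
A minimal sketch of pairing a convolutional stem with a transformer encoder in the spirit described above; the layer sizes, token pooling, and two-class head are illustrative choices and do not reproduce the paper's architecture.

    # Sketch: a conv stem turns a face crop into patch tokens, which a transformer
    # encoder classifies as real or fake. Dimensions are illustrative only.
    import torch
    import torch.nn as nn

    class ConvViTSketch(nn.Module):
        def __init__(self, dim=128, heads=4, layers=2):
            super().__init__()
            self.stem = nn.Sequential(                 # CNN module producing a feature map
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
            )
            enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
            self.head = nn.Linear(dim, 2)              # real vs. fake logits

        def forward(self, x):
            feats = self.stem(x)                       # (B, dim, H', W')
            tokens = feats.flatten(2).transpose(1, 2)  # (B, H'*W', dim) patch tokens
            encoded = self.encoder(tokens)
            return self.head(encoded.mean(dim=1))      # mean-pool tokens, then classify

    logits = ConvViTSketch()(torch.randn(2, 3, 64, 64))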

Where Do Deep Fakes Look? Synthetic Face Detection via Gaze Tracking

TLDR
This paper proposes several prominent eye and gaze features that deepfakes exhibit differently, compiles those features into signatures, and analyzes and compares the signatures of real and fake videos, formulating geometric, visual, metric, temporal, and spectral variations.
...

References

Showing 1-10 of 199 references

Towards Generalizable Deepfake Detection with Locality-aware AutoEncoder

TLDR
A Locality-Aware AutoEncoder (LAE) is proposed to bridge the generalization gap in deepfake detection; a pixel-wise mask regularizes LAE's local interpretation, enforcing the model to learn intrinsic representations from the forgery region instead of capturing training-set artifacts and superficial correlations to perform detection.
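
One simplified reading of this interpretation-regularization idea is to penalize the distance between a detector's activation map and a pixel-wise forgery mask alongside the classification loss; the backbone, mask, loss weights, and random data below are placeholders, not LAE's actual design.

    # Sketch: regularize a detector's activation map toward a pixel-wise forgery mask
    # so the model attends to manipulated regions. Simplified reading of the idea.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                             nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
    head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

    img = torch.rand(4, 3, 64, 64)                        # placeholder face crops
    mask = torch.randint(0, 2, (4, 1, 64, 64)).float()    # placeholder forgery masks
    labels = torch.randint(0, 2, (4,))

    feats = backbone(img)
    attn = feats.mean(dim=1, keepdim=True)                # crude "local interpretation" map
    attn = torch.sigmoid(F.interpolate(attn, size=img.shape[2:], mode="bilinear",
                                       align_corners=False))
    loss = F.cross_entropy(head(feats), labels) + 0.5 * F.l1_loss(attn, mask)
    loss.backward()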

DeepfakeStack: A Deep Ensemble-based Learning Technique for Deepfake Detection

  • M. Rana, A. Sung
  • Computer Science
    2020 7th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2020 6th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom)
  • 2020
TLDR
The proposed DeepfakeStack technique combines a series of state-of-the-art DL-based classification models to create an improved composite classifier that outperforms the individual classifiers, achieving an accuracy of 99.65% and an AUROC of 1.0 in detecting deepfakes.
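
Stacking in this sense, base classifiers whose outputs feed a meta-learner, can be sketched with scikit-learn; the random features and shallow base models below are placeholders for the deep detectors used in the paper.

    # Sketch: a stacked ensemble where a meta-classifier learns from base-model
    # predictions. Random data and shallow base models stand in for DL detectors.
    import numpy as np
    from sklearn.ensemble import StackingClassifier, RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))          # placeholder frame-level features
    y = rng.integers(0, 2, size=200)        # 0 = real, 1 = fake (dummy labels)

    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=50)),
                    ("svc", SVC(probability=True))],
        final_estimator=LogisticRegression(),   # meta-learner over base predictions
    )
    stack.fit(X, y)
    print("training accuracy:", stack.score(X, y))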

Towards Generalizable Forgery Detection with Locality-aware AutoEncoder

TLDR
Experimental results indicate that LAE indeed focuses on the forgery regions to make decisions, and show that LAE achieves superior generalization performance compared to the state of the art on forgeries generated by alternative manipulation methods.

FakeSpotter: A Simple Baseline for Spotting AI-Synthesized Fake Faces

TLDR
The proposed FakeSpotter, based on neuron coverage behavior in tandem with a simple linear classifier, can greatly outperform deeply trained convolutional neural networks (CNNs) at spotting AI-synthesized fake faces.
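
One way to read "neuron coverage behavior" is as the fraction of units in each layer whose activation exceeds a threshold; the sketch below computes such a per-layer coverage vector from a placeholder CNN and hands it to a linear classifier. This is an interpretation for illustration, not the paper's exact definition, and the network, threshold, and data are all stand-ins.

    # Sketch: layer-wise "neuron coverage" (fraction of activations above a threshold)
    # as features for a linear classifier. The CNN, threshold, and data are placeholders.
    import torch
    import torch.nn as nn
    import numpy as np
    from sklearn.svm import LinearSVC

    face_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 8))

    def coverage_vector(x, threshold=0.1):
        cov, out = [], x
        for layer in face_net:
            out = layer(out)
            if isinstance(layer, nn.ReLU):
                cov.append((out > threshold).float().mean().item())
        return cov

    with torch.no_grad():
        X = np.array([coverage_vector(torch.randn(1, 3, 64, 64)) for _ in range(40)])
    y = np.array([0, 1] * 20)               # dummy real/fake labels
    clf = LinearSVC().fit(X, y)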

Evading Deepfake-Image Detectors with White- and Black-Box Attacks

  • Nicholas Carlini, H. Farid
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2020
TLDR
This work develops five attack case studies on a state-of-the-art classifier that achieves an area under the ROC curve (AUC) of 0.95 on almost all existing image generators when trained on only one generator.
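
To illustrate the kind of white-box evasion being studied (not Carlini and Farid's specific attacks), the sketch below applies a single FGSM-style perturbation that nudges a placeholder detector's prediction for a synthetic image toward the "real" class; the detector, class ordering, and epsilon are assumptions.

    # Sketch: a one-step FGSM-style perturbation against a placeholder detector,
    # pushing a synthetic image toward the "real" class. Illustrative only.
    import torch
    import torch.nn as nn

    detector = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))

    x = torch.rand(1, 3, 128, 128, requires_grad=True)    # placeholder fake image
    target_real = torch.tensor([0])                        # class 0 assumed to mean "real"

    loss = nn.functional.cross_entropy(detector(x), target_real)
    loss.backward()

    eps = 2.0 / 255
    x_adv = (x - eps * x.grad.sign()).clamp(0, 1).detach() # step toward the "real" class
    print(torch.softmax(detector(x_adv), dim=1))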

Unmasking DeepFakes with simple Features

TLDR
This work presents a simple way to detect fake face images, so-called DeepFakes, based on a classical frequency-domain analysis followed by a basic classifier; it shows very good results using only a few annotated training samples and even achieves good accuracy in fully unsupervised scenarios.
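
The general recipe of a radially averaged power spectrum of the 2D FFT fed to a simple classifier can be sketched as below; the random arrays stand in for real and GAN-generated face crops, and the exact feature definition is a common simplification rather than necessarily the paper's.

    # Sketch: radially averaged FFT power spectrum as a feature vector for a basic
    # classifier. Random arrays stand in for real and GAN-generated face crops.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def spectrum_profile(img, n_bins=32):
        f = np.fft.fftshift(np.fft.fft2(img))
        power = np.log(np.abs(f) ** 2 + 1e-8)
        h, w = img.shape
        yy, xx = np.indices((h, w))
        r = np.hypot(yy - h / 2, xx - w / 2)
        bins = np.linspace(0, r.max(), n_bins + 1)
        # Average power within each radial frequency band.
        return np.array([power[(r >= bins[i]) & (r < bins[i + 1])].mean()
                         for i in range(n_bins)])

    rng = np.random.default_rng(0)
    X = np.stack([spectrum_profile(rng.normal(size=(64, 64))) for _ in range(40)])
    y = np.array([0, 1] * 20)               # dummy real/fake labels
    clf = LogisticRegression(max_iter=1000).fit(X, y)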

Detecting Deepfake Videos using Attribution-Based Confidence Metric

TLDR
The application of the state-of-the-art attribution-based confidence (ABC) metric to detecting deepfake videos is proposed, and the metric is used to characterize whether a video is original or fake.

Detection of GAN-Generated Fake Images over Social Networks

The diffusion of fake images and videos on social networks is a fast-growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects to generate fake images.

Exploiting Human Social Cognition for the Detection of Fake and Fraudulent Faces via Memory Networks

TLDR
A Hierarchical Memory Network (HMN) architecture is proposed that successfully detects fake faces by utilizing knowledge stored in neural memories as well as visual cues to reason about the perceived face and anticipate its future semantic embeddings.

Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples

TLDR
This work demonstrates that it is possible to bypass Deepfake detection methods by adversarially modifying fake videos synthesized using existing Deepfake generation methods, and presents pipelines for both white-box and black-box attack scenarios that can fool DNN-based Deepfake detectors into classifying fake videos as real.
...