Corpus ID: 209515957

FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping

@article{Li2019FaceShifterTH,
  title={FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping},
  author={Lingzhi Li and Jianmin Bao and Hao Yang and Dong Chen and Fang Wen},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.13457}
}
In this work, we propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping. Unlike many existing face swapping works that leverage only limited information from the target image when synthesizing the swapped face, our framework, in its first stage, generates the swapped face in high fidelity by exploiting and integrating the target attributes thoroughly and adaptively. We propose a novel attributes encoder for extracting multi-level target face…
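The "thorough and adaptive" integration described above blends identity and target-attribute signals per spatial location. The NumPy sketch below illustrates one plausible form of such masked adaptive blending; the function name, argument layout, and the instance-normalization choice are illustrative assumptions, not the paper's code:

```python
import numpy as np

def aad_blend(h, gamma_id, beta_id, gamma_attr, beta_attr, mask, eps=1e-5):
    """Blend identity- and attribute-driven modulations of a feature map.

    h: feature map of shape (C, H, W); mask in [0, 1] selects, per location,
    how much the identity branch contributes versus the attribute branch.
    """
    # Instance-normalize the incoming feature map per channel.
    mean = h.mean(axis=(1, 2), keepdims=True)
    std = h.std(axis=(1, 2), keepdims=True)
    h_norm = (h - mean) / (std + eps)
    # Identity branch: modulation derived from the source identity embedding.
    ident = gamma_id * h_norm + beta_id
    # Attribute branch: modulation derived from multi-level target attribute maps.
    attr = gamma_attr * h_norm + beta_attr
    # A learned soft mask decides, per pixel, which branch dominates.
    return mask * ident + (1.0 - mask) * attr
```

With `mask` close to 1 in face-interior regions and close to 0 elsewhere, identity is injected where it matters while target attributes (pose, lighting, background) survive everywhere else.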
DFGC 2021: A DeepFake Game Competition
TLDR: The organization, results, and top solutions of this competition are presented, and the insights obtained during this event are shared and released to further benefit the research community.
HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping
TLDR: This work proposes a 3D shape-aware identity to control the face shape with geometric supervision from a 3DMM and a 3D face reconstruction method, and introduces a Semantic Facial Fusion module that optimizes the combination of encoder and decoder features and performs adaptive blending, making the results more photo-realistic.
One-stage Context and Identity Hallucination Network
TLDR: A novel one-stage context and identity hallucination network that learns a series of hallucination maps to softly divide context areas from identity areas, and a novel two-cascading AdaIN design that transfers the identity while retaining the context.
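The two-cascading AdaIN mentioned above builds on standard adaptive instance normalization: content features are normalized per channel, then re-scaled and re-shifted with statistics derived from the identity code. A minimal NumPy sketch of the basic AdaIN operation (function and argument names are illustrative, not from the paper):

```python
import numpy as np

def adain(content, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization on a (C, H, W) feature map."""
    # Normalize each channel of the content features to zero mean, unit std.
    mean = content.mean(axis=(1, 2), keepdims=True)
    std = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mean) / (std + eps)
    # Re-scale and re-shift with per-channel statistics from the style/identity code.
    return normalized * style_std + style_mean
```

After this operation the feature map carries the identity code's channel statistics while keeping the content's spatial layout, which is why AdaIN is a common identity-injection primitive in face swapping.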
PRNU-based Deepfake Detection
TLDR: This work performs the first large-scale test of PRNU-based deepfake detection methods on a variety of standard datasets and shows the impact of often-neglected parameters of the face extraction stage on detection accuracy.
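PRNU-based detection correlates a noise residual extracted from an image against a camera's sensor fingerprint; a swapped face region tends to break that correlation. The sketch below uses a crude box-filter denoiser as a stand-in for the wavelet denoisers used in practice; all names and the denoiser choice are illustrative assumptions:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k-by-k box filter (stand-in for a proper denoiser)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def prnu_correlation(img, fingerprint):
    """Normalized correlation between the image's noise residual and a fingerprint."""
    # The residual is what the denoiser removes: high-frequency sensor noise.
    residual = img - box_blur(img)
    a = residual - residual.mean()
    b = fingerprint - fingerprint.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

An authentic region should yield a markedly higher correlation with the camera fingerprint than a synthesized one, which is the decision statistic such detectors threshold.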
Disentangling in Latent Space by Harnessing a Pretrained Generator
TLDR: This paper presents a method that learns how to represent data in a disentangled way, with minimal supervision, manifested solely using available pre-trained networks, by employing a leading pre-trained unconditional image generator, such as StyleGAN.
SimSwap: An Efficient Framework For High Fidelity Face Swapping
TLDR: An efficient framework, called Simple Swap (SimSwap), aiming for generalized and high-fidelity face swapping, which is capable of transferring the identity of an arbitrary source face into an arbitrary target face while preserving the attributes of the target face.
FICGAN: Facial Identity Controllable GAN for De-identification
  • Yonghyun Jeong, Jooyoung Choi, +5 authors Sungroh Yoon
  • Computer Science
  • ArXiv
  • 2021
TLDR: This work presents Facial Identity Controllable GAN (FICGAN), an autoencoder-based conditional generative model that learns to disentangle the identity attributes from the non-identity attributes of a face image, achieving enhanced privacy protection in de-identified face images.
SPGAN: Face Forgery Using Spoofing Generative Adversarial Networks
Current face spoof detection schemes mainly rely on physiological cues such as eye blinking, mouth movements, and micro-expression changes, or on textural attributes of the face images [9]. But none of…
ShapeEditer: a StyleGAN Encoder for Face Swapping
TLDR: A novel encoder for high-resolution, realistic, and high-fidelity face swapping that uses an advanced pretrained high-quality random face image generator, i.e. StyleGAN, as its backbone, together with a set of self-supervised loss functions with which the training data do not need to be labeled manually.
Face identity disentanglement via latent space mapping
TLDR: This paper presents a method that learns how to represent data in a disentangled way, with minimal supervision, manifested solely using available pre-trained networks, by employing a leading pre-trained unconditional image generator, such as StyleGAN.

References

Showing 1-10 of 41 references.
FSGAN: Subject Agnostic Face Swapping and Reenactment
TLDR: A novel recurrent neural network (RNN)-based approach for face reenactment that adjusts for both pose and expression variations and can be applied to a single image or a video sequence, together with a novel Poisson blending loss which combines Poisson optimization with perceptual loss.
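A Poisson blending loss penalizes differences in spatial gradients rather than raw intensities, so a pasted region can shift in overall brightness without penalty as long as its edges and texture match. A minimal sketch of such a gradient-domain term (the formulation below is an assumption for illustration, not FSGAN's exact loss):

```python
import numpy as np

def gradient_loss(pred, target):
    """Mean squared difference between the spatial gradients of two images."""
    # np.gradient returns (d/dy, d/dx) for a 2D array.
    dy_p, dx_p = np.gradient(pred)
    dy_t, dx_t = np.gradient(target)
    return float(np.mean((dy_p - dy_t) ** 2 + (dx_p - dx_t) ** 2))
```

Because the loss only sees gradients, adding a constant intensity offset to one image leaves it at zero; this shift-invariance is the core property Poisson-style blending exploits to hide seams.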
FaceForensics++: Learning to Detect Manipulated Facial Images
TLDR: This paper proposes an automated benchmark for facial manipulation detection and shows that the use of additional domain-specific knowledge improves forgery detection to unprecedented accuracy, even in the presence of strong compression, clearly outperforming human observers.
On Face Segmentation, Face Swapping, and Face Perception
TLDR: It is shown that a standard fully convolutional network (FCN) can achieve remarkably fast and accurate segmentations, provided that it is trained on a rich enough example set.
RSGAN: face swapping and editing using face and hair representation in latent spaces
TLDR: This paper introduces a generative neural network for face swapping and editing that synthesizes a natural face image from an arbitrary pair of face and hair appearances.
Delving into egocentric actions
TLDR: A novel set of egocentric features is presented, and it is shown how they can be combined with motion and object features to uncover a significant performance boost over all previous state-of-the-art methods.
Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions
TLDR: This work develops methods to locate and distinguish between hands in egocentric video using strong appearance models with Convolutional Neural Networks, and introduces a simple candidate region generation approach that outperforms existing techniques at a fraction of the computational cost.
ShapeNet: An Information-Rich 3D Model Repository
TLDR: ShapeNet is a collection of datasets containing 3D models from a multitude of semantic categories, organized under the WordNet taxonomy, that provides many semantic annotations for each 3D model, such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, and keywords, as well as other planned annotations.
Learning to Predict Gaze in Egocentric Video
TLDR: A model for gaze prediction in egocentric video is presented that leverages the implicit cues in the camera wearer's behaviors and models the dynamic behavior of the gaze, in particular fixations, as latent variables to improve gaze prediction.
Learning to recognize objects in egocentric activities
TLDR: The key to this approach is a robust, unsupervised bottom-up segmentation method that exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories, and uses Multiple Instance Learning to match object instances across sequences.
Exchanging Faces in Images
TLDR: This work presents a system that exchanges faces across large differences in viewpoint and illumination, based on an algorithm that estimates 3D shape and texture along with all relevant scene parameters, such as pose and lighting, from single images.