ReenactGAN: Learning to Reenact Faces via Boundary Transfer

Wayne Wu, Yunxuan Zhang, Cheng Li, Chen Qian, Chen Change Loy
We present a novel learning-based framework for face reenactment. Instead of performing a direct transfer in the pixel space, which could result in structural artifacts, we first map the source face onto a boundary latent space. A transformer is subsequently used to adapt the source face’s boundary to the target’s boundary. Finally, a target-specific decoder is used to generate the reenacted target face.
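The three-stage pipeline just described (encode to a boundary latent space, transform the boundary, decode with a target-specific decoder) can be sketched as a function composition. Everything below is a toy stand-in with hypothetical names; the real components are deep networks, not these arithmetic placeholders:

```python
# Hedged sketch of ReenactGAN's boundary-transfer pipeline. The toy
# "boundary" (a list of landmark-like coordinates) and all function
# names are illustrative assumptions, not the paper's implementation.

def encode_to_boundary(source_face):
    """Stage 1: map a face image to a boundary latent (toy landmark list)."""
    # Stand-in: every pixel row contributes one boundary point.
    return [(i, sum(row) / len(row)) for i, row in enumerate(source_face)]

def transform_boundary(boundary, target_offset):
    """Stage 2: adapt the source boundary toward the target's boundary."""
    return [(x, y + target_offset) for x, y in boundary]

def decode_to_face(boundary, width):
    """Stage 3: target-specific decoding back to pixel space."""
    return [[y for _ in range(width)] for _, y in boundary]

def reenact(source_face, target_offset):
    # Working in boundary space avoids a direct pixel-space transfer
    # and the structural artifacts it can introduce.
    b = encode_to_boundary(source_face)
    b = transform_boundary(b, target_offset)
    return decode_to_face(b, width=len(source_face[0]))

face = [[0.0, 1.0], [1.0, 1.0]]
out = reenact(face, target_offset=0.5)  # -> [[1.0, 1.0], [1.5, 1.5]]
```

The point of the sketch is the factorization: only the decoder is target-specific, so the encoder and boundary space are shared across identities.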
One-shot Face Reenactment
This work proposes a novel one-shot face reenactment learning framework that achieves superior transfer fidelity and identity-preserving capability compared with alternatives, and achieves results competitive with methods that use a set of target images.
FaceSwapNet: Landmark Guided Many-to-Many Face Reenactment
A novel many-to-many face reenactment framework, named FaceSwapNet, which transfers facial expressions and movements from one source face to arbitrary targets; a novel triplet perceptual loss is proposed to force the generator to learn geometry and appearance information simultaneously.
Mesh Guided One-shot Face Reenactment Using Graph Convolutional Networks
A method for one-shot face reenactment, which uses the reconstructed 3D meshes as guidance to learn the optical flow needed for the reenacted face synthesis and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
One-Shot Face Reenactment on Megapixels
This work presents a one-shot, high-resolution face reenactment method called MegaFR, designed to control source images with 3DMM parameters; the proposed method can be considered both a controllable StyleGAN and a face reenactment method.
ActGAN: Flexible and Efficient One-shot Face Reenactment
This paper introduces ActGAN, a novel end-to-end generative adversarial network (GAN) for one-shot face reenactment, and introduces a solution that preserves a person’s identity between the synthesized and target faces by adopting a state-of-the-art approach from the deep face recognition domain.
Realistic Face Reenactment via Self-Supervised Disentangling of Identity and Pose
A novel self-supervised hybrid model (DAE-GAN) that learns how to reenact faces naturally given large amounts of unlabeled videos, and leverages the disentangled features to generate photo-realistic and pose-alike face images.
One-shot Face Reenactment Using Appearance Adaptive Normalization
A novel generative adversarial network for one-shot face reenactment that can animate a single face image to a different pose-and-expression (provided by a driving image) while keeping its original appearance.
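The appearance-adaptive normalization named in the title can be sketched as an AdaIN-style operation: normalize the pose-driven features, then re-inject the source appearance through its own statistics. The 1-D toy features and function names below are illustrative assumptions, not the paper's actual layer:

```python
# Hedged sketch of appearance-adaptive normalization (AdaIN-style):
# replace the channel statistics of pose-driven features with
# statistics derived from the source appearance.

def mean_std(xs):
    """Mean and (population) standard deviation of a feature vector."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var ** 0.5

def appearance_adaptive_norm(features, appearance, eps=1e-5):
    """Normalize `features`, then rescale/shift with `appearance` stats."""
    f_mean, f_std = mean_std(features)
    a_mean, a_std = mean_std(appearance)
    return [a_std * (f - f_mean) / (f_std + eps) + a_mean for f in features]

# The output keeps the features' relative structure (the pose) but
# adopts the appearance's mean and spread.
out = appearance_adaptive_norm([0.0, 2.0], [10.0, 14.0])  # ~ [10.0, 14.0]
```

In the one-shot setting, this is the mechanism that lets a single appearance image condition the generator at every layer.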
FReeNet: Multi-Identity Face Reenactment
A new triplet perceptual loss is proposed to force the GAG module to learn appearance and geometry information simultaneously, which also enriches facial details of the reenacted images.
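The triplet perceptual loss used here (and in FaceSwapNet above) can be sketched as a standard triplet margin applied in a perceptual feature space. The feature extractor `phi`, the distance, and the margin below are toy stand-ins, not FReeNet's exact formulation:

```python
# Hedged sketch of a triplet perceptual loss: pull the anchor's
# perceptual features toward the positive, push them away from the
# negative, up to a margin. `phi` is a toy stand-in for deep features.

def phi(image):
    """Toy 'perceptual' feature: per-row means of a 2-D image."""
    return [sum(row) / len(row) for row in image]

def l2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_perceptual_loss(anchor, positive, negative, margin=1.0):
    d_pos = l2(phi(anchor), phi(positive))
    d_neg = l2(phi(anchor), phi(negative))
    return max(0.0, d_pos - d_neg + margin)

# Zero loss once the negative is at least `margin` farther than the positive:
easy = triplet_perceptual_loss([[1.0, 1.0]], [[1.0, 1.0]], [[3.0, 3.0]])  # 0.0
```

Because the distance is computed on features rather than pixels, the loss supervises geometry and appearance jointly, which is the claim made above.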
Unified Application of Style Transfer for Face Swapping and Reenactment
This paper introduces a unified end-to-end pipeline for face swapping and reenactment and proposes a novel approach to isolated disentangled representation learning of specific visual attributes in an unsupervised manner.
Thinking the Fusion Strategy of Multi-reference Face Reenactment
This work shows that a simple extension using multiple reference images significantly improves generation quality; the reconstruction task is conducted on a publicly available dataset, and a newly proposed evaluation metric validates that the method achieves better quantitative results.


Automatic Face Reenactment
We propose an image-based, facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target performance.
Face2Face: real-time face capture and reenactment of RGB videos
A novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a Youtube video) that addresses the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling, and re-renders the manipulated output video in a photo-realistic fashion.
Demo of FaceVR: real-time facial reenactment and eye gaze control in virtual reality
We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture.
Real-time expression transfer for facial reenactment
The novelty of the approach lies in the transfer and photorealistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video.
Deep video portraits
The first approach to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor using only an input video.
DeepCoder: Semi-Parametric Variational Autoencoders for Automatic Facial Action Coding
A novel VAE semi-parametric modeling framework, named DeepCoder, is proposed, which combines the modeling power of parametric (convolutional) and nonparametric components, and outperforms the state-of-the-art approaches as well as related VAEs and deep learning models.
Unconstrained Face Alignment via Cascaded Compositional Learning
This work partitions the optimisation space into multiple domains of homogeneous descent and predicts a shape as a composition of estimations from multiple domain-specific regressors, equipping cascaded regressors to handle global shape variation and irregular appearance-shape relations in the unconstrained scenario.
Mnemonic Descent Method: A Recurrent Process Applied for End-to-End Face Alignment
This paper proposes a combined and jointly trained convolutional recurrent neural network architecture that allows end-to-end training of a system that attempts to alleviate the drawbacks of cascaded regression.
Disentangling 3D Pose in a Dendritic CNN for Unconstrained 2D Face Alignment
  • Amit Kumar, R. Chellappa
  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
A single dendritic CNN, termed the Pose Conditioned Dendritic Convolution Neural Network (PCD-CNN), is presented, in which a classification network is followed by a second, modular classification network, trained in an end-to-end fashion to obtain accurate landmark points.
DeepFace: Closing the Gap to Human-Level Performance in Face Verification
This work revisits both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network.
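The piecewise affine alignment mentioned above warps each triangle of the fitted face mesh with its own exact affine map. A minimal sketch of that per-triangle building block in plain Python follows; DeepFace's actual pipeline (explicit 3D model fitting over many fiducials) is far richer than this toy:

```python
# Hedged sketch of the per-triangle step of a piecewise affine warp:
# fit the exact 2-D affine map sending one triangle's vertices onto
# another's, then transform points with it.

def fit_affine(src, dst):
    """Return (a, b, tx, c, d, ty) such that
    x' = a*x + b*y + tx and y' = c*x + d*y + ty,
    fitted exactly from three point correspondences."""
    (x0, y0), (x1, y1), (x2, y2) = src
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

    def solve(v0, v1, v2):
        # Cramer's rule on the 2x2 system for the linear part.
        p = ((v1 - v0) * (y2 - y0) - (v2 - v0) * (y1 - y0)) / det
        q = ((v2 - v0) * (x1 - x0) - (v1 - v0) * (x2 - x0)) / det
        return p, q, v0 - p * x0 - q * y0

    (u0, w0), (u1, w1), (u2, w2) = dst
    a, b, tx = solve(u0, u1, u2)
    c, d, ty = solve(w0, w1, w2)
    return a, b, tx, c, d, ty

def warp(point, params):
    a, b, tx, c, d, ty = params
    x, y = point
    return (a * x + b * y + tx, c * x + d * y + ty)

# Map the unit triangle onto a triangle twice its size:
params = fit_affine([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                    [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)])
warped = warp((0.5, 0.5), params)  # -> (1.0, 1.0)
```

Applying one such map per mesh triangle yields the frontalized, pose-normalized face that the nine-layer network then consumes.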