Face2Face: Real-Time Face Capture and Reenactment of RGB Videos

@article{Thies2016Face2FaceRF,
  title={Face2Face: Real-Time Face Capture and Reenactment of RGB Videos},
  author={Justus Thies and Michael Zollh{\"o}fer and Marc Stamminger and Christian Theobalt and Matthias Nie{\ss}ner},
  journal={2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2016},
  pages={2387-2395}
}
We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video). At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we…
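
For illustration, here is a minimal sketch of the reenactment step under a simplifying assumption: both actors are parameterized with one shared expression blendshape basis, so transferring an expression amounts to evaluating the target's rig with the source's tracked coefficients. The paper's actual sub-space deformation transfer is more involved, and every name, dimension, and value below is hypothetical.

```python
# Minimal sketch of blendshape-based expression transfer (illustrative only,
# not the Face2Face implementation; names, dimensions, and data are made up).
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_expressions = 5000, 76   # the paper tracks 76 expression coefficients

# Hypothetical target rig: neutral geometry plus a basis of expression offsets.
tgt_neutral = rng.normal(size=(n_vertices, 3))
expr_basis = rng.normal(scale=0.01, size=(n_expressions, n_vertices, 3))

def evaluate_rig(neutral, basis, delta):
    """Blendshape model: neutral shape plus a weighted sum of expression offsets."""
    return neutral + np.tensordot(delta, basis, axes=1)

# Expression coefficients tracked on the source actor for the current frame
# (in the paper these come from minimizing a dense photometric energy).
delta_src = rng.uniform(0.0, 1.0, size=n_expressions)

# Reenactment: drive the target's geometry with the source's expression.
reenacted = evaluate_rig(tgt_neutral, expr_basis, delta_src)
print(reenacted.shape)  # (5000, 3)
```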

Citations

Demo of Face2Face: real-time face capture and reenactment of RGB videos

TLDR
A novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video) that addresses the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling and re-renders the manipulated output video in a photo-realistic fashion.

Real-time Face Video Swapping From A Single Portrait

TLDR
A novel high-fidelity real-time method that replaces the face in a target video clip with the face from a single source portrait image and, unlike existing deep-learning-based methods, does not need to pre-train any models.

Demo of FaceVR: real-time facial reenactment and eye gaze control in virtual reality

We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD).

FaceVR: Real-Time Gaze-Aware Facial Reenactment in Virtual Reality

TLDR
The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos.

HeadOn: Real-time Reenactment of Human Portrait Videos

We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, face expression, and eye gaze.

Deep video portraits

TLDR
The first approach to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor using only an input video is presented.

EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

TLDR
EgoFace is presented, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera that allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments.

Real-time 3D neural facial animation from binocular video

TLDR
The system's ability to precisely capture subtle facial motions in unconstrained scenarios is demonstrated, in comparison to competing methods, on a diverse collection of identities, expressions, and real-world environments.

Image-to-Video Generation via 3D Facial Dynamics

TLDR
This paper proposes to “imagine” a face video from a single face image according to the reconstructed 3D face dynamics, aiming to generate a realistic and identity-preserving face video, with precisely predicted pose and facial expression.

One-Shot Face Reenactment on Megapixels

TLDR
This work presents a one-shot and high-resolution face reenactment method called MegaFR, designed to control source images with 3DMM parameters; the proposed method can be considered a controllable StyleGAN as well as a face reenactment method.
...

References

Showing 1-10 of 48 references

Demo of Face2Face: real-time face capture and reenactment of RGB videos

TLDR
A novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video) that addresses the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling and re-renders the manipulated output video in a photo-realistic fashion.

Real-time expression transfer for facial reenactment

TLDR
The novelty of the approach lies in the transfer and photorealistic re-rendering of facial deformations and detail into the target video, such that the newly synthesized expressions are virtually indistinguishable from a real video.

Demo of FaceVR: real-time facial reenactment and eye gaze control in virtual reality

We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD).

Automatic Face Reenactment

We propose an image-based facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target…

Video face replacement

TLDR
A method for replacing facial performances in video that accounts for differences in identity, visual appearance, speech, and timing between source and target videos, and uses a 3D multilinear model to track the facial performance in both videos.
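
As a point of reference, evaluating a multilinear face model of the kind used in the entry above can be sketched in a few lines: a core tensor learned from registered face scans is contracted with identity and expression weight vectors to yield one mesh. All dimensions and data below are invented for illustration.

```python
# Minimal sketch of evaluating a multilinear face model (illustrative only;
# the core tensor, dimensions, and weights here are invented placeholders).
import numpy as np

rng = np.random.default_rng(1)
n_vertices, n_id, n_expr = 3000, 50, 25

# Core tensor from a registered face database:
# (flattened vertex coordinates) x (identity modes) x (expression modes).
core = rng.normal(size=(3 * n_vertices, n_id, n_expr))

def evaluate(core, w_id, w_expr):
    """Contract the core tensor with identity and expression weights."""
    flat = np.einsum('vie,i,e->v', core, w_id, w_expr)  # mode-2 and mode-3 products
    return flat.reshape(-1, 3)                          # one mesh, n_vertices x 3

mesh = evaluate(core, rng.uniform(size=n_id), rng.uniform(size=n_expr))
print(mesh.shape)  # (3000, 3)
```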

Reconstructing detailed dynamic face geometry from monocular video

TLDR
This work presents a new method that captures detailed, dynamic, spatio-temporally coherent 3D face geometry from monocular video without the need for markers, and successfully reconstructs expressive motion including high-frequency face detail such as folds and laugh lines.

Realtime facial animation with on-the-fly correctives

TLDR
It is demonstrated that using an adaptive PCA model not only improves the fitting accuracy for tracking but also increases the expressiveness of the retargeted character.

Automatic acquisition of high-fidelity facial performances using monocular videos

TLDR
A facial performance capture system that automatically captures high-fidelity facial performances using uncontrolled monocular videos and uses per-pixel shading cues to add fine-scale surface details, such as emerging or disappearing wrinkles and folds, to the large-scale facial deformation, improving the accuracy of facial reconstruction.

Face/Off: live facial puppetry

TLDR
A complete integrated system for live facial puppetry is presented that enables high-resolution real-time facial expression tracking with transfer to another person's face; the actor becomes a puppeteer with complete and accurate control over a digital face.

Real-time high-fidelity facial performance capture

TLDR
This work proposes an automatic way to detect and align the local patches required to train the regressors and to run them efficiently, resulting in real-time, high-fidelity facial performance reconstruction with person-specific wrinkle details from a monocular video camera.