Corpus ID: 233169178

Everything's Talkin': Pareidolia Face Reenactment

Authors: Linsen Song, Wayne Wu, Chaoyou Fu, Chen Qian, Chen Change Loy, Ran He
We present a new application direction named Pareidolia Face Reenactment, defined as animating a static illusory face to move in tandem with a human face in a video. Owing to the large differences between pareidolia face reenactment and traditional human face reenactment, two main challenges arise, i.e., shape variance and texture variance. In this work, we propose a novel Parametric Unsupervised Reenactment Algorithm to tackle these two challenges. Specifically, we propose to… 
Deep Person Generation: A Survey from the Perspective of Face, Pose and Cloth Synthesis
This survey summarizes the scope of person generation and systematically reviews recent progress and technical trends in deep person generation, covering three major tasks: talking-head generation (face), pose-guided person generation (pose), and garment-oriented person generation (cloth).
StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN
Fei Yin, Tsinghua Shenzhen International Graduate School, Tsinghua University, China; Yong Zhang, Tencent AI Lab, China; Xiaodong Cun, Tencent AI Lab, China; Mingdeng Cao, Tsinghua Shenzhen…
Cloud2Sketch: Augmenting Clouds with Imaginary Sketches
This work proposes Cloud2Sketch, a novel self-supervised pipeline that augments clouds in the sky with imagined sketches, using built-in free-form deformation to align the sketches with cloud contours.
Multi-Domain Multi-Definition Landmark Localization for Small Datasets
A Vision Transformer encoder paired with a novel decoder that uses a definition-agnostic, shared landmark semantic group structured prior, learnt as the authors train on more than one dataset concurrently, enabling facial landmark localization on new and/or smaller standard datasets.


ReenactGAN: Learning to Reenact Faces via Boundary Transfer
The proposed method, known as ReenactGAN, is capable of transferring facial movements and expressions from an arbitrary person's monocular video input to a target person's video, and can perform photo-realistic face reenactment.
Automatic Face Reenactment
We propose an image-based facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target…
FSGAN: Subject Agnostic Face Swapping and Reenactment
A novel recurrent neural network (RNN)-based approach to face reenactment that adjusts for both pose and expression variations, can be applied to a single image or a video sequence, and uses a novel Poisson blending loss that combines Poisson optimization with a perceptual loss.
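The gradient-domain idea behind a Poisson-style blending term can be illustrated with a minimal sketch. The function and weight below (`poisson_style_loss`, `lam`) are hypothetical names, not FSGAN's actual implementation, and the perceptual component (typically deep features from a pretrained network such as VGG) is omitted; only the pixel and gradient-matching terms are shown.

```python
import numpy as np

def image_grads(img):
    # Finite-difference gradients along width and height.
    dx = img[:, 1:] - img[:, :-1]
    dy = img[1:, :] - img[:-1, :]
    return dx, dy

def poisson_style_loss(blended, target, lam=0.1):
    # Pixel term: L1 distance between the blended result and the target.
    pix = np.abs(blended - target).mean()
    # Gradient-domain (Poisson-style) term: matching image gradients
    # penalizes visible seams at the blending boundary.
    bdx, bdy = image_grads(blended)
    tdx, tdy = image_grads(target)
    grad = np.abs(bdx - tdx).mean() + np.abs(bdy - tdy).mean()
    return pix + lam * grad
```

In practice such a term would be computed on batched tensors inside the training loop and summed with the perceptual loss; the sketch only conveys why gradient matching suppresses blending seams that a pure pixel loss would tolerate.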
Face pareidolia and its neural mechanism
Face pareidolia refers to the compelling illusion of perceiving facial features on inanimate objects, such as an illusory face on the surface of the moon. Both top-down and bottom-up factors can modulate…
Bringing portraits to life
A technique to automatically animate a still portrait, making it possible for the subject in the photo to come to life and express various emotions; this gives rise to reactive profiles, where people in still images can automatically interact with their viewers.
Learning Identity-Invariant Motion Representations for Cross-ID Face Reenactment
This paper proposes a unique network, CrossID-GAN, to perform multi-ID face reenactment; qualitative and quantitative results confirm the robustness and effectiveness of the model.
Everybody’s Talkin’: Let Me Talk as You Want
A method to edit target portrait footage by taking a sequence of audio as input to synthesize a photo-realistic video; the method is end-to-end learnable and robust to voice variations in the source audio.
Face2Face: Real-Time Face Capture and Reenactment of RGB Videos
A novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video) that addresses the under-constrained problem of facial identity recovery from monocular video via non-rigid model-based bundling, and re-renders the manipulated output video in a photo-realistic fashion.
FaceVR: Real-Time Gaze-Aware Facial Reenactment in Virtual Reality
The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos.
Real-time expression transfer for facial reenactment
The novelty of the approach lies in the transfer and photorealistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video.