Corpus ID: 11616717

Marker-less Facial Motion Capture based on the Parts Recognition

Yasuhiro Akagi, Ryo Furukawa, Ryusuke Sagawa, Koichi Ogawara, Hiroshi Kawasaki. J. WSCG.
A motion capture method is used to capture facial motion to create 3D animations and to recognize facial expressions. […] To overcome this problem, we propose a marker-less motion capture method for facial motions. Since the thickness of the skin varies across facial parts, the motion characteristics of each part also vary. These characteristics make the non-rigid tracking problem more difficult. To address this, we recognize five types of facial parts (nose, mouth, eye, cheek, and obstacle)…
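The abstract suggests that each recognized part should be tracked with its own deformation behavior, since skin thickness differs per part. A minimal sketch of that idea, assuming hypothetical part labels and illustrative stiffness values (not the authors' actual parameters), might map per-point part labels to regularization weights for the non-rigid tracker:

```python
# Hypothetical sketch: give each facial part its own rigidity weight, since
# skin thickness (and hence deformation) differs per part. The label set
# follows the abstract; the numeric values are illustrative assumptions.

PART_STIFFNESS = {
    "nose": 0.9,      # thin skin over bone: nearly rigid
    "cheek": 0.3,     # thick, soft tissue: highly deformable
    "mouth": 0.2,
    "eye": 0.4,
    "obstacle": 0.0,  # occluders (hair, glasses): excluded from tracking
}

def per_point_stiffness(labels):
    """Map per-point part labels to regularization weights for non-rigid
    tracking; unknown labels fall back to a neutral 0.5."""
    return [PART_STIFFNESS.get(label, 0.5) for label in labels]
```

A non-rigid registration step could then penalize deformation at each point in proportion to its weight, so the nose stays near-rigid while the cheek is allowed to deform freely.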
1 Citation


Body Parts Estimation for Motion Capture System Using Multiple Depth Sensors

A body-part estimation method for the proposed motion capture system with multiple depth sensors is presented, and the estimation accuracy of different features for Random Forests is compared.



Multi-scale capture of facial geometry and motion

A novel multi-scale representation and acquisition method for the animation of high-resolution facial geometry and wrinkles by augmenting a traditional marker-based facial motion-capture system by two synchronized video cameras to track expression wrinkles.

Real Time Feature Based 3-D Deformable Face Tracking

A hierarchical parameter estimation algorithm is developed to robustly estimate both rigid and non-rigid 3D parameters, and the importance of both feature fusion and hierarchical parameter estimation for reliably tracking 3D face deformation is shown.

Pose-space animation and transfer of facial details

This paper presents a novel method for real-time animation of highly detailed facial expressions based on a multi-scale decomposition of facial geometry into large-scale motion and fine-scale details.

Leveraging motion capture and 3D scanning for high-fidelity facial performance acquisition

This paper reconstructs high-fidelity 3D facial performances by combining motion capture data with the minimal set of face scans in the blendshape interpolation framework to efficiently build dense consistent surface correspondences across all the face scans.

Realtime performance-based facial animation

A novel face tracking algorithm that combines geometry and texture registration with pre-recorded animation priors in a single optimization is introduced that demonstrates that compelling 3D facial dynamics can be reconstructed in realtime without the use of face markers, intrusive lighting, or complex scanning hardware.

Real-time facial feature detection using conditional regression forests

The authors' experiments demonstrate that conditional regression forests outperform regression forests for facial feature detection, achieving close-to-human accuracy while processing images in real time.

Real-time human pose recognition in parts from single depth images

This work takes an object recognition approach, designing an intermediate body parts representation that maps the difficult pose estimation problem into a simpler per-pixel classification problem, and generates confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes.
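The per-pixel classification above is driven by simple pairwise depth-comparison features, where probe offsets are scaled by the inverse depth at the pixel so the feature is stable as the subject moves toward or away from the camera. A minimal sketch under those assumptions (the function name and the out-of-bounds constant are illustrative, not from the paper):

```python
def depth_feature(depth, x, y, u, v):
    """Depth-invariant pairwise comparison feature: probe two offsets u and v,
    each scaled by 1/depth at (x, y), and return the depth difference.
    `depth` is a 2D list of depth values in meters (a stand-in for a depth map)."""
    d = depth[y][x]

    def probe(offset):
        dx, dy = offset
        px, py = x + int(dx / d), y + int(dy / d)
        if 0 <= py < len(depth) and 0 <= px < len(depth[0]):
            return depth[py][px]
        return 1e6  # large constant for out-of-bounds probes (background)

    return probe(u) - probe(v)
```

A random forest then thresholds many such features at each pixel to assign a body-part label, after which per-part pixel clusters yield confidence-scored 3D joint proposals.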

Robust single-view geometry and motion reconstruction

The method makes use of a smooth template that provides a crude approximation of the scanned object and serves as a geometric and topological prior for reconstruction, allowing faithful recovery of small-scale shape and motion features and leading to a high-quality reconstruction.

Animated deformations with radial basis functions

A novel approach to deforming polygonal models using Radial Basis Functions (RBFs) produces localized real-time deformations, creating facial expressions for virtual-environment applications such as immersive teleconferencing or entertainment.
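As a rough illustration of RBF-based deformation (a sketch, not the cited paper's implementation), one can interpolate control-point displacements over a mesh with a Gaussian kernel; the kernel choice and the `sigma` parameter here are assumptions:

```python
import numpy as np

def rbf_deform(control_pts, displacements, vertices, sigma=1.0):
    """Deform vertices so each control point moves by its given displacement,
    interpolated smoothly elsewhere with Gaussian RBFs."""
    C = np.asarray(control_pts, float)    # (m, 3) control points
    D = np.asarray(displacements, float)  # (m, 3) target displacements
    V = np.asarray(vertices, float)       # (n, 3) mesh vertices
    phi = lambda r: np.exp(-(r / sigma) ** 2)
    # Solve Phi @ W = D so the deformation matches exactly at control points.
    Phi = phi(np.linalg.norm(C[:, None] - C[None, :], axis=-1))
    W = np.linalg.solve(Phi, D)           # (m, 3) RBF weights
    # Evaluate the interpolant at every vertex and add the displacement.
    K = phi(np.linalg.norm(V[:, None] - C[None, :], axis=-1))
    return V + K @ W
```

Because the Gaussian kernel decays quickly, each control point's influence stays local, which is what makes the approach suitable for localized, real-time facial deformations.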