Corpus ID: 235458378

To fit or not to fit: Model-based Face Reconstruction and Occlusion Segmentation from Weak Supervision

CHUNLU LI, Dept. of Automation, Southeast University, China; Dept. of Mathematics and Informatics, University of Basel, Switzerland
ANDREAS MOREL-FORSTER, Dept. of Mathematics and Informatics, University of Basel, Switzerland
THOMAS VETTER, Dept. of Mathematics and Informatics, University of Basel, Switzerland
BERNHARD EGGER*, Chair of Visual Computing, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany; Dept. of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA…

Spatio-Frequency Decoupled Weak-Supervision for Face Reconstruction

A spatio-frequency decoupled weak-supervision method for face reconstruction is proposed, which applies losses in both the spatial and frequency domains so that the reconstruction, based on the output shape and texture, approaches a photorealistic result.

Towards Metrical Reconstruction of Human Faces

This work takes advantage of a face recognition network pretrained on a large-scale 2D image dataset, which provides distinct features for different faces and is robust to changes in expression, illumination, and camera; the face shape estimator is trained in a supervised fashion, inheriting the robustness and generalization of the face recognition network.

FaceOcc: A Diverse, High-quality Face Occlusion Dataset for Human Face Extraction

A novel face occlusion dataset with manually labeled occlusion types, collected from CelebA-HQ and the internet, is presented; combined with the attribute masks in CelebAMask-HQ, it is used to train a straightforward face segmentation model that nevertheless obtains SOTA performance, convincingly demonstrating the value of the proposed dataset.

State of the Art in Dense Monocular Non-Rigid 3D Reconstruction

This survey focuses on state-of-the-art methods for dense non-rigid 3D reconstruction of various deformable objects and composite scenes from monocular videos or sets of monocular views.

"Look Ma, No Landmarks!" - Unsupervised, Model-Based Dense Face Alignment

This paper shows how to train an image-to-image network to predict dense correspondence between a face image and a 3D morphable model using only the model itself for supervision, and shows that both geometric and photometric parameters can be inferred directly from the correspondence map using linear least squares together with a novel inverse spherical harmonic lighting model.

Towards Fast, Accurate and Stable 3D Dense Face Alignment

A novel regression framework is proposed that strikes a balance among speed, accuracy, and stability, together with a meta-joint optimization strategy that dynamically regresses a small set of 3DMM parameters, greatly enhancing speed and accuracy simultaneously.

Self-Supervised Multi-level Face Model Learning for Monocular Reconstruction at Over 250 Hz

The first approach that jointly learns a regressor for face shape, expression, reflectance, and illumination on the basis of a concurrently learned parametric face model is presented; it compares favorably to the state of the art in reconstruction quality, generalizes better to real-world faces, and runs at over 250 Hz.

Semantic 3D Reconstruction of Heads

A novel approach is presented that jointly reconstructs the geometry of a human head and semantically segments it into labels such as skin, hair, and eyebrows; an automatic alignment procedure for the shape prior formulation used is also proposed.

Accurate 3D Face Reconstruction With Weakly-Supervised Learning: From Single Image to Image Set

A novel deep 3D face reconstruction approach is proposed that leverages a robust hybrid loss function for weakly-supervised learning, taking both low-level and perception-level information into account for supervision, and that performs multi-image face reconstruction by exploiting complementary information from different images for shape aggregation.

Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency

This work proposes an occlusion-aware view synthesis method to apply multi-view geometry consistency to self-supervised learning, and designs three novel loss functions for multi-view consistency: a pixel consistency loss, a depth consistency loss, and a facial landmark-based epipolar loss.

Occlusion Resistant Network for 3D Face Reconstruction

A novel context-learning-based distillation approach is proposed to tackle occlusions in face images, using a weak model (unsuited to occluded face images) to train a network that is highly robust to partially and fully occluded face images.

Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing

This paper proposes a new method that greatly improves reconstruction quality and robustness in general scenes by combining a CNN encoder with a differentiable ray tracer, enabling a big leap forward in the reconstruction quality of shape, appearance, and lighting, even in scenes with difficult illumination.

Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network

A straightforward method is presented that simultaneously reconstructs the 3D facial structure and provides dense alignment, surpassing other state-of-the-art methods on both reconstruction and alignment tasks by a large margin.

Occlusion-Aware 3D Morphable Models and an Illumination Prior for Face Image Analysis

This work proposes a fully automated, probabilistic, and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup, together with a RANSAC-based robust illumination estimation technique.