SfSNet: Learning Shape, Reflectance and Illuminance of Faces 'in the Wild'

@article{Sengupta2018SfSNetLS,
  title={SfSNet: Learning Shape, Reflectance and Illuminance of Faces 'in the Wild'},
  author={Soumyadip Sengupta and Angjoo Kanazawa and Carlos D. Castillo and David W. Jacobs},
  journal={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2018},
  pages={6296-6305}
}
We present SfSNet, an end-to-end learning framework for producing an accurate decomposition of an unconstrained human face image into shape, reflectance and illuminance. SfSNet consists of a new decomposition architecture with residual blocks that learns a complete separation of albedo and normal. This is used along with the original image to predict lighting. SfSNet produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and…
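
The image-formation model underlying SfSNet is Lambertian reflectance with second-order spherical harmonics (SH) lighting, nine coefficients per color channel: the reconstructed image is the predicted albedo multiplied by shading computed from the predicted normals and the SH light. Below is a minimal NumPy sketch of that recomposition step; function names are illustrative, and the Lambertian attenuation constants are assumed to be folded into the learned lighting coefficients.

```python
import numpy as np

def sh_basis(normals):
    """Evaluate the 9 second-order real SH basis functions
    (Ramamoorthi & Hanrahan constants) at unit surface normals.

    normals: (H, W, 3) array of unit normals.
    returns: (H, W, 9) SH basis values.
    """
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),   # Y_00
        0.488603 * y,                 # Y_1-1
        0.488603 * z,                 # Y_10
        0.488603 * x,                 # Y_11
        1.092548 * x * y,             # Y_2-2
        1.092548 * y * z,             # Y_2-1
        0.315392 * (3 * z**2 - 1),    # Y_20
        1.092548 * x * z,             # Y_21
        0.546274 * (x**2 - y**2),     # Y_22
    ], axis=-1)

def render_lambertian(albedo, normals, light):
    """Recompose the image as albedo * shading.

    albedo:  (H, W, 3) predicted reflectance
    normals: (H, W, 3) predicted unit surface normals
    light:   (3, 9) SH lighting coefficients, one set per color channel
    """
    basis = sh_basis(normals)                         # (H, W, 9)
    shading = np.einsum('hwk,ck->hwc', basis, light)  # (H, W, 3)
    return albedo * shading
```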

Hybrid Face Reflectance, Illumination, and Shape From a Single Image

This work proposes HyFRIS-Net, which jointly estimates hybrid reflectance and illumination models, together with a refined face shape, from a single unconstrained face image. Estimation is performed in a pre-defined texture space so that photometric face appearance is modeled in both parametric and non-parametric spaces for efficient learning.

Learning Inverse Rendering of Faces from Real-world Videos

This paper proposes a weakly supervised approach that trains the model on real face videos, based on the assumption that albedo and normals are consistent across frames, thus bridging the gap between real and synthetic face images.

S2F2: Self-Supervised High Fidelity Face Reconstruction from Monocular Image

This work achieves, for the first time, high-fidelity face reconstruction using self-supervised learning only, solving the challenging problem of decoupling face reflectance from geometry from a single image at high computational speed.

High-fidelity facial reflectance and geometry inference from an unconstrained image

A deep learning-based technique infers high-quality facial reflectance and geometry from a single unconstrained image of the subject, which may contain partial occlusions and arbitrary illumination conditions, and demonstrates the rendering of high-fidelity 3D avatars of a variety of subjects captured under different lighting conditions.

Face Inverse Rendering from Single Images in the Wild

This paper proposes a novel face inverse rendering framework that relies on neither complex capture devices nor labeled training data; instead, it learns reflectance, shape, and illuminance from physical constraints.

FML: Face Model Learning From Videos

This work proposes multi-frame video-based self-supervised training of a deep network that learns a face identity model both in shape and appearance while jointly learning to reconstruct 3D faces.

A Deep Facial BRDF Estimation Method Based on Image Translation

This article proposes an image-translation-based method for estimating the facial reflectance properties of a single portrait image, achieving superior performance compared to state-of-the-art methods.

AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs

The first method able to reconstruct photorealistic, render-ready 3D facial geometry and BRDF from a single 'in-the-wild' image is introduced; it outperforms existing methods by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image, which can be rendered in various applications and bridge the uncanny valley.

Learning Physically-based Material and Lighting Decompositions for Face Editing

To allow fast and controllable reflectance and lighting editing, a physically-based decomposition is learned through deep priors from path-traced portrait images; it represents the true appearance function better than simpler baseline methods, leading to better generalization and higher-quality editing.

DeRenderNet: Intrinsic Image Decomposition of Urban Scenes with Shape-(In)dependent Shading Rendering

Compared with state-of-the-art intrinsic image decomposition methods, DeRenderNet produces shadow-free albedo maps with clean details and accurately predicts shadows in the shape-independent shading, which is shown to be effective in re-rendering and in improving the accuracy of high-level vision tasks for urban scenes.
...

References

Showing 1-10 of 44 references

Deep Lambertian Networks

A multilayer generative model whose latent variables include the albedo, surface normals, and the light source is introduced, and it is demonstrated that this model both generalizes and improves over standard baselines in one-shot face recognition.

Photorealistic Facial Texture Inference Using Deep Neural Networks

A data-driven inference method is presented that can synthesize a photorealistic texture map of a complete 3D face model given a partial 2D view of a person in the wild; successful face reconstructions from a wide range of low-resolution input images are demonstrated.

Self-Supervised Intrinsic Image Decomposition

This paper proposes a model that joins an image decomposition pipeline, which predicts reflectance, shape, and lighting conditions from a single image, with a recombination function: a learned shading model recomposes the original input from the intrinsic image predictions, so the network can use unsupervised reconstruction error as an additional signal to improve its intermediate representations.
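
As a concrete illustration of this recombination idea, here is a minimal sketch of the unsupervised reconstruction signal, reusing the hypothetical render_lambertian sketch above; in practice the recomposition is implemented in a differentiable framework so the photometric error backpropagates into the decomposition network.

```python
import numpy as np

def reconstruction_loss(image, albedo, normals, light):
    """Self-supervised signal: L1 photometric error between the
    observed image and the recomposition of the predicted intrinsics.
    Relies on the (hypothetical) render_lambertian sketch above."""
    recon = render_lambertian(albedo, normals, light)  # albedo * SH shading
    return np.mean(np.abs(recon - image))
```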

MoFA: Model-Based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction

A novel model-based deep convolutional autoencoder is presented that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image and can be trained end-to-end in an unsupervised manner, which makes training on very large real-world datasets feasible.

Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation

An Image-to-Image translation network that jointly maps the input image to a depth image and a facial correspondence map can be utilized to provide high quality reconstructions of diverse faces under extreme expressions, using a purely geometric refinement process.

Neural Face Editing with Intrinsic Image Disentangling

An end-to-end generative adversarial network is proposed that infers a face-specific disentangled representation of intrinsic face properties, including shape, albedo, lighting, and an alpha matte, and it is shown that this network can be trained on in-the-wild images by incorporating an in-network physically-based image formation module and appropriate loss functions.

Face reconstruction in the wild

This work addresses the problem of reconstructing 3D face models from large unstructured photo collections, e.g., obtained by Google image search or from personal photo collections in iPhoto, and leverages multi-image shading, but unlike traditional photometric stereo approaches, allows for changes in viewpoint and shape.

Direct Intrinsics: Learning Albedo-Shading Decomposition by Convolutional Regression

The strategy is to learn a convolutional neural network that directly predicts output albedo and shading channels from an input RGB image patch, which outperforms all prior work, including methods that rely on RGB+Depth input.

Adaptive 3D Face Reconstruction from Unconstrained Photo Collections

This paper fits a 3D Morphable Model to form a personalized template and develops a novel photometric stereo formulation, under a coarse-to-fine scheme, to adapt to low quality photo collections with fewer images.

Real-Time Facial Segmentation and Performance Capture from RGB Input

A state-of-the-art regression-based facial tracking framework is adopted, with segmented face images as training data, and accurate, uninterrupted facial performance capture is demonstrated in the presence of extreme occlusion and even side views.