JNR: Joint-based Neural Rig Representation for Compact 3D Face Modeling

@inproceedings{Vesdapunt2020JNRJN,
  title={JNR: Joint-based Neural Rig Representation for Compact 3D Face Modeling},
  author={Noranart Vesdapunt and Mitch Rundle and Hsiang-Tao Wu and Baoyuan Wang},
  booktitle={ECCV},
  year={2020}
}
In this paper, we introduce a novel approach to learning a 3D face model using a joint-based face rig and a neural skinning network. Thanks to the joint-based representation, our model enjoys significant advantages over prior blendshape-based models. First, it is very compact: our model is orders of magnitude smaller than prior models while still retaining strong modeling capacity. Second, because each joint has its own semantic meaning, interactive facial geometry editing is easier and more intuitive. Third…
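The abstract only sketches the representation, so the following is a minimal illustration of the general idea of a joint-based rig driven by a neural skinning network: a small MLP predicts per-vertex skinning weights, and standard linear blend skinning poses the template mesh from per-joint rigid transforms. The names, layer sizes, and use of PyTorch are assumptions for illustration, not the authors' implementation.

# Hedged sketch: neural skinning weights + linear blend skinning (LBS).
import torch
import torch.nn as nn

class SkinningMLP(nn.Module):
    """Predicts per-vertex skinning weights from rest-pose vertex positions."""
    def __init__(self, n_joints: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, n_joints),
        )

    def forward(self, rest_verts: torch.Tensor) -> torch.Tensor:
        # rest_verts: (V, 3) -> weights: (V, J), each row sums to 1
        return torch.softmax(self.net(rest_verts), dim=-1)

def lbs(rest_verts, weights, joint_transforms):
    """Linear blend skinning.
    rest_verts: (V, 3), weights: (V, J), joint_transforms: (J, 4, 4)."""
    V = rest_verts.shape[0]
    homo = torch.cat([rest_verts, torch.ones(V, 1)], dim=-1)               # (V, 4)
    per_vert_T = torch.einsum('vj,jab->vab', weights, joint_transforms)    # (V, 4, 4)
    posed = torch.einsum('vab,vb->va', per_vert_T, homo)                   # (V, 4)
    return posed[:, :3]

# Usage with illustrative sizes: identity joint transforms reproduce the rest pose.
verts = torch.randn(5000, 3)
mlp = SkinningMLP(n_joints=50)
transforms = torch.eye(4).repeat(50, 1, 1)
posed = lbs(verts, mlp(verts), transforms)    # (5000, 3)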
I M Avatar: Implicit Morphable Head Avatars from Videos
TLDR
This work proposes IMavatar, a novel method for learning implicit head avatars from monocular videos that improves geometry and covers a more complete expression space compared to state-of-the-art methods.
A Revisit of Shape Editing Techniques: from the Geometric to the Neural Viewpoint
TLDR
These techniques decompose the 3D shape into a set of curves and adopt simple methods to edit shapes while retaining geometric features, and linearize the constraints that ensure regional and intuitive control in the editing process, making real-time or interactive editing possible.
DAG amendment for inverse control of parametric shapes
TLDR
This paper introduces an amendment process for the underlying directed acyclic graph (DAG) of a parametric shape that enables local differentiation of the shape w.r.t. its hyper-parameters, which it leverages to provide interactive direct manipulation of the output.
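As a rough illustration of the differentiation-based inverse control described in the TLDR (not the paper's DAG-amendment algorithm), the sketch below turns a dragged handle point into a hyper-parameter update by solving a damped least-squares system with the Jacobian of the point with respect to the parameters. The parametric shape function and all names are hypothetical.

# Hedged sketch: inverse control of a parametric shape via its Jacobian.
import numpy as np

def shape_point(params):
    """Hypothetical parametric shape: one handle point on a superellipse
    controlled by two hyper-parameters (radius, exponent)."""
    radius, exponent = params
    t = 0.3  # fixed location of the handle along the curve
    return radius * np.array([np.cos(t) ** exponent, np.sin(t) ** exponent])

def jacobian(f, params, eps=1e-5):
    """Finite-difference Jacobian of f at params."""
    base = f(params)
    J = np.zeros((base.size, params.size))
    for i in range(params.size):
        step = params.copy()
        step[i] += eps
        J[:, i] = (f(step) - base) / eps
    return J

def drag_handle(params, target, iters=20, damping=1e-3):
    """Move the handle point toward `target` by updating the hyper-parameters."""
    for _ in range(iters):
        residual = target - shape_point(params)
        J = jacobian(shape_point, params)
        # Damped least squares keeps each parameter update small and stable.
        delta = np.linalg.solve(J.T @ J + damping * np.eye(params.size),
                                J.T @ residual)
        params = params + delta
    return params

new_params = drag_handle(np.array([1.0, 1.0]), target=np.array([1.2, 0.5]))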

References

Showing 1-10 of 44 references
An anatomically-constrained local deformation model for monocular face capture
TLDR
A new anatomically-constrained local face model and fitting approach for tracking 3D faces from 2D motion data at very high quality; subspace skin-thickness constraints are introduced into this model, which restrict the face to valid expressions and help counteract depth ambiguities in monocular tracking.
Reconstruction of Personalized 3D Face Rigs from Monocular Video
TLDR
A novel approach for the automatic creation of a personalized high-quality 3D face rig of an actor from just monocular video data, based on three distinct layers that model the actor's facial shape as well as capture his person-specific expression characteristics at high fidelity, ranging from coarse-scale geometry to fine-scale static and transient detail on the scale of folds and wrinkles.
FML: Face Model Learning From Videos
TLDR
This work proposes multi-frame video-based self-supervised training of a deep network that learns a face identity model both in shape and appearance while jointly learning to reconstruct 3D faces.
Self-Supervised Multi-level Face Model Learning for Monocular Reconstruction at Over 250 Hz
TLDR
The first approach that jointly learns a regressor for face shape, expression, reflectance, and illumination on the basis of a concurrently learned parametric face model is presented; it compares favorably to the state of the art in terms of reconstruction quality, generalizes better to real-world faces, and runs at over 250 Hz.
Generating 3D faces using Convolutional Mesh Autoencoders
TLDR
This work introduces a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface, and shows that replacing the expression space of an existing state-of-the-art face model with this model achieves a lower reconstruction error.
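For context, the spectral convolutions referred to above are commonly approximated with Chebyshev polynomials of the mesh graph Laplacian. The layer below is an illustrative sketch of that building block, not the paper's released implementation; the Laplacian scaling, sizes, and names are assumptions.

# Hedged sketch: Chebyshev spectral graph convolution over mesh vertex features.
import torch
import torch.nn as nn

class ChebConv(nn.Module):
    """Spectral graph convolution approximated by Chebyshev polynomials (assumes K >= 2)."""
    def __init__(self, in_ch: int, out_ch: int, K: int = 6):
        super().__init__()
        self.K = K
        self.weight = nn.Parameter(torch.randn(K, in_ch, out_ch) * 0.01)

    def forward(self, x: torch.Tensor, lap: torch.Tensor) -> torch.Tensor:
        # x: (V, in_ch) vertex features; lap: (V, V) rescaled mesh Laplacian
        Tx_prev, Tx = x, lap @ x
        out = Tx_prev @ self.weight[0] + Tx @ self.weight[1]
        for k in range(2, self.K):
            Tx_next = 2 * (lap @ Tx) - Tx_prev   # Chebyshev recurrence
            out = out + Tx_next @ self.weight[k]
            Tx_prev, Tx = Tx, Tx_next
        return out

# Usage on a toy mesh: the identity stands in for a properly rescaled Laplacian.
V = 100
features = ChebConv(3, 16)(torch.randn(V, 3), torch.eye(V))   # (V, 16)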
High-Quality Face Capture Using Anatomical Muscles
TLDR
This work proposes modifying a recently developed, rather expressive muscle-based system to make it fully differentiable, which allows this physically robust and anatomically accurate muscle model to be conveniently driven by an underlying blendshape basis.
Combining 3D Morphable Models: A Large Scale Face-And-Head Model
TLDR
This work proposes two methods for combining two or more 3DMMs that are built from different templates that perhaps only partly overlap, have different representation capabilities, and are built from different datasets that may not be publicly available.
Learning a model of facial shape and expression from 4D scans
TLDR
FLAME (Faces Learned with an Articulated Model and Expressions) is low-dimensional but more expressive than the FaceWarehouse model and the Basel Face Model, and is compared to these models by fitting them to static 3D scans and 4D sequences using the same optimization method.
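To make the "low-dimensional but expressive" comparison concrete, statistical face models of this kind reconstruct vertices as a template plus linear shape and expression offsets. The toy sketch below shows only that linear part (the model's joint articulation and pose correctives are omitted), and the random bases and sizes are placeholders rather than the published model.

# Hedged sketch: linear shape + expression reconstruction from low-dimensional codes.
import numpy as np

n_verts, n_shape, n_expr = 5023, 300, 100      # illustrative sizes
template   = np.zeros((n_verts, 3))            # mean face (placeholder)
shape_dirs = np.random.randn(n_verts, 3, n_shape) * 1e-3   # identity basis
expr_dirs  = np.random.randn(n_verts, 3, n_expr) * 1e-3    # expression basis

def reconstruct(shape_coeffs, expr_coeffs):
    """Return vertices (n_verts, 3) from low-dimensional shape/expression codes."""
    return template + shape_dirs @ shape_coeffs + expr_dirs @ expr_coeffs

verts = reconstruct(np.random.randn(n_shape), np.random.randn(n_expr))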
Dictionary Learning Based 3D Morphable Model Construction for Face Recognition with Varying Expression and Pose
TLDR
A new approach for constructing a 3D morphable model (3DMM) is proposed: the model is built by learning a dictionary of basis components instead of the traditional PCA decomposition, and its application to face recognition with varying expression and pose is evaluated experimentally.
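A hedged sketch of the underlying idea, learning a (possibly overcomplete, sparse) dictionary of basis components in place of a PCA basis, using scikit-learn as a stand-in; the paper's own formulation and solver may differ, and the data and sizes here are synthetic placeholders.

# Hedged sketch: dictionary learning vs. PCA for face-shape basis components.
import numpy as np
from sklearn.decomposition import PCA, MiniBatchDictionaryLearning

faces = np.random.randn(200, 3 * 1000)     # 200 toy scans, 1000 vertices each
mean = faces.mean(axis=0)
centered = faces - mean

pca_basis = PCA(n_components=50).fit(centered).components_           # orthogonal basis
dico = MiniBatchDictionaryLearning(n_components=80, alpha=1.0).fit(centered)

codes = dico.transform(centered)            # sparse per-scan coefficients
recon = codes @ dico.components_ + mean     # approximate reconstruction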
A 3D Face Model for Pose and Illumination Invariant Face Recognition
TLDR
This paper publishes a generative 3D shape and texture model, the Basel Face Model (BFM), demonstrates its application to several face recognition tasks, and publishes a set of detailed recognition and reconstruction results on standard databases to allow complete algorithm comparisons.