LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies

@article{Lombardi2021LatentHumanSD,
  title={LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies},
  author={Sandro Lombardi and Bangbang Yang and Tianxing Fan and Hujun Bao and Guofeng Zhang and Marc Pollefeys and Zhaopeng Cui},
  journal={2021 International Conference on 3D Vision (3DV)},
  year={2021},
  pages={278-288}
}
3D representation and reconstruction of human bodies have been studied for a long time in computer vision. Traditional methods rely mostly on parametric statistical linear models, limiting the space of possible bodies to linear combinations. Only recently have some approaches tried to leverage neural implicit representations for human body modeling, and while they demonstrate impressive results, they are either limited in representation capability or not physically meaningful and controllable…


References

SHOWING 1-10 OF 71 REFERENCES
NPMs: Neural Parametric Models for 3D Deformable Shapes
TLDR
This work proposes Neural Parametric Models (NPMs), a novel, learned alternative to traditional, parametric 3D models, which does not require handcrafted, object-specific constraints and learns to disentangle 4D dynamics into latent-space representations of shape and pose, leveraging the flexibility of recent developments in learned implicit functions.
Unsupervised Shape and Pose Disentanglement for 3D Meshes
TLDR
A combination of self-consistency and cross-consistency constraints to learn pose and shape spaces from registered meshes, which incorporates as-rigid-as-possible deformation (ARAP) into the training loop to avoid degenerate solutions.
LEAP: Learning Articulated Occupancy of People
TLDR
Experiments show that the canonicalized occupancy estimation with the learned LBS functions greatly improves the generalization capability of the learned occupancy representation across various human shapes and poses, outperforming existing solutions in all settings.
STAR: Sparse Trained Articulated Human Body Regressor
TLDR
This work defines per-joint pose correctives and learns the subset of mesh vertices that are influenced by each joint movement, which results in more realistic deformations and significantly reduces the number of model parameters to 20% of SMPL.
Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans
TLDR
Neural Body is proposed, a new human body representation which assumes that the learned neural representations at different frames share the same set of latent codes anchored to a deformable mesh, so that the observations across frames can be naturally integrated.
GHUM & GHUML: Generative 3D Human Shape and Articulated Pose Models
TLDR
A statistical, articulated 3D human shape modeling pipeline, within a fully trainable, modular, deep learning framework, that supports facial expression analysis, as well as body shape and pose estimation.
Disentangled Human Body Embedding Based on Deep Hierarchical Neural Network
TLDR
An autoencoder-like network architecture is presented to learn disentangled shape and pose embedding specifically for the 3D human body to improve the reconstruction accuracy and construct a large dataset of human body models with consistent connectivity for the learning of the neural network.
SMPLpix: Neural Avatars from 3D Human Models
TLDR
This work trains a network that directly converts a sparse set of 3D mesh vertices into photorealistic images, alleviating the need for traditional rasterization mechanism and shows the advantage over conventional differentiable renderers both in terms of the level of photorealism and rendering efficiency.
SMPL: a skinned multi-person linear model
TLDR
The Skinned Multi-Person Linear model (SMPL) is a skinned vertex-based model that accurately represents a wide variety of body shapes in natural human poses and is compatible with existing graphics pipelines and rendering engines.
Learning Implicit Fields for Generative Shape Modeling
  • Zhiqin Chen, Hao Zhang
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR
By replacing conventional decoders by the implicit decoder for representation learning and shape generation, this work demonstrates superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.