TightCap: 3D Human Shape Capture with Clothing Tightness Field

@article{Chen2022TightCap3H,
  title={TightCap: 3D Human Shape Capture with Clothing Tightness Field},
  author={Xin Chen and Anqi Pang and Wei Yang and Lan Xu and Jingyi Yu},
  journal={ACM Transactions on Graphics},
  year={2022}
}
In this article, we present TightCap, a data-driven scheme to accurately capture both the human shape and the dressed garments from only a single three-dimensional (3D) human scan, which enables numerous applications such as virtual try-on, biometrics, and body evaluation. To handle the severe variations of human poses and garments, we propose to model the clothing tightness field: the displacements from the garments to the underlying human shape, encoded implicitly in the global UV texturing domain. To this end, we…
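The core idea of a tightness field, a per-texel displacement from the garment surface to the body surface stored in the UV domain, can be sketched with a toy numpy example. All names, shapes, and values below are hypothetical stand-ins, not taken from the TightCap implementation:

```python
import numpy as np

# Illustrative sketch only: a clothing "tightness field" stored in the UV
# domain, i.e. a per-texel displacement from the garment surface to the
# underlying body surface.

H, W = 4, 4  # a tiny UV map for demonstration
rng = np.random.default_rng(0)

# Stand-ins for garment and body surface points sampled at each UV texel
# (in practice these would come from a registered 3D scan).
garment_pos = rng.normal(size=(H, W, 3))
body_pos = garment_pos - np.abs(rng.normal(scale=0.02, size=(H, W, 3)))

# The tightness field: per-texel garment-to-body displacement.
tightness = garment_pos - body_pos  # shape (H, W, 3)

# Given a dressed scan and the field, recovering the body is a subtraction.
recovered_body = garment_pos - tightness
assert np.allclose(recovered_body, body_pos)

# A scalar tightness (distance) per texel, e.g. for virtual try-on heatmaps.
tightness_mag = np.linalg.norm(tightness, axis=-1)
print(tightness_mag.shape)  # (4, 4)
```

Storing the field in UV space rather than on the mesh itself is what makes the representation pose- and topology-agnostic across different scans.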
Few-shot Neural Human Performance Rendering from Sparse RGBD Videos
This paper proposes a few-shot neural human rendering approach (FNHR) from only sparse RGBD inputs, which exploits temporal and spatial redundancy to generate photo-realistic free-view output of human activities.
Image-Guided Human Reconstruction via Multi-Scale Graph Transformation Networks
This paper proposes an efficient and effective method using a hierarchical graph transformation network to infer 3D clothed human models with consistent topologies for various poses, achieving more plausible and complete 3D human reconstruction from a single image than several state-of-the-art methods.
Neural Free-Viewpoint Performance Rendering under Complex Human-object Interactions
Guoxing Sun, Xin Chen, +6 authors, Jingyi Yu. ACM Multimedia, 2021.
A neural human performance capture and rendering system that generates both high-quality geometry and photo-realistic texture of humans and objects under challenging interaction scenarios, in arbitrary novel views, from only sparse RGB streams.

References

Showing 1-10 of 90 references.
The Naked Truth: Estimating Body Shape Under Clothing
Results on a novel database of thousands of images of clothed and "naked" subjects, as well as sequences from the HumanEva dataset, suggest the method may be accurate enough for biometric shape analysis in video.
Multi-Garment Net: Learning to Dress 3D People From Images
Presents Multi-Garment Network, a method to predict body shape and clothing, layered on top of the SMPL model, from a few frames of a video, allowing it to predict garment geometry, relate it to the body shape, and transfer it to new body shapes and poses.
A Generative Model of People in Clothing
Presents the first image-based generative model of people in clothing for the full body, which sidesteps the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people, and is learned from a large image database.
Analyzing Clothing Layer Deformation Statistics of 3D Human Motions
Shows that this model not only reproduces previous retargeting works but also generalizes data generation to other semantic parameters such as clothing variation and size, or physical material parameters, with synthetically generated training sequences, paving the way for many kinds of capture-data-driven creation and augmentation applications.
Learning to Estimate 3D Human Pose and Shape from a Single Color Image
Addresses the problem of estimating full-body 3D human pose and shape from a single color image, proposing an efficient and effective direct prediction method based on ConvNets that incorporates a parametric statistical body shape model (SMPL) within an end-to-end framework.
TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style
Presents TailorNet, a neural model that predicts clothing deformation in 3D as a function of three factors: pose, shape, and style (garment geometry), while retaining wrinkle detail; the model is easy to use and fully differentiable.
SMPL: A Skinned Multi-Person Linear Model
The Skinned Multi-Person Linear model (SMPL) is a skinned, vertex-based model that accurately represents a wide variety of body shapes in natural human poses and is compatible with existing graphics pipelines and rendering engines.
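As a quick illustration of the SMPL idea cited above, a template mesh offset by a linear combination of shape blend shapes, here is a toy numpy sketch. The dimensions and values are made up; the real model uses 6890 vertices and learned bases, and additionally includes pose blend shapes and linear blend skinning, all omitted here:

```python
import numpy as np

# Toy illustration of SMPL-style additive shape blend shapes. This is NOT
# the real model: SMPL has 6890 vertices, learned blend-shape bases, pose
# blend shapes, and linear blend skinning.

V, S = 5, 3  # vertices and shape coefficients in this toy example
rng = np.random.default_rng(1)

template = rng.normal(size=(V, 3))       # mean template mesh (T-bar)
shape_dirs = rng.normal(size=(V, 3, S))  # shape blend-shape basis (B_S)
betas = np.array([0.5, -1.0, 0.2])       # per-subject shape coefficients

# Shaped mesh: template plus a linear combination of the shape basis.
shaped = template + shape_dirs @ betas   # (V, 3, S) @ (S,) -> (V, 3)
print(shaped.shape)  # (5, 3)
```

Because the offsets are linear in the coefficients, setting all betas to zero recovers the template exactly, which is what makes the model cheap to fit and fully differentiable.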
Estimating body shape of dressed humans
A method to estimate the detailed 3D body shape of a person even when heavy or loose clothing is worn, based on ICP (iterative closest point) registration and Laplacian mesh deformation, recovering occluded or missing body parts from 3D laser scans.
Estimation of Human Body Shape in Motion with Wide Clothing
Proposes the first automatic method to estimate 3D human body shape in motion from a sequence of unstructured oriented 3D point clouds; it works in the presence of loose clothing by leveraging a recent robust pose detection method.
BCNet: Learning Body and Cloth Shape from A Single Image
Proposes a layered garment representation on top of SMPL and novelly makes the skinning weight of the garment independent of the body mesh, which significantly improves the expressive ability of the garment model.