ClothCap: seamless 4D clothing capture and retargeting

@article{PonsMoll2017ClothCapS4,
  title={ClothCap: seamless 4D clothing capture and retargeting},
  author={Gerard Pons-Moll and Sergi Pujades and Sonny Hu and Michael J. Black},
  journal={ACM Trans. Graph.},
  year={2017},
  volume={36},
  pages={73:1-73:15}
}
Designing and simulating realistic clothing is challenging. Previous methods addressing the capture of clothing from 3D scans have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be…
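ClothCap separates each garment from the body and represents it as its own mesh layer over a minimally clothed SMPL body, so retargeting amounts to carrying that layer over to a new body shape. A minimal sketch of that layered idea, assuming an SMPL-topology template and a hypothetical garment vertex set; this is an illustration, not the paper's multi-part registration pipeline:

# Illustrative sketch only: a garment captured as a vertex-offset layer over a
# body template (SMPL-like) and re-applied to a new body shape. The vertex
# counts and the garment vertex set are assumptions, not ClothCap's data.
import numpy as np

def extract_garment_layer(body_verts, clothed_verts, garment_vertex_ids):
    """Per-vertex offsets of the garment relative to the unclothed body."""
    return clothed_verts[garment_vertex_ids] - body_verts[garment_vertex_ids]

def retarget_garment(new_body_verts, offsets, garment_vertex_ids):
    """Dress a new body shape by adding the captured offsets back on."""
    dressed = new_body_verts.copy()
    dressed[garment_vertex_ids] = new_body_verts[garment_vertex_ids] + offsets
    return dressed

# Toy usage with random geometry standing in for registered scans.
rng = np.random.default_rng(0)
body = rng.normal(size=(6890, 3))            # SMPL meshes have 6890 vertices
clothed = body + 0.01 * rng.normal(size=body.shape)
shirt_ids = np.arange(2000)                  # hypothetical garment vertex set
offsets = extract_garment_layer(body, clothed, shirt_ids)
new_body = rng.normal(size=(6890, 3))        # a different body shape
dressed = retarget_garment(new_body, offsets, shirt_ids)

In the paper the garments live on per-garment templates obtained by segmenting and registering the scans; the point of the sketch is only the body-plus-offset-layer decomposition that makes retargeting possible.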
Citations

Explicit Clothing Modeling for an Animatable Full-Body Avatar
Modeling Clothing as a Separate Layer for an Animatable Human Avatar
CLOTH3D: Clothed 3D Humans
TLDR: A Conditional Variational Auto-Encoder based on graph convolutions (GCVAE) learns garment latent spaces, allowing realistic generation of 3D garments on top of the SMPL model for any pose and shape.
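A rough sketch of what a conditional graph-convolutional VAE over garment vertices could look like; the layer widths, mean pooling, and the SMPL pose-plus-shape conditioning vector are illustrative assumptions, not the CLOTH3D authors' architecture:

# Sketch of a conditional VAE with simple graph-convolution layers over garment
# vertices. All sizes are assumptions for illustration.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One A_hat @ X @ W layer using a row-normalized adjacency with self-loops."""
    def __init__(self, in_dim, out_dim, adjacency):
        super().__init__()
        a_hat = adjacency + torch.eye(adjacency.shape[0])
        self.register_buffer("a_hat", a_hat / a_hat.sum(dim=1, keepdim=True))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                                   # x: (B, V, in_dim)
        return self.lin(torch.einsum("vw,bwc->bvc", self.a_hat, x))

class ConditionalGCVAE(nn.Module):
    def __init__(self, adjacency, cond_dim=82, latent_dim=32):
        super().__init__()
        num_verts = adjacency.shape[0]
        self.enc1 = GraphConv(3, 32, adjacency)
        self.enc2 = GraphConv(32, 32, adjacency)
        self.to_mu = nn.Linear(32, latent_dim)
        self.to_logvar = nn.Linear(32, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 512), nn.ReLU(),
            nn.Linear(512, num_verts * 3),
        )
        self.num_verts = num_verts

    def forward(self, verts, cond):                         # verts: (B, V, 3)
        h = torch.relu(self.enc1(verts))
        h = torch.relu(self.enc2(h)).mean(dim=1)            # pool over vertices
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        out = self.dec(torch.cat([z, cond], dim=-1))
        return out.view(-1, self.num_verts, 3), mu, logvar

# Toy usage: 500 garment vertices, SMPL pose (72) + shape (10) as the condition.
adj = (torch.rand(500, 500) < 0.01).float()
model = ConditionalGCVAE(adj)
recon, mu, logvar = model(torch.zeros(1, 500, 3), torch.zeros(1, 82))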
HUMBI: A Large Multiview Dataset of Human Body Expressions
  • Zhixuan Yu, J. S. Yoon, +4 authors H. Park
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR: HUMBI is demonstrated to be highly effective in learning and reconstructing a complete human model, and is complementary to existing datasets of human body expressions with limited views and subjects, such as MPII-Gaze, Multi-PIE, Human3.6M, and Panoptic Studio.
TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style
TLDR: Presents TailorNet, a neural model that predicts clothing deformation in 3D as a function of three factors (pose, shape, and garment style) while retaining wrinkle detail; the model is easy to use and fully differentiable.
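Read abstractly, such a model is a function from (pose, shape, garment style) to per-vertex displacements added onto a garment template. A minimal sketch under that reading; the dimensions, garment vertex count, and single-MLP design are assumptions (the actual TailorNet combines low- and high-frequency predictors), not the released code:

# Minimal sketch: an MLP mapping pose, shape and a garment-style code to
# per-vertex displacements on a garment template. Sizes are illustrative.
import torch
import torch.nn as nn

class GarmentDeformationMLP(nn.Module):
    def __init__(self, pose_dim=72, shape_dim=10, style_dim=4, num_verts=7702):
        super().__init__()
        self.num_verts = num_verts
        self.net = nn.Sequential(
            nn.Linear(pose_dim + shape_dim + style_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, num_verts * 3),
        )

    def forward(self, pose, shape, style):
        x = torch.cat([pose, shape, style], dim=-1)
        return self.net(x).view(-1, self.num_verts, 3)      # per-vertex offsets

model = GarmentDeformationMLP()
displacements = model(torch.zeros(1, 72), torch.zeros(1, 10), torch.zeros(1, 4))
# Add the displacements to the unposed garment template, then pose the result
# with the body's skinning weights; the whole pipeline stays differentiable.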
The Virtual Tailor: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style
Multi-Garment Net: Learning to Dress 3D People From Images
TLDR: Presents Multi-Garment Network, a method to predict body shape and clothing, layered on top of the SMPL model, from a few frames of a video; it predicts garment geometry, relates it to the body shape, and transfers it to new body shapes and poses.
Analyzing Clothing Layer Deformation Statistics of 3D Human Motions
TLDR: Shows that the model not only reproduces previous retargeting work but also generalizes data generation to further semantic parameters such as clothing variation and size, and to physical material parameters with synthetically generated training sequences, paving the way for many kinds of capture-data-driven creation and augmentation applications.
DeepWrinkles: Accurate and Realistic Clothing Modeling
TLDR: Claims an entirely data-driven approach to realistic cloth wrinkle generation, leading to high-quality rendering of clothing deformation sequences in which fine wrinkles from (real) high-resolution observations can be recovered.
Recovery of the 3D Virtual Human: Monocular Estimation of 3D Shape and Pose with Data Driven Priors
TLDR: This thesis investigates reconstructing the 3D virtual human from monocular imagery, mainly from an RGB sensor, and shows how to train and refine models unsupervised on unlabeled real data by integrating lightweight differentiable renderers into CNNs.

References

Showing 1-10 of 65 references
Metric Regression Forests for Correspondence Estimation
TLDR: A new method for inferring dense data-to-model correspondences, focused on human pose estimation from depth images, which yields correspondences considerably more accurate than the state of the art while using far fewer training images.
Civilian American and European Surface Anthropometry Resource (CAESAR), Final Report. Volume 1. Summary
Abstract: The Civilian American and European Surface Anthropometry Resource (CAESAR) project was a survey of the civilian populations of three countries representing the North Atlantic Treaty…
Detailed, Accurate, Human Shape Estimation from Clothed 3D Scan Sequences
TLDR: Contributes a new approach that recovers a personalized body shape by estimating shape under clothing from a sequence of 3D scans, outperforming the state of the art in both pose and shape estimation, qualitatively and quantitatively.
SMPL: a skinned multi-person linear model
TLDR: The Skinned Multi-Person Linear model (SMPL) is a skinned, vertex-based model that accurately represents a wide variety of body shapes in natural human poses and is compatible with existing graphics pipelines and rendering engines.
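SMPL poses a template corrected by shape and pose blendshapes with standard linear blend skinning. A minimal sketch of just the skinning step, assuming the blendshape offsets have already been added to the rest-pose vertices; this is the textbook LBS formula, not the released SMPL code:

# Linear blend skinning: each vertex is moved by a weighted blend of the
# per-joint rigid transforms. Shapes follow SMPL's 6890 vertices and 24 joints.
import numpy as np

def linear_blend_skinning(verts_rest, skin_weights, joint_transforms):
    """verts_rest: (V, 3), skin_weights: (V, J), joint_transforms: (J, 4, 4)."""
    num_verts = verts_rest.shape[0]
    homo = np.concatenate([verts_rest, np.ones((num_verts, 1))], axis=1)  # (V, 4)
    blended = np.einsum("vj,jab->vab", skin_weights, joint_transforms)    # (V, 4, 4)
    posed = np.einsum("vab,vb->va", blended, homo)                        # (V, 4)
    return posed[:, :3]

# Sanity check: identity transforms leave the rest pose unchanged.
rest = np.random.default_rng(0).normal(size=(6890, 3))
weights = np.full((6890, 24), 1.0 / 24)
transforms = np.tile(np.eye(4), (24, 1, 1))
assert np.allclose(linear_blend_skinning(rest, weights, transforms), rest)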
Dyna: a model of dynamic human shape in motion
TLDR: The Dyna model realistically represents the dynamics of soft tissue for previously unseen subjects and motions, and provides tools for animators to modify the deformations and apply them to new stylized characters.
Data-driven physics for human soft tissue animation
TLDR: The learned two-layer model is a realistic full-body avatar that generalizes to novel motions and external forces, and supports retargeting of physical properties from one avatar to another when they share the same topology.
DeepGarment: 3D Garment Shape Estimation from a Single Image
TLDR: Illustrates that the technique can recover the global shape of dynamic 3D garments from a single image at interactive rates, under varying factors such as challenging human poses, self-occlusions, various camera poses, and lighting conditions.
Dynamic FAUST: Registering Human Bodies in Motion
TLDR: This work proposes a new mesh registration method that uses both 3D geometry and texture information to register all scans in a sequence to a common reference topology, and shows how using geometry alone results in significant errors in alignment when the motions are fast and non-rigid.
Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs
TLDR: This work addresses the problem of making human motion capture in the wild more practical by making use of a realistic statistical body model that includes anthropometric constraints and using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames.
Unite the People: Closing the Loop Between 3D and 2D Human Representations
TLDR: Proposes a hybrid approach to 3D body model fits for multiple human pose datasets with an extended version of the recently introduced SMPLify method, and shows that UP-3D can be enhanced with these improved fits to grow in quantity and quality, making the system deployable at large scale.