Corpus ID: 211678344

CALVIS: chest, waist and pelvis circumference from 3D human body meshes as ground truth for deep learning

@article{GonzalezTejeda2020CALVISCW,
  title={CALVIS: chest, waist and pelvis circumference from 3D human body meshes as ground truth for deep learning},
  author={Yansel Gonzalez Tejeda and Helmut Mayer},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.00834}
}
In this paper we present CALVIS, a method to calculate Chest, wAist and peLVIS circumference from 3D human body meshes. Our motivation is to use these data as ground truth for training convolutional neural networks (CNN). Previous work has either used the large-scale CAESAR dataset or determined these anthropometric measurements manually from a person or from human 3D body meshes. Unfortunately, acquiring such data is a costly and time-consuming endeavor. In…
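To make the general idea concrete, the sketch below shows one simplified way a body circumference could be approximated from a 3D mesh: slice the mesh with a horizontal plane at the measurement height and take the perimeter of the convex hull of the cross-section. This is only an illustration under stated assumptions, not the CALVIS algorithm described in the paper; the function name, the slab thickness, and the choice of y as the up-axis are all hypothetical.

# Minimal sketch: approximate a body circumference by slicing the mesh
# at a given height and measuring the convex-hull perimeter of the slice.
# Illustrative only; NOT the CALVIS algorithm from the paper.
import numpy as np
from scipy.spatial import ConvexHull

def approximate_circumference(vertices, height, axis=1, thickness=0.01):
    """vertices: (N, 3) array of mesh vertices (e.g. a SMPL-like mesh).
    height: position of the slicing plane along `axis` (here y is 'up').
    thickness: half-width of the slab used to collect vertices near the plane.
    Returns an approximate circumference in the mesh's units."""
    # Collect vertices lying in a thin slab around the slicing plane.
    mask = np.abs(vertices[:, axis] - height) < thickness
    slab = vertices[mask]
    if slab.shape[0] < 3:
        raise ValueError("Too few vertices near the slicing plane.")
    # Project the slab onto the slicing plane by dropping the slicing axis.
    planar = np.delete(slab, axis, axis=1)
    # For a 2D point set, ConvexHull.area is the hull's perimeter,
    # which approximates a (convex) circumference at this height.
    return ConvexHull(planar).area

# Hypothetical usage with mesh vertices loaded elsewhere:
# chest = approximate_circumference(body_vertices, height=1.25)

A real pipeline would also have to isolate the torso cross-section (e.g. exclude the arms at chest height) and handle non-convex slices; the paper itself describes how CALVIS obtains the actual measurements.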
Citations

A Neural Anthropometer Learning from Body Dimensions Computed on Human 3D Meshes
TLDR: A method to calculate right and left arm length, shoulder width, and inseam (crotch height) from 3D meshes is presented, with a focus on potential medical, virtual try-on and distance-tailoring applications, providing the community with a valuable method.
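As a further illustration (not the method of the cited paper), a linear body dimension such as shoulder width can be approximated as the Euclidean distance between two landmark vertices on the mesh; the landmark indices in the usage note are placeholders, not actual model-specific indices.

# Minimal sketch: a linear body dimension as the distance between two
# landmark vertices of a 3D mesh. Illustrative only; landmark indices
# are placeholders, not real SMPL landmarks.
import numpy as np

def landmark_distance(vertices, idx_a, idx_b):
    # Euclidean distance between two mesh vertices, e.g. the two shoulder landmarks.
    return float(np.linalg.norm(vertices[idx_a] - vertices[idx_b]))

# Hypothetical usage:
# shoulder_width = landmark_distance(body_vertices, LEFT_SHOULDER_IDX, RIGHT_SHOULDER_IDX)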

References

Showing 1-10 of 30 references
HS-Nets: Estimating Human Body Shape from Silhouettes with Convolutional Neural Networks
TLDR: This work trains CNNs to learn a global mapping from the input to shape parameters used to reconstruct the shapes of people, in neutral poses, with the application of garment fitting in mind, resulting in an accurate, robust and automatic system.
End-to-End Recovery of Human Shape and Pose
TLDR: This work introduces an adversary trained to tell whether human body shape and pose parameters are real or not using a large database of 3D human meshes, and produces a richer and more useful mesh representation that is parameterized by shape and 3D joint angles.
BodyNet: Volumetric Inference of 3D Human Body Shapes
TLDR: BodyNet is an end-to-end trainable network that benefits from a volumetric 3D loss, a multi-view re-projection loss, and intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose, and achieves state-of-the-art performance.
Keep It SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image
TLDR: The first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image is described, showing superior pose accuracy with respect to the state of the art.
Unite the People: Closing the Loop Between 3D and 2D Human Representations
TLDR: This work proposes a hybrid approach to 3D body model fits for multiple human pose datasets with an extended version of the recently introduced SMPLify method, and shows that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable at large scale.
Learning from Synthetic Humans
TLDR: This work presents SURREAL (Synthetic hUmans foR REAL tasks): a new large-scale dataset with synthetically generated but realistic images of people rendered from 3D sequences of human motion capture data, and shows that CNNs trained on this synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images.
Three-dimensional human shape inference from silhouettes: reconstruction and validation
TLDR: A method that integrates both geometric and statistical priors to reconstruct the shape of a subject, assuming a standardized posture, from a frontal and a lateral silhouette, and shows a mean absolute 3D error of 8 mm with ideal silhouette extraction.
Shape from Selfies: Human Body Shape Estimation Using CCA Regression Forests
TLDR: This work describes a novel approach to automatically estimate shape parameters from a single input silhouette using semi-supervised learning, and shows how regression forests can be used to compute an accurate mapping from the silhouette to the shape parameter space.
Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies
TLDR: A unified deformation model is presented for the markerless capture of human movement at multiple scales, including facial expressions, body motion, and hand gestures, which enables the full expression of part movements by a single seamless model.
Estimating 3D human shapes from measurements
TLDR: This paper introduces a technique that extrapolates the statistically inferred shape to fit the measurement data using non-linear optimization and ensures that the generated shape is both human-like and satisfies the measurement conditions.