• Corpus ID: 219559021

Skinning a Parameterization of Three-Dimensional Space for Neural Network Cloth

Jane Wu, Zhenglin Geng, Hui Zhou, Ronald Fedkiw
We present a novel learning framework for cloth deformation by embedding virtual cloth into a tetrahedral mesh that parameterizes the volumetric region of air surrounding the underlying body. In order to maintain this volumetric parameterization during character animation, the tetrahedral mesh is constrained to follow the body surface as it deforms. We embed the cloth mesh vertices into this parameterization of three-dimensional space in order to automatically capture much of the nonlinear…
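The embedding described above can be sketched in a few lines: each cloth vertex receives barycentric weights with respect to its enclosing rest-state tetrahedron, and its deformed position is the same weighted combination of the deformed tetrahedron vertices. This is a minimal illustration of the general technique, not the authors' implementation:

```python
import numpy as np

def barycentric_weights(p, tet):
    """Barycentric coordinates of point p inside a tetrahedron (4x3 array)."""
    # Solve [v1-v0, v2-v0, v3-v0] @ [w1, w2, w3] = p - v0
    edges = (tet[1:] - tet[0]).T          # 3x3 edge matrix
    w123 = np.linalg.solve(edges, p - tet[0])
    return np.concatenate([[1.0 - w123.sum()], w123])

def embed_and_deform(p, tet_rest, tet_deformed):
    """Embed p in the rest tetrahedron, then follow the deformed vertices."""
    w = barycentric_weights(p, tet_rest)
    return w @ tet_deformed               # weighted sum of deformed tet vertices

# Example: a cloth vertex at the rest-tet centroid follows the deformed centroid.
rest = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
deformed = rest + np.array([1.0, 0.0, 0.0])   # rigid translation of the tet
p = rest.mean(axis=0)
print(embed_and_deform(p, rest, deformed))
```

Because the weights are computed once in the rest state, deformed positions are a cheap linear function of the tetrahedral mesh, which is what makes the parameterization attractive as a skinning target.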

Realistic Crying Simulation of 3D virtual human: A survey

The study succeeded in combining the FACS and SPH approaches into a system that can express and implement extreme facial animation expressions; areas still needing further work remain, such as modeling the muscle movement of certain expressions like crying.

Analytically Integratable Zero-restlength Springs for Capturing Dynamic Modes unrepresented by Quasistatic Neural Networks

This work demonstrates that the dynamic modes lost when using a QNN approximation can be captured with a quite simple (and decoupled) zero-restlength spring model, which can be integrated analytically (as opposed to numerically) and thus has no time-step stability restrictions.
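The "analytically integratable" claim above can be illustrated with a critically damped zero-restlength spring, whose closed-form solution x(t) = (x0 + (v0 + ωx0)t)e^{-ωt} can be evaluated at any time step without stability restrictions. This is a sketch of the general idea, not the paper's code:

```python
import math

def analytic_spring_step(x, v, omega, dt):
    """Advance a critically damped zero-restlength spring (rest position 0)
    by dt using the closed-form solution; stable for any dt."""
    b = v + omega * x
    decay = math.exp(-omega * dt)
    x_new = (x + b * dt) * decay
    v_new = (v - omega * b * dt) * decay
    return x_new, v_new

# Unconditionally stable: a time step far above the explicit-integration
# limit simply decays the state toward rest instead of blowing up.
x, v = 1.0, 0.0
for _ in range(5):
    x, v = analytic_spring_step(x, v, omega=50.0, dt=1.0)
print(x)  # decays toward 0
```

An explicit scheme with the same stiffness and time step would diverge; the analytic update stays bounded by construction, which is the property the abstract emphasizes.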

A Pixel‐Based Framework for Data‐Driven Clothing

We propose a novel approach to learning cloth deformation as a function of body pose, recasting the graph‐like triangle mesh data structure into image‐based data in order to leverage popular and…

A Layered Model of Human Body and Garment Deformation

The proposed deformation model provides intuitive control over the three parameters independently, while producing aesthetically pleasing deformations of both the garment and the human body.

Spherical blend skinning: a real-time deformation of articulated models

A new algorithm is presented which removes shortcomings of the most widely used skeletal animation algorithm while maintaining almost the same time and memory complexity as linear blend skinning, minimizing the cost of upgrading from linear to spherical blend skinning in many existing applications.
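For context, the linear blend skinning baseline that the work above improves on transforms each vertex by a convex combination of bone transforms, v' = Σᵢ wᵢ(Rᵢv + tᵢ). A minimal sketch (illustrative, not the paper's code) also shows the classic failure mode that motivates spherical blending:

```python
import numpy as np

def linear_blend_skinning(v_rest, bone_transforms, weights):
    """v' = sum_i w_i * (R_i v + t_i): blend per-bone rigid transforms.
    bone_transforms: list of (R, t) pairs; weights sum to 1 per vertex."""
    v = np.zeros(3)
    for (R, t), w in zip(bone_transforms, weights):
        v += w * (R @ v_rest + t)
    return v

def rot_z(angle):
    """Rotation matrix about the z axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

# Blending two opposite 90-degree rotations with equal weights collapses the
# vertex onto the axis -- the "candy-wrapper" artifact of linear blending.
v = linear_blend_skinning(np.array([1.0, 0.0, 0.0]),
                          [(rot_z(np.pi / 2), np.zeros(3)),
                           (rot_z(-np.pi / 2), np.zeros(3))],
                          [0.5, 0.5])
print(v)  # collapses to the origin
```

Spherical blend skinning avoids this collapse by interpolating the rotations themselves (about a common rotation center) rather than averaging the transformed positions.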

Analyzing Clothing Layer Deformation Statistics of 3D Human Motions

It is shown that this model not only reproduces previous retargeting work, but also generalizes data generation to other semantic parameters, such as clothing variation and size, or to physical material parameters with synthetically generated training sequences, paving the way for many kinds of capture-data-driven creation and augmentation applications.

SMPL: a skinned multi-person linear model

The Skinned Multi-Person Linear model (SMPL) is a skinned vertex-based model that accurately represents a wide variety of body shapes in natural human poses and is compatible with existing graphics pipelines and rendering engines.

SiCloPe: Silhouette-Based Clothed People

We introduce a new silhouette-based representation for modeling clothed human bodies using deep generative models. Our method can reconstruct a complete and textured 3D model of a person wearing…

GarNet: A Two-Stream Network for Fast and Accurate 3D Cloth Draping

This work builds upon recent progress in 3D point cloud processing with deep networks to extract garment features at varying levels of detail, including point-wise, patch-wise, and global features, and fuses them with features extracted in parallel from the 3D body in order to model cloth-body interactions.
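One simple way to realize the fusion described above is to concatenate each garment point's local feature with a pooled global garment descriptor and a global body descriptor, so per-point predictions can see the body. This is a hedged sketch of the general pattern (PointNet-style max pooling), with illustrative names and dimensions, not GarNet's actual architecture:

```python
import numpy as np

def fuse_garment_body_features(garment_point_feats, body_global_feat):
    """Concatenate per-point garment features with a pooled garment descriptor
    and a broadcast global body descriptor."""
    n = garment_point_feats.shape[0]
    garment_global = garment_point_feats.max(axis=0)       # PointNet-style max pool
    garment_tiled = np.tile(garment_global, (n, 1))
    body_tiled = np.tile(body_global_feat, (n, 1))
    return np.concatenate([garment_point_feats, garment_tiled, body_tiled], axis=1)

g = np.random.rand(100, 64)   # 100 garment points, 64-dim point-wise features
b = np.random.rand(128)       # 128-dim global body descriptor
print(fuse_garment_body_features(g, b).shape)  # (100, 256)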

SimulCap: Single-View Human Performance Capture With Cloth Simulation

  • Tao Yu, Zerong Zheng, Yebin Liu
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
By incorporating cloth simulation into the performance capture pipeline, this system can simulate plausible cloth dynamics and cloth-body interactions even in the occluded regions, which was not possible in previous capture methods.

PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization

The proposed Pixel-aligned Implicit Function (PIFu), an implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object, achieves state-of-the-art performance on a public benchmark and outperforms the prior work for clothed human digitization from a single image.
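The "pixel-aligned" part of the implicit function above amounts to bilinearly sampling an image feature map at the continuous 2D projection of each 3D query point, then feeding that feature (plus depth) to an MLP. A minimal sketch of just the sampling step, under assumed shapes (not PIFu's implementation):

```python
import numpy as np

def pixel_aligned_feature(feature_map, x, y):
    """Bilinearly sample a CxHxW feature map at continuous pixel (x, y)."""
    C, H, W = feature_map.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * feature_map[:, y0, x0] + fx * feature_map[:, y0, x1]
    bot = (1 - fx) * feature_map[:, y1, x0] + fx * feature_map[:, y1, x1]
    return (1 - fy) * top + fy * bot

# A 2-channel 4x4 feature map; sampling between pixels interpolates features.
feat = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
print(pixel_aligned_feature(feat, 1.5, 2.5))  # [11.5, 27.5]
```

The key property is that the sampled feature varies continuously with the 3D point's projection, so the downstream implicit function can resolve surface detail at sub-pixel precision.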

Stable spaces for real-time clothing

This surprisingly simple conditional model learns and preserves the key dynamic properties of cloth motion along with folding details, and it is shown that, within this class of methods, no simpler model covers the full range of cloth dynamics captured by the method.