Corpus ID: 245123601

HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture

@article{Wang2021HVHLA,
  title={HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture},
  author={Ziyan Wang and Giljoo Nam and Tuur Stuyck and Stephen Lombardi and Michael Zollhoefer and Jessica Hodgins and Christoph Lassner},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.06904}
}
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, its complex physical interactions, and its non-trivial visual appearance. Yet, hair is a critical component for believable avatars. In this paper, we address the aforementioned problems: 1) we use a novel, volumetric hair representation that is composed of thousands of primitives. Each primitive can be rendered efficiently, yet realistically, by building on the latest advances in neural…
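The abstract describes a representation built from thousands of small volumetric primitives that are composited by volume rendering. As a rough illustration of that idea (not the paper's actual method), the sketch below raymarches a mixture of primitives, here simplified to Gaussian density blobs with a constant color each, and accumulates radiance front-to-back with early ray termination; all names and parameters are hypothetical.

```python
import numpy as np

def render_ray(origin, direction, primitives, t_near=0.0, t_far=4.0, n_steps=128):
    """Front-to-back compositing of a mixture of volumetric primitives.

    Each primitive is a dict with 'center' (3,), 'radius', 'color' (3,),
    and 'density' -- a Gaussian-blob stand-in for the learned voxel
    payloads used in primitive-based neural volumetric rendering.
    Returns (radiance, accumulated opacity) for one camera ray.
    """
    ts = np.linspace(t_near, t_far, n_steps)
    dt = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        p = origin + t * direction
        # Sum density and density-weighted color over all primitives.
        sigma = 0.0
        rgb = np.zeros(3)
        for prim in primitives:
            d2 = np.sum((p - prim['center']) ** 2)
            w = prim['density'] * np.exp(-d2 / (2.0 * prim['radius'] ** 2))
            sigma += w
            rgb += w * prim['color']
        if sigma > 1e-8:
            rgb /= sigma
        # Standard volume-rendering quadrature for one step.
        alpha = 1.0 - np.exp(-sigma * dt)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # early ray termination
            break
    return color, 1.0 - transmittance
```

In the real system each primitive carries a neural payload (a small voxel grid of color and opacity) and the loop over primitives is restricted to those a ray actually intersects, which is what makes thousands of primitives efficient to render.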

References

SHOWING 1-10 OF 78 REFERENCES
Mixture of volumetric primitives for efficient neural rendering
TLDR
This work presents Mixture of Volumetric Primitives (MVP), a representation for rendering dynamic 3D content that combines the completeness of volumetric representations with the efficiency of primitive-based rendering.
Learning Compositional Radiance Fields of Dynamic Human Heads
TLDR
This work proposes a novel compositional 3D representation that combines the best of previous methods to produce both higher-resolution and faster results and shows that the learned dynamic radiance field can be used to synthesize novel unseen expressions based on a global animation code.
Human Hair Inverse Rendering using Multi-View Photometric data
We introduce a hair inverse rendering framework to reconstruct high-fidelity 3D geometry of human hair, as well as its reflectance, which can be readily used for photorealistic rendering of hair.
Deep appearance models for face rendering
TLDR
A data-driven rendering pipeline that learns a joint representation of facial geometry and appearance from a multiview capture setup and a novel unsupervised technique for mapping images to facial states results in a system that is naturally suited to real-time interactive settings such as Virtual Reality (VR).
Multi-view hair capture using orientation fields
TLDR
A multi-view hair reconstruction algorithm based on orientation fields with structure-aware aggregation that faithfully reconstructs detailed hair structures and is suitable for capturing hair in motion.
Hair photobooth: geometric and photometric acquisition of real hairstyles
TLDR
A new reflectance interpolation technique is introduced that leverages an analytical reflectance model to alleviate cross-fading artifacts caused by linear methods; the reconstructed results closely match the real hairstyles and can be used for animation.
Capture of hair geometry from multiple images
TLDR
An image-based approach captures the geometry of hair by drawing information from the scattering properties of hair that are normally considered a hindrance, and paves the way for a new approach to digital hair generation.
Volumetric Methods for Simulation and Rendering of Hair
TLDR
This paper builds on the existing approaches to illumination and simulation by introducing a volumetric representation of hair which allows them to efficiently model collective properties of hair.
Strand-Accurate Multi-View Hair Capture
TLDR
This paper presents the first method to capture high-fidelity hair geometry with strand-level accuracy and evaluates the method on both synthetic data and real captured data, showing that it can reconstruct hair strands with sub-millimeter accuracy.
PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization
TLDR
The proposed Pixel-aligned Implicit Function (PIFu), an implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object, achieves state-of-the-art performance on a public benchmark and outperforms the prior work for clothed human digitization from a single image.
...