DRaCoN - Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars
@article{Raj2022DRaCoND,
  title   = {DRaCoN - Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars},
  author  = {Amit Raj and Umar Iqbal and Koki Nagano and S. Khamis and Pavlo Molchanov and James Hays and Jan Kautz},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2203.15798}
}
Acquisition and creation of digital human avatars is an important problem with applications to virtual telepresence, gaming, and human modeling. Most contemporary approaches for avatar generation can be viewed either as 3D-based methods, which use multi-view data to learn a 3D representation with appearance (such as a mesh, implicit surface, or volume), or 2D-based methods which learn photo-realistic renderings of avatars but lack accurate 3D representations. In this work, we present…
One Citation
Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos
- Computer Science
- 2022
A pose-driven deformation based on the linear blend skinning algorithm combines blend weights with the 3D human skeleton to produce observation-to-canonical correspondences, and the resulting approach outperforms recent human modeling methods.
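For context, the linear blend skinning (LBS) step referenced above deforms each canonical vertex by a weighted combination of per-joint rigid transforms; the cited work blends the corresponding inverse transforms to obtain observation-to-canonical correspondences. Below is a minimal sketch of forward LBS, assuming precomputed per-joint transforms and per-vertex blend weights; function and variable names are illustrative, not the paper's code.

```python
# Minimal forward linear blend skinning (LBS) sketch.
# Assumptions (not from the paper): vertices, blend weights, and per-joint
# rigid transforms are already available as NumPy arrays.
import numpy as np

def linear_blend_skinning(vertices, blend_weights, bone_transforms):
    """Deform canonical vertices by a weighted sum of per-joint transforms.

    vertices:        (V, 3) canonical-space vertex positions
    blend_weights:   (V, J) skinning weights, each row sums to 1
    bone_transforms: (J, 4, 4) rigid transforms (canonical -> posed) per joint
    returns:         (V, 3) posed vertex positions
    """
    num_verts = vertices.shape[0]
    # Homogeneous coordinates: (V, 4)
    verts_h = np.concatenate([vertices, np.ones((num_verts, 1))], axis=1)
    # Blend the 4x4 transforms per vertex: (V, 4, 4)
    blended = np.einsum('vj,jab->vab', blend_weights, bone_transforms)
    # Apply each blended transform to its own vertex: (V, 4)
    posed_h = np.einsum('vab,vb->va', blended, verts_h)
    return posed_h[:, :3]
```

Roughly, inverting this mapping (observation to canonical) amounts to blending the inverse joint transforms with weights defined at observation-space points, which is what the learned blend weights in the cited work provide.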
References
Showing 1-10 of 52 references
ANR: Articulated Neural Rendering for Virtual Avatars
- Computer Science
- 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
This work presents Articulated Neural Rendering (ANR), a novel framework based on DNR that explicitly addresses its limitations for virtual human avatars, and shows the superiority of ANR not only with respect to DNR but also to methods specialized for avatar creation and animation.
SMPLpix: Neural Avatars from 3D Human Models
- Computer Science
- 2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
- 2021
This work trains a network that directly converts a sparse set of 3D mesh vertices into photorealistic images, alleviating the need for a traditional rasterization mechanism, and shows an advantage over conventional differentiable renderers in both photorealism and rendering efficiency.
Textured Neural Avatars
- Computer Science
- 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
A system for learning full-body neural avatars, i.e. deep networks that produce full-body renderings of a person for varying body and camera poses, is presented; it learns to generate realistic renderings while being trained on videos annotated with 3D poses and foreground masks.
paGAN: real-time avatars using dynamic textures
- Computer Science
- ACM Trans. Graph.
- 2018
This work produces state-of-the-art image and video synthesis and is, to the authors' knowledge, the first able to generate a dynamically textured avatar with a mouth interior, all from a single image.
ARCH: Animatable Reconstruction of Clothed Humans
- Computer Science
- 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This paper proposes ARCH (Animatable Reconstruction of Clothed Humans), a novel end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image, and shows numerous qualitative examples of animated, high-quality reconstructed avatars unseen in the literature so far.
Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies
- Computer Science
- 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2021
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video by introducing neural blend weight fields to produce the deformation fields and shows that this approach significantly outperforms recent human synthesis methods.
SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks
- Computer Science
- 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
SCANimate is presented, an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar that is driven by pose parameters and has realistic clothing that moves and deforms naturally.
PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization
- Computer Science
- 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
The proposed Pixel-aligned Implicit Function (PIFu), an implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object, achieves state-of-the-art performance on a public benchmark and outperforms the prior work for clothed human digitization from a single image.
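To make the pixel-alignment idea concrete: for a 3D query point, the implicit function is evaluated on the image feature sampled at the point's 2D projection together with the point's depth, and it outputs an inside/outside (occupancy) estimate. A minimal sketch of that query, assuming a precomputed feature map, a projection callback, and a small decoder MLP; all names are illustrative and not PIFu's actual interface.

```python
# Hedged sketch of a pixel-aligned implicit-function query.
# Assumptions (illustrative, not PIFu's code):
#   feature_map: (C, H, W) image features from some 2D encoder
#   project:     callable mapping (N, 3) points -> ((N, 2) pixel coords, (N,) depths)
#   mlp:         callable mapping (N, C + 1) inputs -> (N,) occupancy in [0, 1]
import numpy as np

def query_occupancy(points, feature_map, project, mlp):
    uv, depth = project(points)                     # project 3D points into the image
    channels, height, width = feature_map.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, width - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, height - 1)
    # Nearest-pixel lookup here for brevity; PIFu samples features bilinearly.
    pixel_feats = feature_map[:, v, u].T            # (N, C) pixel-aligned features
    inputs = np.concatenate([pixel_feats, depth[:, None]], axis=1)
    return mlp(inputs)                              # per-point occupancy estimates
```

The pixel-aligned feature preserves local image detail, while the depth input disambiguates points that project to the same pixel.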
NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
- Computer Science
- NeurIPS
- 2021
Experiments show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.
S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling
- Computer Science
- 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
This work represents the pedestrian's shape, pose, and skinning weights as neural implicit functions that are directly learned from data, allowing it to handle a wide variety of pedestrian shapes and poses without explicitly fitting a human parametric body model.