DRaCoN - Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars

@article{Raj2022DRaCoND,
  title={DRaCoN - Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars},
  author={Amit Raj and Umar Iqbal and Koki Nagano and Sameh Khamis and Pavlo Molchanov and James Hays and Jan Kautz},
  journal={ArXiv},
  year={2022},
  volume={abs/2203.15798}
}
  • Amit Raj, Umar Iqbal, Koki Nagano, Sameh Khamis, Pavlo Molchanov, James Hays, Jan Kautz
  • Published 29 March 2022
  • Computer Science
  • ArXiv
Acquisition and creation of digital human avatars is an important problem with applications to virtual telepresence, gaming, and human modeling. Most contemporary approaches for avatar generation can be viewed either as 3D-based methods, which use multi-view data to learn a 3D representation with appearance (such as a mesh, implicit surface, or volume), or 2D-based methods, which learn photo-realistic renderings of avatars but lack accurate 3D representations. In this work, we present…

Citations

Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos

This work introduces a pose-driven deformation based on the linear blend skinning algorithm, which combines the blend weights with the 3D human skeleton to produce observation-to-canonical correspondences, and shows that it outperforms recent human modeling methods.
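
A minimal sketch of this observation-to-canonical mapping via linear blend skinning (LBS), assuming rigid per-bone transforms and given blend weights; the paper learns the weights with a network, and every name and shape below is illustrative:

```python
# Observation-to-canonical correspondence via inverse linear blend
# skinning. Illustrative sketch only; not the paper's implementation.
import numpy as np

def lbs_transform(weights, bone_transforms):
    """Blend per-bone rigid transforms with skinning weights.

    weights:         (K,) blend weights for one 3D point, summing to 1
    bone_transforms: (K, 4, 4) canonical-to-observation bone transforms
    returns:         (4, 4) blended transform for that point
    """
    return np.einsum("k,kij->ij", weights, bone_transforms)

def observation_to_canonical(x_obs, weights, bone_transforms):
    """Map an observation-space point to the canonical pose by
    inverting the blended LBS transform."""
    T = lbs_transform(weights, bone_transforms)
    x_h = np.append(x_obs, 1.0)          # homogeneous coordinates
    return (np.linalg.inv(T) @ x_h)[:3]

# Toy example: two bones, the second rotated 90 degrees about z.
rot = np.eye(4)
rot[:3, :3] = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
bones = np.stack([np.eye(4), rot])
x_c = observation_to_canonical(
    x_obs=np.array([0.0, 1.0, 0.0]),
    weights=np.array([0.0, 1.0]),        # fully attached to the second bone
    bone_transforms=bones,
)
print(x_c)  # ~[1, 0, 0]: the bone rotation is undone
```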

References

ANR: Articulated Neural Rendering for Virtual Avatars

This work presents Articulated Neural Rendering (ANR), a novel framework based on Deferred Neural Rendering (DNR) that explicitly addresses its limitations for virtual human avatars, and shows the superiority of ANR not only over DNR but also over methods specialized for avatar creation and animation.

SMPLpix: Neural Avatars from 3D Human Models

This work trains a network that directly converts a sparse set of 3D mesh vertices into photorealistic images, alleviating the need for a traditional rasterization mechanism, and shows an advantage over conventional differentiable renderers in both photorealism and rendering efficiency.
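
A hedged sketch of the vertex-to-image idea: project camera-space mesh vertices with a pinhole model and splat them into a sparse RGB-d canvas, which an image-to-image network (omitted here) would then translate into a photorealistic rendering. All shapes and names are assumptions for illustration:

```python
# Splat mesh vertices into a sparse RGB-d image, replacing classical
# rasterization. Illustrative sketch; the refinement network is omitted.
import numpy as np

def splat_vertices(verts, colors, K, H, W):
    """verts:  (N, 3) camera-space vertex positions (z > 0)
       colors: (N, 3) per-vertex RGB
       K:      (3, 3) camera intrinsics
       returns (H, W, 4) sparse RGB-d canvas (zeros where nothing lands)"""
    canvas = np.zeros((H, W, 4))
    depth = np.full((H, W), np.inf)
    uvw = (K @ verts.T).T                        # perspective projection
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    for (u, v), c, z in zip(uv, colors, verts[:, 2]):
        if 0 <= v < H and 0 <= u < W and z < depth[v, u]:
            depth[v, u] = z                      # keep the closest vertex
            canvas[v, u] = (*c, z)
    return canvas

verts = np.array([[0.0, 0.0, 2.0]])
colors = np.array([[1.0, 0.0, 0.0]])
K = np.array([[50.0, 0, 32], [0, 50.0, 32], [0, 0, 1]])
img = splat_vertices(verts, colors, K, H=64, W=64)
print(img[32, 32])  # [1. 0. 0. 2.]: red vertex at image center, depth 2
```

A UNet-style translation network would then inpaint and refine this sparse canvas into the final avatar image.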

Textured Neural Avatars

A system for learning full-body neural avatars is presented: deep networks that produce full-body renderings of a person for varying body and camera poses, and that learn to generate realistic renderings while being trained on videos annotated with 3D poses and foreground masks.

paGAN: real-time avatars using dynamic textures

This work produces state-of-the-art quality image and video synthesis and is, to the authors' knowledge, the first able to generate a dynamically textured avatar with a mouth interior, all from a single image.

ARCH: Animatable Reconstruction of Clothed Humans

This paper proposes ARCH (Animatable Reconstruction of Clothed Humans), a novel end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image and shows numerous qualitative examples of animated, high-quality reconstructed avatars unseen in the literature so far.

Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies

This paper addresses the challenge of reconstructing an animatable human model from a multi-view video by introducing neural blend weight fields to produce the deformation fields and shows that this approach significantly outperforms recent human synthesis methods.
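
As a minimal, hedged variant of the LBS sketch above, the fixed skinning weights can be replaced by a learned blend weight field, i.e. a small MLP queried at each 3D point (the paper's actual network and canonical-space formulation are more involved; all names below are illustrative):

```python
# A learned blend weight field: an MLP mapping a 3D point to K bone
# weights. Stand-in architecture for illustration only.
import torch

K_BONES = 4
weight_field = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, K_BONES), torch.nn.Softmax(dim=-1),
)
x = torch.rand(5, 3)     # query points in observation space
w = weight_field(x)      # (5, 4) weights, each row sums to 1
print(w.sum(dim=-1))     # ~[1., 1., 1., 1., 1.]
# These weights would blend per-bone transforms exactly as in standard
# LBS, yielding the observation-to-canonical deformation field.
```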

SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks

SCANimate is presented, an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar that is driven by pose parameters and has realistic clothing that moves and deforms naturally.

PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization

The proposed Pixel-aligned Implicit Function (PIFu), an implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object, achieves state-of-the-art performance on a public benchmark and outperforms the prior work for clothed human digitization from a single image.
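
A minimal sketch of a pixel-aligned implicit query in this spirit: project a 3D point into the image, bilinearly sample a feature map at that pixel, concatenate a depth cue, and decode occupancy with an MLP. The feature extractor and MLP are stand-ins, and all shapes and names are assumptions:

```python
# Pixel-aligned implicit function query. Illustrative sketch only.
import torch
import torch.nn.functional as F

def pixel_aligned_query(feat, points, K, mlp):
    """feat:   (1, C, H, W) image feature map from an encoder
       points: (N, 3) 3D points in camera space (z > 0)
       K:      (3, 3) camera intrinsics
       mlp:    maps (N, C+1) -> (N, 1) occupancy logits
       returns (N, 1) occupancy predictions in [0, 1]"""
    H, W = feat.shape[-2:]
    uvw = points @ K.T                           # project to pixel coords
    uv = uvw[:, :2] / uvw[:, 2:3]
    # normalize pixel coordinates to [-1, 1] for grid_sample
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1)
    sampled = F.grid_sample(feat, grid.view(1, 1, -1, 2),
                            align_corners=True)  # (1, C, 1, N)
    sampled = sampled.squeeze(0).squeeze(1).T    # (N, C) pixel features
    z = points[:, 2:3]                           # simple depth cue
    return torch.sigmoid(mlp(torch.cat([sampled, z], dim=-1)))

# Example with random stand-ins:
feat = torch.randn(1, 8, 64, 64)
K = torch.tensor([[60.0, 0, 32], [0, 60.0, 32], [0, 0, 1]])
pts = torch.rand(16, 3) * 0.3 + torch.tensor([0.0, 0.0, 1.0])
mlp = torch.nn.Linear(8 + 1, 1)
print(pixel_aligned_query(feat, pts, K, mlp).shape)  # torch.Size([16, 1])
```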

NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction

Experiments show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.

S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling

  • Ze Yang, Shenlong Wang, Raquel Urtasun
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
This work represents the pedestrian’s shape, pose and skinning weights as neural implicit functions that are directly learned from data, allowing it to handle a wide variety of different pedestrian shapes and poses without explicitly fitting a human parametric body model.
...