Corpus ID: 235313531

Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control

@article{Liu2021NeuralAN,
  title={Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control},
  author={Lingjie Liu and Marc Habermann and V. Rudnev and Kripasindhu Sarkar and Jiatao Gu and C. Theobalt},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.02019}
}
We propose Neural Actor (NA), a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses. Our method is built upon recent neural scene representation and rendering works which learn representations of geometry and appearance from only 2D images. While existing works demonstrated compelling rendering of static scenes and playback of dynamic scenes, photo-realistic reconstruction and rendering of humans with neural implicit methods, in…
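The neural scene representations the abstract refers to render images by alpha-compositing density and color samples along camera rays. A minimal NumPy sketch of that volume-rendering step, with made-up sample values (the function name and the toy three-sample ray are illustrative, not the paper's implementation):

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """Alpha-composite density/color samples along one ray (NeRF-style).

    densities: (N,) non-negative volume densities at N ray samples
    colors:    (N, 3) RGB at each sample
    deltas:    (N,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)      # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)        # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])     # shift: transmittance before each sample
    weights = trans * alphas                        # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)  # composited RGB

# One nearly opaque red sample between two empty ones dominates the ray color:
densities = np.array([0.0, 50.0, 0.0])
colors = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=float)
deltas = np.ones(3)
rgb = volume_render(densities, colors, deltas)  # close to [1, 0, 0]
```

The transmittance shift is the key detail: each sample is weighted by how much light survives all samples in front of it.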
Animatable Neural Radiance Fields from Monocular RGB Video
TLDR: The approach extends neural radiance fields (NeRF) to dynamic scenes with human movements by introducing explicit pose-guided deformation while learning the scene representation network, to compensate for inaccurate pose estimation.
Dynamic Surface Function Networks for Clothed Human Bodies
TLDR: A novel method for temporally coherent reconstruction and tracking of clothed humans using a multi-layer perceptron (MLP) embedded into the canonical space of the SMPL body model, which can be learned in a self-supervised fashion using the principle of analysis-by-synthesis and differentiable rasterization.
Neural Rays for Occlusion-aware Image-based Rendering
Yuan Liu, Sida Peng, +5 authors Wenping Wang · Computer Science · ArXiv · 2021
TLDR: This work proposes a novel neural ray representation for the novel view synthesis task and shows how this representation can be refined by training on the scene, achieving better renderings with only a few training steps.
The Power of Points for Modeling Humans in Clothing
We use the SMPL [10] (for CAPE data) and SMPLX [20] (for ReSynth data) UV maps of 128 × 128 × 3 resolution as pose input, where each pixel is encoded into 64 channels by the pose encoder. The pose…
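Encoding every UV pixel independently into 64 channels, as described above, amounts to a 1×1 convolution, i.e. a shared linear map applied per pixel. A minimal NumPy sketch with the stated 128×128×3 input and 64-channel output (the single ReLU layer and random weights are illustrative assumptions, not the paper's actual pose encoder):

```python
import numpy as np

H = W = 128          # SMPL/SMPLX UV map resolution from the text
C_IN, C_OUT = 3, 64  # 3-channel pose input -> 64-channel encoding

rng = np.random.default_rng(0)
uv_pose = rng.standard_normal((H, W, C_IN))     # stand-in posed UV map
weight = rng.standard_normal((C_IN, C_OUT)) * 0.1
bias = np.zeros(C_OUT)

# A 1x1 conv is the same linear map at every pixel: flatten, multiply, reshape.
flat = uv_pose.reshape(-1, C_IN)                # (H*W, 3)
encoded = np.maximum(flat @ weight + bias, 0.0) # ReLU, (H*W, 64)
encoded = encoded.reshape(H, W, C_OUT)          # (128, 128, 64)
```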
SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes
TLDR: SNARF combines the advantages of linear blend skinning for polygonal meshes with those of neural implicit surfaces by learning a forward deformation field without direct supervision, allowing generalization to unseen poses.

References

SHOWING 1-10 OF 71 REFERENCES
Vid2Actor: Free-viewpoint Animatable Person Synthesis from Video in the Wild
Given an "in-the-wild" video of a person, we reconstruct an animatable model of the person in the video. The output model can be rendered in any body pose to any camera view, via the learned…
Neural Re-rendering of Humans from a Single Image
TLDR: This work proposes a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint, given one input image; body pose and shape are represented as a parametric mesh which can be reconstructed from a single image and easily reposed.
Real-time deep dynamic characters
TLDR: This work proposes a deep video-realistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance, learned in a new weakly supervised way from multi-view imagery, and uses a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing, including dynamics.
High-Fidelity Neural Human Motion Transfer from Monocular Video
TLDR: A new framework which performs high-fidelity and temporally consistent human motion transfer with natural pose-dependent non-rigid deformations for several types of loose garments, and significantly outperforms the state-of-the-art in terms of video realism.
Unsupervised Person Image Synthesis in Arbitrary Poses
TLDR: A novel approach for synthesizing photorealistic images of people in arbitrary poses using generative adversarial learning, which considers a pose-conditioned bidirectional generator that maps the initially rendered image back to the original pose, making it directly comparable to the input image without needing any training image.
Dense Pose Transfer
TLDR: This work proposes a combination of surface-based pose estimation and deep generative models that allows accurate pose transfer, i.e. synthesizing a new image of a person based on a single image of that person and the image of a pose donor.
Neural Rendering and Reenactment of Human Actor Videos
TLDR: The proposed method generates video-realistic animations of real humans under user control; it relies on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person to generate a synthetically rendered version of the video.
Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis
TLDR: A 3D body mesh recovery module is proposed to disentangle pose and shape; it not only models joint locations and rotations but also characterizes personalized body shape, and supports more flexible warping from multiple sources.
ARCH: Animatable Reconstruction of Clothed Humans
TLDR: This paper proposes ARCH (Animatable Reconstruction of Clothed Humans), a novel end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image, and shows numerous qualitative examples of animated, high-quality reconstructed avatars unseen in the literature so far.
Multi-View Neural Human Rendering
TLDR: Comprehensive experiments show NHR significantly outperforms state-of-the-art neural and image-based rendering techniques, especially on hands, hair, nose, feet, etc.