Corpus ID: 245853751

HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video

@article{Weng2022HumanNeRFFR,
  title={HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video},
  author={Chung-Yi Weng and Brian Curless and Pratul P. Srinivasan and Jonathan T. Barron and Ira Kemelmacher-Shlizerman},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.04127}
}
We introduce a free-viewpoint rendering method – HumanNeRF – that works on a given monocular video of a human performing complex body motions, e.g. a video from YouTube. Our method enables pausing the video at any frame and rendering the subject from arbitrary new camera viewpoints or even a full 360-degree camera path for that particular frame and body pose. This task is particularly challenging, as it requires synthesizing photorealistic details of the body, as seen from various camera…
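As a rough illustration of the pipeline the abstract describes (not the authors' code), the NumPy sketch below warps ray samples from posed observation space into a canonical space, with a single rigid transform standing in for the learned motion field, queries a hard-coded canonical field there, and alpha-composites along the ray; freezing the pose while sweeping the camera mimics the 360-degree path. All names and constants are illustrative.

import numpy as np

def toy_canonical_field(pts):
    # Stand-in for the learned canonical NeRF: a soft solid sphere at the origin.
    d = np.linalg.norm(pts, axis=-1)
    sigma = np.maximum(0.0, 5.0 * (0.5 - d))            # density, shape (N,)
    rgb = np.tile([0.8, 0.5, 0.4], (pts.shape[0], 1))   # constant color, (N, 3)
    return sigma, rgb

def warp_to_canonical(pts, R, t):
    # Stand-in for the learned motion field: one rigid (bone) transform,
    # inverting x_obs = x_can @ R.T + t.
    return (pts - t) @ R

def render_ray(origin, direction, R, t, n_samples=64, near=0.0, far=4.0):
    # Quadrature volume rendering: alpha compositing of samples along the ray.
    ts = np.linspace(near, far, n_samples)
    pts = origin + ts[:, None] * direction              # observation-space samples
    sigma, rgb = toy_canonical_field(warp_to_canonical(pts, R, t))
    alpha = 1.0 - np.exp(-sigma * (far - near) / n_samples)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return ((alpha * trans)[:, None] * rgb).sum(axis=0)

# Freeze one frame's pose (R, t) and sweep the camera for a 360-degree path.
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])             # subject sits at t
for theta in np.linspace(0.0, 2.0 * np.pi, 4, endpoint=False):
    origin = t + 2.0 * np.array([np.sin(theta), 0.0, -np.cos(theta)])
    direction = (t - origin) / np.linalg.norm(t - origin)
    print(render_ray(origin, direction, R, t))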
NeuMan: Neural Human Radiance Field from a Single Video
TLDR
The method is able to learn subject-specific details, including cloth wrinkles and accessories, from just a 10-second video clip, and to provide high-quality renderings of the human under novel poses, from novel views, together with the background.
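NeuMan fits separate human and scene radiance fields and renders them jointly; the sketch below shows one standard way to composite two fields sampled at shared ray depths, with toy densities and colors in place of the paper's networks, and is only an assumption about how such composition can be done.

import numpy as np

def composite_two_fields(ts, sigma_h, rgb_h, sigma_s, rgb_s):
    # Densities add (both media attenuate the ray); each sample's color is the
    # density-weighted blend of the two fields' colors.
    sigma = sigma_h + sigma_s
    w_h = sigma_h / np.maximum(sigma, 1e-9)
    rgb = w_h[:, None] * rgb_h + (1.0 - w_h)[:, None] * rgb_s
    delta = (ts[-1] - ts[0]) / len(ts)
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return ((alpha * trans)[:, None] * rgb).sum(axis=0)

ts = np.linspace(0.0, 4.0, 64)
sigma_h = 10.0 * np.exp(-20.0 * (ts - 2.0) ** 2)        # human occupies mid-ray
sigma_s = np.full_like(ts, 0.05)                        # thin background medium
rgb_h = np.tile([0.9, 0.6, 0.5], (64, 1))
rgb_s = np.tile([0.2, 0.3, 0.8], (64, 1))
print(composite_two_fields(ts, sigma_h, rgb_h, sigma_s, rgb_s))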
Real-Time Neural Character Rendering with Pose-Guided Multiplane Images
TLDR
This work proposes pose-guided multiplane image (MPI) synthesis, which can render an animatable character in real scenes with photorealistic quality, and demonstrates advantageous novel-view synthesis quality over state-of-the-art approaches for characters with challenging motions.
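For context, an MPI is rendered by blending a stack of fronto-parallel RGBA planes front to back with the "over" operator; the minimal sketch below shows that compositing step only, with random planes standing in for the pose-guided synthesis network.

import numpy as np

def composite_mpi(rgba_planes):
    # rgba_planes: (D, H, W, 4), ordered front (nearest) to back.
    out = np.zeros(rgba_planes.shape[1:3] + (3,))
    transmittance = np.ones(rgba_planes.shape[1:3] + (1,))
    for plane in rgba_planes:               # front-to-back "over" compositing
        rgb, a = plane[..., :3], plane[..., 3:4]
        out += transmittance * a * rgb
        transmittance *= 1.0 - a
    return out

planes = np.random.rand(8, 4, 4, 4)         # 8 RGBA planes of a tiny 4x4 view
print(composite_mpi(planes).shape)          # -> (4, 4, 3)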
UV Volumes for Real-time Rendering of Editable Free-view Human Performance
TLDR
This model can render 960 × 540 images at 30 FPS on average with photo-realism comparable to state-of-the-art methods, and the use of the neural texture stack (NTS) enables interesting applications, e.g., retexturing.
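The decoupling behind this speed is that the volume is rendered to per-pixel UV coordinates and color comes from a texture lookup, so swapping the texture retextures the subject; the bilinear sampler below is a plain-image stand-in for the paper's neural texture stack, with illustrative inputs.

import numpy as np

def sample_texture(texture, uv):
    # Bilinear lookup. texture: (H, W, 3); uv: (N, 2) in [0, 1].
    H, W = texture.shape[:2]
    x, y = uv[:, 0] * (W - 1), uv[:, 1] * (H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    fx, fy = (x - x0)[:, None], (y - y0)[:, None]
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bot = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bot

texture = np.random.rand(256, 256, 3)       # swap this image to retexture
uv = np.random.rand(5, 2)                   # pretend: volume-rendered UVs
print(sample_texture(texture, uv))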
TAVA: Template-free Animatable Volumetric Actors
TLDR
This paper proposes TAVA, a method to create Template-free Animatable Volumetric Actors based on neural representations; it relies solely on multi-view data and a tracked skeleton to create a volumetric model of an actor that can be animated at test time given a novel pose.
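The skeleton-driven deformation such template-free actors rest on is linear blend skinning (LBS): canonical points move under a novel pose by a weight-blended mix of per-bone rigid transforms. TAVA learns the skinning weights jointly with the field; in the sketch below they are toy inputs.

import numpy as np

def lbs(points, weights, rotations, translations):
    # points: (N, 3); weights: (N, B); rotations: (B, 3, 3); translations: (B, 3).
    # Per-bone transform of every point: (B, N, 3)
    posed = np.einsum('bij,nj->bni', rotations, points) + translations[:, None, :]
    # Weighted blend over bones: (N, 3)
    return np.einsum('nb,bni->ni', weights, posed)

pts = np.random.rand(4, 3)
w = np.random.rand(4, 2)
w /= w.sum(axis=1, keepdims=True)            # normalized weights over 2 bones
R = np.stack([np.eye(3), np.eye(3)])
t = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])  # a toy "novel pose"
print(lbs(pts, w, R, t))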
KeypointNeRF: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints
TLDR
This work investigates common issues with existing spatial encodings and proposes a simple yet highly effective approach to modeling high-fidelity volumetric avatars from sparse views: encoding relative spatial 3D information via sparse 3D keypoints, which is robust to the sparsity of viewpoints and to the cross-dataset domain gap.
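A hedged sketch of the encoding idea: describe a query point not by absolute world coordinates but by its depth relative to each sparse 3D keypoint in a camera's frame, which transfers across subjects and datasets. The paper's actual encoding differs in details; the helper below is illustrative.

import numpy as np

def relative_depth_encoding(query, keypoints, cam_R, cam_t):
    # query: (3,); keypoints: (K, 3); cam_R: (3, 3); cam_t: (3,) -> (K,) code.
    z_q = (cam_R @ query + cam_t)[2]             # query depth in camera frame
    z_k = (keypoints @ cam_R.T + cam_t)[:, 2]    # keypoint depths
    return z_q - z_k                             # per-keypoint relative depth

kps = np.random.rand(13, 3)                      # e.g. sparse body keypoints
print(relative_depth_encoding(np.array([0.2, 0.1, 1.5]), kps, np.eye(3), np.zeros(3)))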
Advances in Neural Rendering
TLDR
This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations.
DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes
TLDR
Experiments demonstrate that DeVRF achieves a two-orders-of-magnitude speedup (100× faster) with high-fidelity results on par with previous state-of-the-art approaches.
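Part of why explicit voxel grids are fast is that a field query becomes a trilinear interpolation into a dense grid rather than an MLP forward pass; the sketch below shows that lookup alone, omitting the learned deformation grid DeVRF adds for dynamic scenes.

import numpy as np

def trilinear(grid, p):
    # grid: (D, H, W) of densities; p: (3,) in [0, 1]^3, (z, y, x) order.
    idx = p * (np.array(grid.shape) - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, np.array(grid.shape) - 1)
    f = idx - lo
    c = 0.0
    for dz in (0, 1):                        # blend the 8 surrounding corners
        for dy in (0, 1):
            for dx in (0, 1):
                corner = grid[(lo[0], hi[0])[dz], (lo[1], hi[1])[dy], (lo[2], hi[2])[dx]]
                w = (1 - f[0], f[0])[dz] * (1 - f[1], f[1])[dy] * (1 - f[2], f[2])[dx]
                c += w * corner
    return c

grid = np.random.rand(16, 16, 16)            # a tiny density grid
print(trilinear(grid, np.array([0.3, 0.7, 0.5])))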