Corpus ID: 245117349

HeadNeRF: A Real-time NeRF-based Parametric Head Model

Yang Hong, Bo Peng, Haiyao Xiao, Ligang Liu, Juyong Zhang
In this paper, we propose HeadNeRF, a novel NeRF-based parametric head model that integrates the neural radiance field into the parametric representation of the human head. It can render high-fidelity head images in real time on modern GPUs, and supports direct control over the rendering pose and various semantic attributes of the generated images. Unlike existing related parametric models, we use the neural radiance field as a novel 3D proxy instead of the traditional 3D textured mesh, which… 
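The volume-rendering step that NeRF-based models such as HeadNeRF build on can be sketched as follows. This is a minimal numpy illustration of the standard alpha-compositing quadrature along one ray, not the paper's implementation; the function name and sample values are ours.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite samples along one ray (standard NeRF quadrature).

    sigmas: (N,) volume densities at the samples
    colors: (N, 3) RGB radiance at the samples
    deltas: (N,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)  # opacity of each segment
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    weights = trans * alphas                 # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0), weights.sum()

# A nearly transparent green sample in front of a dense red one:
rgb, acc = composite_ray(
    np.array([0.01, 50.0]),
    np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]),
    np.array([0.1, 0.1]),
)
```

The dense second sample dominates the composited color, while the accumulated weight `acc` approaches 1 as the ray saturates.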
Local anatomically-constrained facial performance retargeting
This work presents a new method for high-fidelity offline facial performance retargeting that is neither expensive nor artifact-prone. The method outperforms traditional deformation-transfer algorithms and achieves quality comparable to the blendshape-based techniques used in production, while requiring significantly fewer input shapes at setup time.
Advances in Neural Rendering
This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations.
Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
This work proposes Neural-DynamicReconstruction (NDR), a template-free method to recover high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D camera that outperforms existing monocular dynamic reconstruction methods.
Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields
A new NeRF-based conditional 3D face synthesis framework is proposed, which enables 3D controllability over the generated face images by imposing explicit 3D conditions from 3D face priors, and effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh.
Neural Parameterization for Dynamic Human Head Editing
A hybrid 2D texture consisting of an explicit texture map for easy editing and implicit view and time-dependent residuals to model temporal and view variations is developed.
EventNeRF: Neural Radiance Fields from a Single Colour Event Camera
It is demonstrated that it is possible to learn a NeRF suitable for novel-view synthesis in the RGB space from asynchronous event streams; these models achieve high visual accuracy on rendered novel views of challenging scenes in the RGB space, despite being trained with substantially fewer data.
Generative Neural Articulated Radiance Fields
This work develops a 3D GAN framework that learns to generate radiance fields of human bodies or faces in a canonical pose and warp them, using an explicit deformation, into a desired body pose or facial expression, and demonstrates the first high-quality radiance field generation results for human bodies.
VoLux-GAN: A Generative Model for 3D Face Synthesis with HDRI Relighting
VoLux-GAN is a generative framework to synthesize 3D-aware faces with convincing relighting using a volumetric HDRI relighting method that can efficiently accumulate albedo, diffuse and specular lighting contributions along each 3D ray for any desired HDR environmental map.
CelebV-HQ: A Large-Scale Video Facial Attributes Dataset
This work proposes a large-scale, high-quality, and diverse video dataset with rich facial attribute annotations, named the High-Quality Celebrity Video Dataset (CelebV-HQ), and conducts a comprehensive analysis in terms of age, ethnicity, brightness stability, motion smoothness, head pose diversity, and data quality.
SD-GAN: Semantic Decomposition for Face Image Synthesis with Discrete Attribute
An innovative framework to tackle challenging facial discrete attribute synthesis via semantic decomposing, dubbed SD-GAN, and the combination of prior basis and offset latent representation enable the method to synthesize photo-realistic face images with discrete attributes.


Learning Compositional Radiance Fields of Dynamic Human Heads
This work proposes a novel compositional 3D representation that combines the best of previous methods to produce both higher-resolution and faster results and shows that the learned dynamic radiance field can be used to synthesize novel unseen expressions based on a global animation code.
Pixel-aligned Volumetric Avatars
This paper devises a novel approach for predicting volumetric avatars of the human head from just a small number of inputs. It enables generalization across identities through a novel parameterization that combines neural radiance fields with local, pixel-aligned features extracted directly from the inputs, side-stepping the need for very deep or complex networks.
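The pixel-aligned conditioning described above amounts to projecting a 3D query point into each input image and bilinearly sampling an encoder's feature map at that continuous pixel location. A minimal numpy sketch of that sampling step (function name and test data are ours, not the paper's code):

```python
import numpy as np

def sample_pixel_aligned(feat, uv):
    """Bilinearly sample a feature map at continuous pixel coordinates.

    feat: (H, W, C) feature map from an image encoder
    uv:   (N, 2) query points in pixel coordinates (x, y)
    """
    h, w, _ = feat.shape
    x = np.clip(uv[:, 0], 0, w - 1)
    y = np.clip(uv[:, 1], 0, h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = (x - x0)[:, None], (y - y0)[:, None]
    # Weighted sum of the four neighbouring feature vectors.
    return ((1 - wx) * (1 - wy) * feat[y0, x0]
            + wx * (1 - wy) * feat[y0, x1]
            + (1 - wx) * wy * feat[y1, x0]
            + wx * wy * feat[y1, x1])

# Feature map whose single channel equals the x coordinate:
feat = np.tile(np.arange(4.0)[None, :, None], (4, 1, 1))
out = sample_pixel_aligned(feat, np.array([[1.5, 2.0]]))
```

Sampling the linear ramp at x = 1.5 recovers 1.5 exactly, as bilinear interpolation is exact for linear signals.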
FastNeRF: High-Fidelity Neural Rendering at 200FPS
FastNeRF is proposed, the first NeRF-based system capable of rendering high-fidelity photorealistic images at 200 Hz on a high-end consumer GPU, at least an order of magnitude faster than existing work on accelerating NeRF, while maintaining visual quality and extensibility.
Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction
This work combines a scene representation network with a low-dimensional morphable model which provides explicit control over pose and expressions and shows that this learned volumetric representation allows for photorealistic image generation that surpasses the quality of state-of-the-art video-based reenactment methods.
PlenOctrees for Real-time Rendering of Neural Radiance Fields
It is shown that it is possible to train NeRFs to predict a spherical harmonic representation of radiance, removing the viewing direction as an input to the neural network, and PlenOctrees can be directly optimized to further minimize the reconstruction loss, which leads to equal or better quality compared to competing methods.
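The spherical-harmonic trick above replaces the view-direction network input with per-point SH coefficients, so view-dependent color becomes a closed-form evaluation. A minimal numpy sketch for degree-1 real spherical harmonics (constants are the standard real-SH normalization; function names and the sigmoid squashing are illustrative assumptions, not PlenOctrees' exact code):

```python
import numpy as np

def sh_basis_deg1(d):
    """Real spherical-harmonic basis up to degree 1 for a unit direction d."""
    x, y, z = d
    return np.array([
        0.28209479177387814,      # Y_0^0 (constant term)
        -0.4886025119029199 * y,  # Y_1^{-1}
        0.4886025119029199 * z,   # Y_1^0
        -0.4886025119029199 * x,  # Y_1^1
    ])

def sh_color(coeffs, d):
    """View-dependent RGB from per-channel SH coefficients of shape (4, 3)."""
    raw = sh_basis_deg1(d) @ coeffs     # (3,) raw radiance per channel
    return 1.0 / (1.0 + np.exp(-raw))   # sigmoid keeps color in [0, 1]

coeffs = np.zeros((4, 3))
coeffs[0] = 2.0  # only the direction-independent DC component is set
rgb_front = sh_color(coeffs, np.array([0.0, 0.0, 1.0]))
```

With only the DC coefficient set, the color is identical from every viewing direction; the degree-1 terms add smooth view dependence without any network evaluation at render time.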
Neural 3D Mesh Renderer
This work proposes an approximate gradient for rasterization that enables the integration of rendering into neural networks and performs gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time.
Prior-Guided Multi-View 3D Head Reconstruction
This paper models the head geometry with a learnable signed distance field (SDF) and optimizes it via an implicit differentiable renderer, guided by several human head priors, including facial prior knowledge, head semantic-segmentation information, and 2D hair-orientation maps, leading to a high-quality integrated 3D head model.
AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis
Experimental results demonstrate that the novel framework can produce high-fidelity and natural results, and support free adjustment of audio signals, viewing directions, and background images.
Pulsar: Efficient Sphere-based Neural Rendering
Pulsar is an efficient sphere-based differentiable rendering module that is orders of magnitude faster than competing techniques, modular, and easy to use thanks to its tight integration with PyTorch, and it enables a plethora of applications, ranging from 3D reconstruction to neural rendering.
Fast Training of Neural Lumigraph Representations using Meta Learning
This work develops a new neural rendering approach with the goal of quickly learning a high-quality representation which can also be rendered in real-time, and achieves similar or better novel view synthesis results in a fraction of the time that competing methods require.