Controllable Radiance Fields for Dynamic Face Synthesis
@article{Zhuang2022ControllableRF,
  title   = {Controllable Radiance Fields for Dynamic Face Synthesis},
  author  = {Peiye Zhuang and Liqian Ma and Oluwasanmi Koyejo and Alexander G. Schwing},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2210.05825}
}
Recent work on 3D-aware image synthesis has achieved compelling results using advances in neural rendering. However, 3D-aware synthesis of face dynamics has not received much attention. Here, we study how to explicitly control generative model synthesis of face dynamics exhibiting non-rigid motion (e.g., facial expression change), while simultaneously ensuring 3D-awareness. For this we propose a Controllable Radiance Field (CoRF): 1) Motion control is achieved by embedding motion features…
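The core idea of a motion-controllable radiance field can be illustrated with a toy model: a network that maps a 3D point and a low-dimensional motion code to color and density, so that changing the code changes the rendered expression. The sketch below is illustrative only; the class, dimensions, and random (untrained) weights are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(x, num_freqs=4):
    """NeRF-style encoding: each coordinate mapped to sin/cos at octave frequencies."""
    freqs = 2.0 ** np.arange(num_freqs)           # (F,)
    angles = x[..., None] * freqs                 # (..., D, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)         # (..., D * 2F)

class ConditionalRadianceField:
    """Toy two-layer MLP: (encoded 3D point, motion code z) -> (RGB, density).
    Weights are random here; a real model would be trained (e.g., adversarially)."""
    def __init__(self, num_freqs=4, z_dim=8, hidden=64):
        in_dim = 3 * 2 * num_freqs + z_dim
        self.num_freqs = num_freqs
        self.W1 = rng.standard_normal((in_dim, hidden)) / np.sqrt(in_dim)
        self.W2 = rng.standard_normal((hidden, 4)) / np.sqrt(hidden)

    def __call__(self, pts, z):
        feats = positional_encoding(pts, self.num_freqs)            # (N, 24)
        z_rep = np.broadcast_to(z, (len(pts), len(z)))              # tile code per point
        h = np.maximum(np.concatenate([feats, z_rep], axis=-1) @ self.W1, 0.0)
        out = h @ self.W2
        rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))   # colors squashed into [0, 1]
        sigma = np.maximum(out[:, 3], 0.0)        # non-negative volume density
        return rgb, sigma

field = ConditionalRadianceField()
pts = rng.standard_normal((16, 3))                # 16 sample points along camera rays
z_smile = rng.standard_normal(8)                  # one hypothetical motion code
rgb, sigma = field(pts, z_smile)
print(rgb.shape, sigma.shape)                     # (16, 3) (16,)
```

Because the motion code enters as an extra input rather than by deforming geometry directly, the same field can be rendered from any camera pose, which is what keeps the synthesis 3D-aware while the code controls the non-rigid motion.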
2 Citations
Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars
- Computer Science, ArXiv
- 2022
A novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images is proposed, together with a 3D representation called Generative Texture-Rasterized Tri-planes that achieves both deformation accuracy and topological flexibility.
References
Showing 1–10 of 73 references
Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction
- Computer Science, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
This work combines a scene representation network with a low-dimensional morphable model which provides explicit control over pose and expressions and shows that this learned volumetric representation allows for photorealistic image generation that surpasses the quality of state-of-the-art video-based reenactment methods.
GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis
- Computer Science, NeurIPS
- 2020
This paper proposes a generative model for radiance fields, which have recently proven successful for novel-view synthesis of a single scene, and introduces a multi-scale patch-based discriminator that enables synthesis of high-resolution images while training the model from unposed 2D images alone.
Warp-guided GANs for single-photo facial animation
- Computer Science, ACM Trans. Graph.
- 2018
This paper introduces a novel method for real-time portrait animation from a single photo: it factorizes out the nonlinear geometric transformations exhibited in facial expressions with lightweight 2D warps and leaves appearance-detail synthesis to conditional generative neural networks, yielding high-fidelity facial animation.
Learning an animatable detailed 3D face model from in-the-wild images
- Computer Science, ACM Transactions on Graphics
- 2021
This work presents the first approach that regresses 3D face shape and animatable details that are specific to an individual but change with expression, and introduces a novel detail-consistency loss that disentangles person-specific details from expression-dependent wrinkles.
StyleNeRF: A Style-based 3D-Aware Generator for High-resolution Image Synthesis
- Computer Science, ICLR
- 2022
StyleNeRF is a 3D-aware generative model for photo-realistic, high-resolution image synthesis with high multi-view consistency; it enables control of camera poses and of different levels of style, generalizes to unseen views, and supports challenging tasks including style mixing and semantic editing.
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
- Computer Science, ECCV
- 2020
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
Generative Multiplane Images: Making a 2D GAN 3D-Aware
- Computer Science, ECCV
- 2022
This work modifies a classical GAN, i.e., …
Transformation-Grounded Image Generation Network for Novel 3D View Synthesis
- Computer Science, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017
We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Our approach first explicitly infers the parts of the geometry visible both in the input…
RenderNet: A deep convolutional network for differentiable rendering from 3D shapes
- Computer Science, NeurIPS
- 2018
RenderNet is presented, a differentiable rendering convolutional network with a novel projection unit that can render 2D images from 3D shapes with high performance and can be used in inverse rendering tasks to estimate shape, pose, lighting and texture from a single image.
paGAN: real-time avatars using dynamic textures
- Computer Science, ACM Trans. Graph.
- 2018
This work produces state-of-the-art quality image and video synthesis and is, to the authors' knowledge, the first able to generate a dynamically textured avatar with a mouth interior, all from a single image.