Corpus ID: 252917593

3DFaceShop: Explicitly Controllable 3D-Aware Portrait Generation

Junshu Tang, Bo Zhang, Binxin Yang, Ting Zhang, Dong Chen, Lizhuang Ma and Fang Wen
In contrast to the traditional avatar creation pipeline, which is a costly process, contemporary generative approaches learn the data distribution directly from photographs. While many works extend unconditional generative models and achieve some level of controllability, it remains challenging to ensure multi-view consistency, especially under large poses. In this work, we propose a network that generates 3D-aware portraits while being controllable according to semantic parameters…

RigNeRF: Fully Controllable Neural 3D Portraits

  • ShahRukh Athar
  • Computer Science
    2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2022
This work proposes RigNeRF, a system that goes beyond novel view synthesis and enables full control of head pose and facial expressions learned from a single portrait video. The method's effectiveness is demonstrated on free-view synthesis of a portrait scene with explicit head pose and expression controls.

FENeRF: Face Editing in Neural Radiance Fields

This work proposes FENeRF, a 3D-aware generator that can produce view-consistent and locally editable portrait images, and reveals that jointly learning semantics and texture helps to generate finer geometry.

Visual Object Networks: Image Generation with Disentangled 3D Representations

A new generative model, Visual Object Networks (VONs), synthesizes natural images of objects with a disentangled 3D representation that enables many 3D operations, such as changing the viewpoint of a generated image, editing shape and texture, interpolating linearly in texture and shape space, and transferring appearance across different objects and viewpoints.

PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering

The proposed Portrait Image Neural Renderer can generate photo-realistic portrait images with accurate movements according to intuitive modifications, and is extended to tackle the audio-driven facial reenactment task by extracting sequential motions from audio inputs.

Lifting 2D StyleGAN for 3D-Aware Face Generation

Qualitative and quantitative results show the superiority of the approach over existing 3D-controllable GANs in content controllability while generating realistic, high-quality images.

GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

This paper proposes a generative model for radiance fields, which have recently proven successful for novel view synthesis of a single scene, and introduces a multi-scale patch-based discriminator that enables synthesis of high-resolution images while training the model from unposed 2D images alone.

IDE-3D: Interactive Disentangled Editing for High-Resolution 3D-aware Portrait Synthesis

This work's 3D portrait image generator allows users to perform interactive global and local editing of shape and texture in a view-consistent way, and learns high-quality geometry from a collection of 2D images without multi-view supervision.

GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields

The key hypothesis is that incorporating a compositional 3D scene representation into the generative model leads to more controllable image synthesis, and a fast and realistic image synthesis model is proposed.

Cross-Domain and Disentangled Face Manipulation with 3D Guidance

This work proposes the first method to manipulate faces in arbitrary domains using a human 3DMM together with a pre-trained StyleGAN2, which guarantees disentangled and precise control over each semantic attribute, and develops an intuitive editing interface that supports user-friendly control and instant feedback.

Real-Time Neural Character Rendering with Pose-Guided Multiplane Images

This work proposes pose-guided multiplane image (MPI) synthesis, which can render an animatable character in real scenes with photorealistic quality, and demonstrates advantageous novel-view synthesis quality over state-of-the-art approaches for characters with challenging motions.