Editing Conditional Radiance Fields

@article{Liu2021EditingCR,
  title={Editing Conditional Radiance Fields},
  author={Steven Liu and Xiuming Zhang and Zhoutong Zhang and Richard Zhang and Junyan Zhu and Bryan C. Russell},
  journal={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={5753-5763}
}
A neural radiance field (NeRF) is a scene model supporting high-quality view synthesis, optimized per scene. In this paper, we explore enabling user editing of a category-level NeRF – also known as a conditional radiance field – trained on a shape category. Specifically, we introduce a method for propagating coarse 2D user scribbles to the 3D space, to modify the color or shape of a local region. First, we propose a conditional radiance field that incorporates new modular network components… 
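
To make the editing idea concrete, below is a minimal PyTorch sketch of a conditional radiance field whose density and color branches are conditioned on per-instance shape and color codes; editing then freezes the network and optimizes only the color code against a user's scribble constraint. All layer sizes, variable names, and the toy optimization target are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class ConditionalRadianceField(nn.Module):
    """Toy conditional NeRF: density from (position, shape code); color
    from (trunk features, view direction, color code). Layer sizes are
    illustrative, not the paper's architecture."""
    def __init__(self, code_dim=32, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma_head = nn.Linear(hidden, 1)
        self.color_head = nn.Sequential(
            nn.Linear(hidden + 3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, x, d, shape_code, color_code):
        h = self.trunk(torch.cat([x, shape_code], dim=-1))
        sigma = torch.relu(self.sigma_head(h))          # density >= 0
        rgb = self.color_head(torch.cat([h, d, color_code], dim=-1))
        return sigma, rgb

# Editing: freeze the network and optimize only the color code so that
# colors at scribbled pixels move toward the user's target color.
model = ConditionalRadianceField()
for p in model.parameters():
    p.requires_grad_(False)

shape_code = torch.zeros(1, 32)
color_code = torch.zeros(1, 32, requires_grad=True)
opt = torch.optim.Adam([color_code], lr=1e-2)

x = torch.rand(64, 3)                                   # stand-in samples on scribbled rays
d = torch.nn.functional.normalize(torch.rand(64, 3), dim=-1)
target = torch.tensor([1.0, 0.0, 0.0]).expand(64, 3)    # scribble color: red

for _ in range(100):
    _, rgb = model(x, d, shape_code.expand(64, -1), color_code.expand(64, -1))
    loss = ((rgb - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()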

Decomposing NeRF for Editing via Feature Field Distillation

This work tackles the problem of semantic scene decomposition of NeRFs to enable query-based local editing of the represented 3D scenes, distilling the knowledge of off-the-shelf, self-supervised 2D image feature extractors into a 3D feature field optimized in parallel to the radiance field.
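
A minimal sketch of the distillation idea, assuming PyTorch: per-sample features from a 3D feature field are alpha-composited with the radiance field's densities, exactly like colors, and matched against the frozen 2D extractor's feature at each ray's pixel. The shapes and the random stand-in tensors are assumptions for illustration.

import torch

def composite(values, sigmas, deltas):
    """Standard volume-rendering compositing: alpha-blend per-sample
    values (colors or features) along each ray."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)            # [R, S]
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alphas * trans                               # [R, S]
    return (weights[..., None] * values).sum(dim=1)        # [R, C]

# Distillation: a feature field f(x) is rendered with the radiance
# field's density and matched to the 2D extractor's feature map.
R, S, C = 1024, 64, 384                 # rays, samples per ray, feature channels
sigmas = torch.rand(R, S)               # densities from the radiance field
deltas = torch.full((R, S), 0.01)       # sample spacing along each ray
feats3d = torch.randn(R, S, C, requires_grad=True)  # stand-in feature-field outputs
teacher = torch.randn(R, C)             # stand-in: 2D features at the rays' pixels

rendered = composite(feats3d, sigmas, deltas)
loss = (rendered - teacher).pow(2).mean()   # distillation loss
loss.backward()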

NeRF-In: Free-Form NeRF Inpainting with RGB-D Priors

This paper introduces the first framework that enables users to remove unwanted objects or retouch undesired regions in a 3D scene represented by a pre-trained NeRF without any category-specific data and training.

NeRF-Editing: Geometry Editing of Neural Radiance Fields

This paper proposes a method that allows users to perform controllable shape deformation on the implicit representation of the scene, and synthesizes the novel view images of the edited scene without re-training the network.
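
One way to realize rendering without re-training is sketched below, assuming a PyTorch setup: each ray sample in the edited scene is warped back into the original (canonical) space before querying the frozen radiance field. NeRF-Editing derives this warp from a user-deformed mesh proxy; the small MLP and the stand-in NeRF here are illustrative placeholders for that machinery.

import torch
import torch.nn as nn

class InverseWarp(nn.Module):
    """Stand-in for the mesh-driven warp: maps a point in the edited
    scene back to the original (canonical) space."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, x):
        return x + self.net(x)      # identity plus a learned offset

warp = InverseWarp()
# Stand-in frozen NeRF returning (density, color) for canonical points.
frozen_nerf = lambda x: (torch.relu(x.sum(-1)), torch.sigmoid(x))

samples = torch.rand(1024, 3)        # ray samples in the deformed scene
canonical = warp(samples)            # bend samples back to the unedited scene
sigma, rgb = frozen_nerf(canonical)  # query the unmodified network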

NeRFReN: Neural Radiance Fields with Reflections

This work proposes to split a scene into transmitted and reflected components, modeling the two components with separate neural radiance fields, and exploits geometric priors and carefully designed training strategies to achieve reasonable decomposition results.
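
A sketch of the two-field compositing, assuming PyTorch and treating the per-pixel renders as stand-ins: the final pixel is the transmitted color plus a reflection fraction times the reflected color. In the actual method, beta and both component images are themselves volume-rendered, and the geometric priors enter as additional loss terms.

import torch

R = 1024
img_trans = torch.rand(R, 3)   # rendered transmitted component
img_refl = torch.rand(R, 3)    # rendered reflected component
beta = torch.rand(R, 1)        # per-pixel reflection fraction in [0, 1]

img_pred = img_trans + beta * img_refl
target = torch.rand(R, 3)
photometric_loss = (img_pred - target).pow(2).mean()
# Geometric priors (e.g., smoothness on the transmitted geometry) would
# be added as extra regularization terms here.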

DM-NeRF: 3D Scene Geometry Decomposition and Manipulation from 2D Images

The DM-NeRF method is among the first to simultaneously reconstruct, decompose, manipulate, and render complex 3D scenes in a single pipeline, allowing any object of interest to be freely manipulated in 3D space through translation, rotation, size adjustment, and deformation.

AE-NeRF: Auto-Encoding Neural Radiance Fields for 3D-Aware Object Manipulation

We propose a novel framework for 3D-aware object manipulation, called Auto-Encoding Neural Radiance Fields (AE-NeRF). Our model, which is formulated in an auto-encoder architecture, extracts…

Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation

A novel method is presented for performing 2D, 3D-aware image content manipulation while enabling high-quality novel view synthesis; it supports editing operations such as mixing scenes, deforming objects, and inserting objects into scenes, all while producing photo-realistic results.

Depth-supervised NeRF: Fewer Views and Faster Training for Free

This work introduces DS-NeRF (Depth-supervised Neural Radiance Fields), a loss for learning radiance fields that takes advantage of readily available depth supervision, such as the sparse 3D points recovered by structure-from-motion, and can render better images given fewer training views while training 2-3x faster.
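
A sketch of the supervision signal, assuming PyTorch: render an expected termination depth from the volume-rendering weights and penalize disagreement with the sparse SfM depths. DS-NeRF's actual loss encourages the full ray termination distribution to concentrate at the SfM depth; the simpler expected-depth L2 penalty below is a stand-in that conveys the idea, and all shapes and tensors are illustrative.

import torch

def expected_depth(sigmas, deltas, t_vals):
    """Expected ray termination depth under volume-rendering weights."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alphas * trans
    return (weights * t_vals).sum(dim=-1)

R, S = 512, 64
sigmas = torch.rand(R, S, requires_grad=True)   # stand-in densities
t_vals = torch.linspace(0.1, 4.0, S).expand(R, S)
deltas = torch.full((R, S), (4.0 - 0.1) / S)
sfm_depth = torch.rand(R) * 4.0                 # sparse depths at keypoint rays

depth_loss = (expected_depth(sigmas, deltas, t_vals) - sfm_depth).pow(2).mean()
depth_loss.backward()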

Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects

This paper proposes a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations; it generates deformable radiance fields that build a dense correspondence between the density fields of the objects and encode their appearances in a shared template.

CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields

This work introduces a disentangled conditional NeRF architecture that allows individual control over both shape and appearance, and proposes an inverse optimization method that accurately projects an input image to its latent codes, enabling editing of real images.
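
A sketch of the text-driven manipulation loop, assuming PyTorch: freeze a CLIP-style encoder pair, render the conditional NeRF, and optimize only the appearance (or shape) latent so the render's embedding approaches the prompt's embedding. The render-plus-encode step is replaced by a fixed random projection here; all names and dimensions are illustrative, not the paper's modules.

import torch
import torch.nn.functional as F

def clip_similarity_loss(image_emb, text_emb):
    """Cosine distance between (stand-in) image and text embeddings."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    return 1.0 - (image_emb * text_emb).sum(dim=-1).mean()

appearance_code = torch.zeros(1, 64, requires_grad=True)
opt = torch.optim.Adam([appearance_code], lr=1e-2)

text_emb = torch.randn(1, 512)      # stand-in for the prompt's CLIP features
proj = torch.randn(64, 512)         # stand-in for render + CLIP image encoder

for _ in range(100):
    image_emb = appearance_code @ proj
    loss = clip_similarity_loss(image_emb, text_emb)
    opt.zero_grad()
    loss.backward()
    opt.step()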

References

Showing 1-10 of 81 references.

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
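
One concrete ingredient worth showing is NeRF's positional encoding, which lifts each input coordinate into a bank of sine/cosine features so the MLP can fit high-frequency geometry and appearance. Below is a minimal PyTorch sketch; the frequency count of 10 matches the paper's default for positions, while the test tensor is illustrative.

import torch

def positional_encoding(x, num_freqs=10):
    """Map each coordinate to [sin(2^k * pi * x), cos(2^k * pi * x)]
    for k = 0..num_freqs-1, as in the NeRF paper."""
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi      # [F]
    angles = x[..., None] * freqs                          # [..., 3, F]
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)  # [..., 3, 2F]
    return enc.flatten(start_dim=-2)                       # [..., 6F]

pts = torch.rand(4, 3)
print(positional_encoding(pts).shape)   # torch.Size([4, 60])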

pixelNeRF: Neural Radiance Fields from One or Few Images

We propose pixelNeRF, a learning framework that predicts a continuous neural scene representation conditioned on one or few input images. The existing approach for constructing neural radiance fields…
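
The core conditioning mechanism can be sketched as follows, assuming PyTorch: project each 3D query point into the input view, bilinearly sample the image's CNN feature map at that pixel, and feed the sampled feature to the NeRF MLP alongside the encoded position. The camera model, shapes, and random tensors below are simplified stand-ins.

import torch
import torch.nn.functional as F

feat_map = torch.randn(1, 64, 32, 32)      # CNN features of the input image
pts_cam = torch.rand(1, 100, 3) + 0.5      # query points in camera coordinates

# Pinhole projection to normalized image coordinates in [-1, 1].
uv = pts_cam[..., :2] / pts_cam[..., 2:3]
grid = uv.clamp(-1, 1).unsqueeze(2)         # [1, 100, 1, 2] for grid_sample
pt_feats = F.grid_sample(feat_map, grid, align_corners=True)  # [1, 64, 100, 1]
pt_feats = pt_feats.squeeze(-1).transpose(1, 2)               # [1, 100, 64]
# pt_feats would be concatenated with encoded positions and fed to the MLP.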

GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

This paper proposes a generative model for radiance fields, which have recently proven successful for novel view synthesis of a single scene, and introduces a multi-scale patch-based discriminator that enables synthesis of high-resolution images while the model is trained from unposed 2D images alone.

NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections

A learning-based method for synthesizing novel views of complex scenes from only unstructured collections of in-the-wild photographs is presented and applied to internet photo collections of famous landmarks, demonstrating temporally consistent novel view renderings that are significantly closer to photorealism than the prior state of the art.

Free View Synthesis

This work presents a method for novel view synthesis from input images that are freely distributed around a scene; it can synthesize images for free camera movement through the scene and works for general scenes with unconstrained geometric layouts.

AppProp: all-pairs appearance-space edit propagation

This work presents an intuitive and efficient method for editing the appearance of complex spatially-varying datasets, such as images and measured materials; the method generalizes prior approaches while providing significant improvements in generality, robustness, and efficiency.
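
A sketch of affinity-based edit propagation, assuming PyTorch: edits specified at a few pixels are spread to all pixels with weights combining appearance and spatial similarity. AppProp itself minimizes an all-pairs energy and accelerates it with a low-rank approximation of the affinity matrix; the dense weighted average below is a simplified stand-in, and all tensors and kernel widths are illustrative.

import torch

N = 2000
feats = torch.rand(N, 3)        # per-pixel appearance (e.g., color)
pos = torch.rand(N, 2)          # normalized pixel positions
user_edit = torch.zeros(N)      # edit strength per pixel
mask = torch.zeros(N)           # 1 where the user constrained the edit
mask[:50] = 1.0
user_edit[:50] = 1.0            # scribbled pixels get full edit strength

sigma_a, sigma_s = 0.2, 0.3
aff = torch.exp(-torch.cdist(feats, feats) ** 2 / sigma_a ** 2) \
    * torch.exp(-torch.cdist(pos, pos) ** 2 / sigma_s ** 2)     # [N, N]

# Propagate: affinity-weighted average of the constrained pixels' edits.
w = aff * mask[None, :]
propagated = (w * user_edit[None, :]).sum(-1) / w.sum(-1).clamp_min(1e-8)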

Transferring image-based edits for multi-channel compositing

A transfer algorithm is presented that extends the image analogies formulation to include an augmented set of photometric and non-photometric guidance channels and adaptively estimate weights for the various candidate channels in a way that matches the characteristics of each individual edit.

Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations

Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance, are proposed and demonstrated on novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.

PlenOctrees for Real-time Rendering of Neural Radiance Fields

It is shown that it is possible to train NeRFs to predict a spherical harmonic representation of radiance, removing the viewing direction as an input to the neural network, and PlenOctrees can be directly optimized to further minimize the reconstruction loss, which leads to equal or better quality compared to competing methods.
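
Removing the viewing direction rests on evaluating a spherical-harmonic color at render time. Below is a minimal PyTorch sketch restricted to SH degrees 0 and 1 for brevity (PlenOctrees use higher degrees); the sigmoid squash and tensor shapes are assumptions for illustration.

import torch

# Real spherical-harmonic constants for degrees 0 and 1.
C0 = 0.28209479177387814
C1 = 0.4886025119029199

def sh_to_rgb(sh_coeffs, dirs):
    """Evaluate view-dependent color from per-channel SH coefficients.
    sh_coeffs: [N, 3, 4], dirs: [N, 3] unit viewing directions."""
    x, y, z = dirs[:, 0:1], dirs[:, 1:2], dirs[:, 2:3]
    basis = torch.cat(
        [C0 * torch.ones_like(x), -C1 * y, C1 * z, -C1 * x], dim=-1)  # [N, 4]
    rgb = (sh_coeffs * basis[:, None, :]).sum(dim=-1)                 # [N, 3]
    return torch.sigmoid(rgb)   # squash to [0, 1], a common choice

dirs = torch.nn.functional.normalize(torch.randn(8, 3), dim=-1)
coeffs = torch.randn(8, 3, 4)
print(sh_to_rgb(coeffs, dirs).shape)    # torch.Size([8, 3])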

3D Sketching using Multi-View Deep Volumetric Prediction

This work reconstructs 3D shapes from one or more drawings by combining a deep convolutional neural network that predicts occupancy of a voxel grid from a line drawing with an updater CNN that refines an existing prediction given a new drawing of the shape created from a novel viewpoint.
...