Corpus ID: 237421130

Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering

@article{Yang2021LearningON,
  title={Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering},
  author={Bangbang Yang and Yinda Zhang and Yinghao Xu and Yijin Li and Han Zhou and Hujun Bao and Guofeng Zhang and Zhaopeng Cui},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.01847}
}
  • Published 4 September 2021
  • Computer Science
  • ArXiv
Implicit neural rendering techniques have shown promising results for novel view synthesis. However, existing methods usually encode the entire scene as a whole, which is generally not aware of object identity and limits the ability to perform high-level editing tasks such as moving or adding furniture. In this paper, we present a novel neural scene rendering system, which learns an object-compositional neural radiance field and produces realistic rendering with editing capability for a…
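As a rough illustration of the idea named in the abstract above (composing a scene from per-object radiance fields and volume-rendering the mixture), the Python/NumPy sketch below is an assumption-laden toy: the Gaussian-blob fields, the rule of summing per-object densities, and all function names are illustrative choices, not the authors' actual architecture.

# Hedged sketch: compositing several per-object radiance fields along one camera ray.
# NOT the paper's implementation; the toy fields, the density-summing rule, and the
# sampling scheme are assumptions made only to illustrate the compositional idea.
import numpy as np

def toy_object_field(center, color):
    """Return a toy radiance field: a soft Gaussian blob of constant color."""
    def field(points):
        # points: (N, 3) sample locations -> (density (N,), rgb (N, 3))
        sq_dist = np.sum((points - center) ** 2, axis=-1)
        density = 20.0 * np.exp(-10.0 * sq_dist)      # higher near the object center
        rgb = np.tile(color, (points.shape[0], 1))    # constant albedo per object
        return density, rgb
    return field

def render_ray(origin, direction, object_fields, n_samples=64, near=0.0, far=4.0):
    """Volume-render one ray through the composition of all object fields."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction          # (n_samples, 3)
    delta = (far - near) / n_samples

    # Editing (moving/adding/removing an object) amounts to changing this list of fields.
    densities, colors = zip(*(f(points) for f in object_fields))
    sigma = np.sum(densities, axis=0)                 # scene density = sum over objects
    # Density-weighted mixture of the per-object colors at each sample point.
    weights_obj = np.stack(densities, axis=0) / np.maximum(sigma, 1e-8)
    rgb = np.sum(weights_obj[..., None] * np.stack(colors, axis=0), axis=0)

    alpha = 1.0 - np.exp(-sigma * delta)              # per-sample opacity
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    w = transmittance * alpha                         # standard NeRF quadrature weights
    return np.sum(w[:, None] * rgb, axis=0)           # composited pixel color

# Example: a red and a blue object; "moving furniture" = changing an object's center.
fields = [toy_object_field(np.array([0.0, 0.0, 2.0]), np.array([1.0, 0.2, 0.2])),
          toy_object_field(np.array([0.5, 0.0, 2.5]), np.array([0.2, 0.2, 1.0]))]
pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), fields)
print(pixel)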
CIPS-3D: A 3D-Aware Generator of GANs Based on Conditionally-Independent Pixel Synthesis
  • Peng Zhou, Lingxi Xie, Bingbing Ni, Qi Tian
  • Computer Science, Engineering
  • ArXiv
  • 2021
CIPS-3D is presented, a style-based, 3D-aware generator that is composed of a shallow NeRF network and a deep implicit neural representation (INR) network that synthesizes each pixel value independently without any spatial convolution or upsampling operation.
HDR-NeRF: High Dynamic Range Neural Radiance Fields
We present High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures. Using the HDR-NeRF, we are able…
NeRFReN: Neural Radiance Fields with Reflections
  • Yuan-Chen Guo, Di Kang, Linchao Bao, Yu He, Song-Hai Zhang
  • Computer Science
  • ArXiv
  • 2021
Neural Radiance Fields (NeRF) has achieved unprecedented view synthesis quality using coordinate-based neural scene representations. However, NeRF’s view dependency can only handle simple reflections…
Urban Radiance Fields
The goal of this work is to perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments (e.g., Street…
VaxNeRF: Revisiting the Classic for Voxel-Accelerated Neural Radiance Field
Neural Radiance Field (NeRF) is a popular method in data-driven 3D reconstruction. Given its simplicity and high-quality rendering, many NeRF applications are being developed. However, NeRF’s big…

References

Showing 1-10 of 43 references
Object-Centric Neural Scene Rendering
This work proposes to learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network, and shows that it generalizes to novel illumination conditions, producing photorealistic, physically accurate renderings of multi-object scenes.
Deferred Neural Rendering: Image Synthesis using Neural Textures
This work proposes Neural Textures, which are learned feature maps trained as part of the scene capture process and can be utilized to coherently re-render or manipulate existing video content in both static and dynamic environments at real-time rates.
Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations
The proposed Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance, are demonstrated by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.
Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis
This work defines the new task of 3D controllable image synthesis and proposes an approach for solving it by reasoning both in 3D space and in the 2D image domain, and demonstrates that the model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images.
In-Place Scene Labelling and Understanding with Implicit Scene Representation
This work extends neural radiance fields (NeRF) to jointly encode semantics with appearance and geometry, so that complete and accurate 2D semantic labels can be achieved using a small amount of in-place annotations specific to the scene.
Rendering synthetic objects into legacy photographs
This work proposes a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements, and shows that the method is competitive with other insertion methods while requiring less scene information.
Stereo Magnification: Learning View Synthesis using Multiplane Images
This paper explores an intriguing scenario for view synthesis: extrapolating views from imagery captured by narrow-baseline stereo cameras, including VR cameras and now-widespread dual-lens camera phones, and proposes a learning framework that leverages a new layered representation called multiplane images (MPIs).
Neural Point-Based Graphics
We present a new point-based approach for modeling the appearance of real scenes. The approach uses a raw point cloud as the geometric representation of a scene, and augments each point with a…
Light field transfer: global illumination between real and synthetic objects
This work is the first technique to allow global illumination and near-field lighting effects between both real and synthetic objects at interactive rates, without needing a geometric and material model of the real scene, by using a light field interface between real and synthetic components.
Differentiable Volumetric Rendering: Learning Implicit 3D Representations Without 3D Supervision
This work proposes a differentiable rendering formulation for implicit shape and texture representations, showing that depth gradients can be derived analytically using the concept of implicit differentiation, and finds that this method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
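As a rough sketch of the implicit-differentiation step mentioned in the summary above, under assumed notation (an implicit field $f_\theta$, surface level $\tau$, a ray $r_0 + \hat{d}\,w$, and predicted surface point $\hat{p}$; none of these symbols are taken from this page), the analytic depth gradient can be written as:

% Hedged sketch, assumed notation: level-set condition on the predicted surface point.
\[
  f_\theta(\hat{p}) = \tau, \qquad \hat{p} = r_0 + \hat{d}\, w .
\]
% Differentiating the level-set condition with respect to the network parameters \theta
% and solving for the depth derivative:
\[
  \frac{\partial f_\theta(\hat{p})}{\partial \theta}
  + \nabla_{p} f_\theta(\hat{p}) \cdot w \,\frac{\partial \hat{d}}{\partial \theta} = 0
  \quad\Longrightarrow\quad
  \frac{\partial \hat{d}}{\partial \theta}
  = -\bigl(\nabla_{p} f_\theta(\hat{p}) \cdot w\bigr)^{-1}
    \frac{\partial f_\theta(\hat{p})}{\partial \theta} .
\]

That is, the depth gradient follows analytically from the surface condition, without backpropagating through stored samples of a volumetric renderer.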