NeRD: Neural Reflectance Decomposition from Image Collections
@article{Boss2021NeRDNR,
  title   = {NeRD: Neural Reflectance Decomposition from Image Collections},
  author  = {Mark Boss and Raphael Braun and V. Jampani and Jonathan T. Barron and Ce Liu and Hendrik P. A. Lensch},
  journal = {2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year    = {2021},
  pages   = {12664-12674}
}
Decomposing a scene into its shape, reflectance, and illumination is a challenging but important problem in computer vision and graphics. This problem is inherently more challenging when the illumination is not a single light source under laboratory conditions but is instead an unconstrained environmental illumination. Though recent work has shown that implicit representations can be used to model the radiance field of an object, most of these techniques only enable view synthesis and not…
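Roughly speaking, the decomposition described above amounts to inverting the rendering equation under distant environment illumination, i.e. explaining every observed pixel as

    L_o(x, \omega_o) = \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(\omega_i)\, (n(x) \cdot \omega_i)\, d\omega_i

where n(x) is the surface normal (shape), f_r the spatially-varying BRDF (reflectance), and L_i the unconstrained environment illumination. View synthesis only needs the left-hand side; relighting additionally requires recovering each factor on the right, which is what makes the decomposition problem harder.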
74 Citations
GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds
- Computer Science, 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2021
GANcraft, an unsupervised neural rendering framework for generating photorealistic images of large 3D block worlds such as those created in Minecraft, is presented; it allows user control over both scene semantics and output style.
Single-image Full-body Human Relighting
- Computer Science, EGSR
- 2021
A new deep learning architecture is proposed, tailored to the decomposition performed in PRT, that is trained using a combination of L1, logarithmic, and rendering losses and outperforms the state of the art for full-body human relighting both with synthetic images and photographs.
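As a rough illustration of the loss combination mentioned above (a sketch under assumed inputs, not the paper's exact formulation), the three terms might be combined as follows; the weights, the non-negative decomposition maps, and the re-rendered images are placeholders:

    import torch

    def combined_loss(pred_map, gt_map, pred_render, gt_render,
                      w_l1=1.0, w_log=1.0, w_render=1.0, eps=1e-4):
        # L1 term on the predicted decomposition maps (assumed non-negative).
        l1 = torch.mean(torch.abs(pred_map - gt_map))
        # Logarithmic term, less dominated by bright, high-dynamic-range regions.
        log_term = torch.mean(torch.abs(torch.log(pred_map + eps) - torch.log(gt_map + eps)))
        # Rendering term comparing images re-rendered from the decomposition.
        render = torch.mean(torch.abs(pred_render - gt_render))
        return w_l1 * l1 + w_log * log_term + w_render * render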
DONeRF: Towards Real-Time Rendering of Neural Radiance Fields using Depth Oracle Networks
- Computer Science, ArXiv
- 2021
DONeRF requires only 4 samples per pixel, thanks to a depth oracle network that guides sample placement, whereas NeRF uses 192 (64 + 128). While reducing execution and training time by up to 48×, the authors also achieve better quality across all scenes (an average PSNR of 31.62 dB vs. NeRF's 30.04 dB).
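The 48× figure is consistent with the sample-count ratio (192 / 4 = 48). A minimal sketch of depth-oracle-guided sample placement, assuming a hypothetical depth_oracle network that predicts an approximate surface depth per ray (an illustration of the idea, not DONeRF's exact sampling scheme):

    import torch

    def place_samples(ray_origins, ray_dirs, depth_oracle, n_samples=4, window=0.1):
        # Predict an approximate hit depth for each ray with the oracle network.
        pred_depth = depth_oracle(ray_origins, ray_dirs)           # (R,)
        # Spread the few shading samples inside a small window around that depth,
        # instead of sampling densely along the whole ray as in NeRF.
        offsets = torch.linspace(-window, window, n_samples)       # (S,)
        t = pred_depth[:, None] + offsets[None, :]                 # (R, S)
        return ray_origins[:, None, :] + t[..., None] * ray_dirs[:, None, :]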
Neural Scene Representations for View Synthesis
- Computer Science
- 2020
Neural 3D Video Synthesis
- Computer Science, ArXiv
- 2021
This work proposes a dynamic neural radiance field representation for 3D video, representing time-varying scenes with compact latent codes and enabling photorealistic novel-view playback of multi-view video recordings.
ERF: Explicit Radiance Field Reconstruction From Scratch
- Computer Science, ArXiv
- 2022
A novel explicit dense 3D reconstruction approach that processes a set of images of a scene with sensor poses and calibrations and estimates a photo-realistic digital model, reconstructing models of high quality that are comparable to state-of-the-art implicit methods.
Extracting Triangular 3D Models, Materials, and Lighting From Images
- Computer Science, ArXiv
- 2021
This work outputs triangle meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine unmodified, and introduces a differentiable formulation of the split sum approximation of environment lighting to efficiently recover all-frequency lighting.
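The split sum approximation referenced above, in the form popularized for real-time image-based lighting, factors the specular lighting integral into two independently precomputable terms,

    \int_{\Omega} L_i(\omega_i)\, f(\omega_i, \omega_o)\, (n \cdot \omega_i)\, d\omega_i
      \;\approx\;
      \frac{\int_{\Omega} L_i(\omega_i)\, D(\omega_i)\, (n \cdot \omega_i)\, d\omega_i}{\int_{\Omega} D(\omega_i)\, (n \cdot \omega_i)\, d\omega_i}
      \cdot
      \int_{\Omega} f(\omega_i, \omega_o)\, (n \cdot \omega_i)\, d\omega_i,

where the first factor is a prefiltered environment map (with D the normal distribution function of the BRDF) and the second is a BRDF integration that can be tabulated; the cited work's contribution is a differentiable formulation of these factors so that environment lighting can be recovered by optimization.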
3D-GIF: 3D-Controllable Object Generation via Implicit Factorized Representations
- Computer Science, ArXiv
- 2022
This work proposes factorized representations that are view-independent and light-disentangled, together with training schemes using randomly sampled light conditions, and is the first to extract albedo-textured meshes from unposed 2D images without any additional labels or assumptions.
NeILF: Neural Incident Light Field for Physically-based Material Estimation
- Computer Science, ArXiv
- 2022
We present a differentiable rendering framework for material and lighting estimation from multi-view images and a reconstructed geometry. In the framework, we represent scene lightings as the Neural…
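A minimal sketch of an incident light field of this kind, assuming it is parameterized as an MLP mapping a surface point and an incident direction to incident radiance (layer sizes and activations are illustrative, not the paper's architecture):

    import torch
    import torch.nn as nn

    class IncidentLightField(nn.Module):
        # Maps (surface point x, incident direction w_i) -> RGB incident radiance.
        def __init__(self, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(6, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3), nn.Softplus(),  # radiance is non-negative
            )

        def forward(self, x, w_i):
            return self.net(torch.cat([x, w_i], dim=-1))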
NeROIC: Neural Rendering of Objects from Online Image Collections
- Computer Science, ArXiv
- 2022
This work presents a novel method to acquire object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects from photographs with varying cameras, illumination, and backgrounds. It also introduces a robust normal estimation technique that eliminates the effect of geometric noise while retaining crucial details.
References
Showing 1-10 of 79 references
Neural Reflectance Fields for Appearance Acquisition
- Computer Science, Mathematics, ArXiv
- 2020
It is demonstrated that neural reflectance fields can be estimated from images captured with a simple collocated camera-light setup, that they accurately model the appearance of real-world scenes with complex geometry and reflectance, and that they enable a complete pipeline from high-quality and practical appearance acquisition to 3D scene composition and rendering.
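One reason the collocated setup is attractive: the incident and outgoing directions coincide at every surface point (\omega_i \approx \omega_o = \omega), so single-bounce image formation reduces to something of the form

    L(p) \;\approx\; \frac{I_{\text{flash}}}{d(x)^2}\, f_r(x, \omega, \omega)\, (n(x) \cdot \omega),

with d(x) the distance to the light, leaving only a known 1/d^2 falloff and one reflectance evaluation per pixel (a simplified sketch; the cited method additionally accounts for volumetric transmittance along the ray).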
Neural Inverse Rendering of an Indoor Scene From a Single Image
- Computer Science, 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
This work proposes the first learning based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image, and uses physically-based rendering to create a large-scale synthetic dataset, named SUNCG-PBR, which is a significant improvement over prior datasets.
Single-image SVBRDF capture with a rendering-aware deep network
- Computer Science, ACM Trans. Graph.
- 2018
This work tackles lightweight appearance capture by training a deep neural network to automatically extract and make sense of visual cues from a single image, and designs a network that combines an encoder-decoder convolutional track for local feature extraction with a fully-connected track for global feature extraction and propagation.
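A schematic of the two-track design described above, assuming a hypothetical layout in which pooled global features are broadcast back into the convolutional decoder (an illustration, not the paper's exact architecture):

    import torch
    import torch.nn as nn

    class TwoTrackSVBRDFNet(nn.Module):
        def __init__(self, feat=32):
            super().__init__()
            # Local track: convolutional encoder-decoder over the input photograph.
            self.encoder = nn.Sequential(nn.Conv2d(3, feat, 3, stride=2, padding=1), nn.ReLU())
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(feat, 9, 3, padding=1),  # e.g. diffuse (3) + specular (3) + normals (3)
            )
            # Global track: fully-connected layers on spatially pooled features.
            self.global_fc = nn.Sequential(nn.Linear(feat, feat), nn.ReLU())

        def forward(self, img):
            local = self.encoder(img)                            # (B, F, H/2, W/2)
            glob = self.global_fc(local.mean(dim=(2, 3)))        # (B, F) global cues
            glob_map = glob[:, :, None, None].expand_as(local)   # propagate back spatially
            return self.decoder(torch.cat([local, glob_map], dim=1))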
Learning Data-Driven Reflectance Priors for Intrinsic Image Decomposition
- Mathematics, Computer Science, 2015 IEEE International Conference on Computer Vision (ICCV)
- 2015
A model is trained to predict relative reflectance ordering between image patches from large-scale human annotations, producing a data-driven reflectance prior, and it is shown how to naturally integrate this learned prior into existing energy minimization frameworks for intrinsic image decomposition.
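Such a learned ordering prior typically enters the decomposition as an extra pairwise term in the energy being minimized; one possible (illustrative) form, with o_{ij} the predicted ordinal relation between patches i and j and w_{ij} its confidence, is

    E(R) \;=\; E_{\text{data}}(R) \;+\; \lambda \sum_{(i,j)} w_{ij}\, \ell\big(R_i - R_j,\; o_{ij}\big),

where \ell penalizes reflectance estimates whose ordering disagrees with the data-driven prediction.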
Reflectance modeling by neural texture synthesis
- Computer Science, ACM Trans. Graph.
- 2016
To capture rich, spatially varying parametric reflectance models from a single image, this work makes use of a recent, powerful texture descriptor based on deep convolutional neural network statistics, "softly" comparing the model prediction and the exemplars without requiring an explicit point-to-point correspondence between them.
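The deep network statistics referred to above are, in the spirit of Gatys-style texture descriptors, commonly summarized by Gram matrices of CNN feature activations; a minimal sketch, assuming a features callable that returns a (C, H, W) activation map from a pretrained network:

    import torch

    def gram_descriptor(activations):
        # Channel co-occurrence statistics of a (C, H, W) feature map: these compare
        # textures "softly", with no point-to-point correspondence required.
        c, h, w = activations.shape
        f = activations.reshape(c, h * w)
        return f @ f.t() / (h * w)

    def texture_loss(pred_img, exemplar_img, features):
        g_pred = gram_descriptor(features(pred_img))
        g_ref = gram_descriptor(features(exemplar_img))
        return torch.mean((g_pred - g_ref) ** 2)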
Learning Intrinsic Image Decomposition from Watching the World
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This paper explores a different approach to learning intrinsic images: observing image sequences over time depicting the same scene under changing illumination, and learning single-view decompositions that are consistent with these changes.
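The underlying constraint is simple: under the intrinsic image model I_t(x) = R(x) · S_t(x), the reflectance R is shared by all frames of a fixed-scene sequence, so log-differences between frames depend only on shading,

    \log I_t(x) - \log I_{t'}(x) \;=\; \log S_t(x) - \log S_{t'}(x),

which provides a reflectance-free training signal from the changing illumination alone.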
Reflectance Adaptive Filtering Improves Intrinsic Image Estimation
- Computer Science, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017
The results show that a simple pixel-wise decision, without any context or prior knowledge, is sufficient to provide a strong baseline on IIW, and suggest that the effect of learning-based approaches may have been over-estimated so far.
CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering
- Computer Science, ECCV
- 2018
CGIntrinsics, a new, large-scale dataset of physically-based rendered images of scenes with full ground truth decompositions, is presented, demonstrating the surprising effectiveness of carefully rendered synthetic data for the intrinsic images task.
Intrinsic Scene Decomposition from RGB-D Images
- Computer Science, 2015 IEEE International Conference on Computer Vision (ICCV)
- 2015
This paper addresses the problem of computing an intrinsic decomposition of the colors of a surface into an albedo and a shading term, using an affine shading model: a combination of a Lambertian model and an ambient lighting term.
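Concretely, the affine shading model referred to above expresses each observed color as something like

    I(x) \;=\; \rho(x)\,\big(a + b\,\max(0,\, n(x) \cdot l)\big),

with albedo \rho(x), an ambient offset a, and a directional Lambertian term scaled by b, where the normals n(x) come from the RGB-D geometry (the exact parameterization in the paper may differ).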
Intrinsic Decomposition of Image Sequences from Local Temporal Variations
- Computer Science, 2015 IEEE International Conference on Computer Vision (ICCV)
- 2015
This work derives an adaptive local energy from the observations of each local neighborhood over time, and integrates distant pairwise constraints to enforce coherent decomposition across all surfaces with consistent shading changes.