Corpus ID: 237492104

Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting

@article{Wang2021LearningII,
  title={Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting},
  author={Zian Wang and Jonah Philion and Sanja Fidler and Jan Kautz},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.06061}
}
In this work, we address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image. Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene. However, indoor scenes contain complex 3D light transport where a 2D representation is insufficient. In this paper, we propose a unified, learning-based inverse rendering framework that formulates 3D spatially-varying lighting. Inspired by classic… 
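As a rough illustration of the re-rendering step such inverse rendering pipelines rely on, the sketch below reconstructs a diffuse image from predicted albedo, normals, and per-pixel second-order spherical-harmonics lighting. The SH parameterization and the function names are illustrative assumptions, not the paper's actual 3D lighting representation.

```python
import numpy as np

def sh_basis(normals):
    """Second-order SH basis evaluated at unit normals (Ramamoorthi & Hanrahan 2001).
    normals: (H, W, 3) -> basis: (H, W, 9)."""
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
    return np.stack([
        c4 * np.ones_like(x),       # L0,0
        2.0 * c2 * y,               # L1,-1
        2.0 * c2 * z,               # L1,0
        2.0 * c2 * x,               # L1,1
        2.0 * c1 * x * y,           # L2,-2
        2.0 * c1 * y * z,           # L2,-1
        c3 * z * z - c5,            # L2,0
        2.0 * c1 * x * z,           # L2,1
        c1 * (x * x - y * y),       # L2,2
    ], axis=-1)

def rerender_diffuse(albedo, normals, sh_coeffs):
    """Lambertian re-rendering with spatially-varying lighting.
    albedo: (H, W, 3), normals: (H, W, 3) unit vectors,
    sh_coeffs: (H, W, 9, 3) per-pixel RGB SH lighting coefficients."""
    irradiance = np.einsum('hwk,hwkc->hwc', sh_basis(normals), sh_coeffs)
    return albedo * np.clip(irradiance, 0.0, None)  # image ≈ albedo * shading
```

Comparing the re-rendered image against the input is a common supervision signal for this kind of decomposition; the paper itself formulates lighting in 3D rather than as a per-pixel 2D map.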

Citations

SIRfyN: Single Image Relighting from your Neighbors
TLDR
It is shown how to relight a scene depicted in a single image so that the overall shading changes while the result still looks like a natural image of that scene, and an extension of the FID allows per-generated-image evaluation.
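For context, a minimal sketch of the standard FID that the per-generated-image extension builds on; the Inception feature extractor and the extension itself are not reproduced, so this is a generic sketch rather than the paper's method.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two sets of (N, D) feature vectors."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):      # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```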
DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer
TLDR
DIB-R++ is proposed, a hybrid differentiable renderer that supports photorealistic lighting and material effects by combining rasterization and ray tracing, taking advantage of their respective strengths: speed and realism.

References

Showing 1-10 of 45 references
Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF From a Single Image
TLDR
A deep inverse rendering framework for indoor scenes is proposed, combining novel methods for mapping complex materials onto existing indoor scene datasets with a new physically-based GPU renderer used to create a large-scale, photorealistic indoor dataset.
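The SVBRDF estimated by such frameworks is typically a microfacet model with per-pixel albedo and roughness. The sketch below evaluates one common Cook-Torrance/GGX variant at a single surface point; the parameterization is an assumption for illustration, not necessarily the one used in the paper.

```python
import numpy as np

def ggx_brdf(n, v, l, albedo, roughness, f0=0.04):
    """Cook-Torrance BRDF with GGX distribution, Smith-Schlick geometry and
    Schlick Fresnel (one common variant); n, v, l are unit vectors."""
    h = (v + l) / np.linalg.norm(v + l)
    nl, nv = max(n @ l, 1e-4), max(n @ v, 1e-4)
    nh, hv = max(n @ h, 0.0), max(h @ v, 0.0)
    a2 = roughness ** 4                                        # alpha = roughness^2 remap
    d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)       # GGX normal distribution
    k = (roughness + 1.0) ** 2 / 8.0                           # Schlick-GGX k
    g = (nl / (nl * (1 - k) + k)) * (nv / (nv * (1 - k) + k))  # Smith geometry term
    f = f0 + (1.0 - f0) * (1.0 - hv) ** 5                      # Schlick Fresnel
    specular = d * g * f / (4.0 * nl * nv)
    return albedo / np.pi + specular                           # diffuse + specular reflectance
```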
Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination
TLDR
A deep learning model is proposed that estimates a 3D volumetric RGBA model of a scene, including content outside the observed field of view, and then uses standard volume rendering to estimate the incident illumination at any 3D location within that volume.
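A minimal sketch of the "standard volume rendering" step, assuming a nearest-neighbour RGBA voxel lookup and a fixed step size: front-to-back alpha compositing along a ray, which, repeated over many directions from a query point, estimates the incident illumination there.

```python
import numpy as np

def composite_ray(rgba_volume, origin, direction, step=0.05, n_steps=128):
    """Front-to-back alpha compositing through an RGBA voxel grid.
    rgba_volume: (D, H, W, 4) with values in [0, 1]; origin in voxel coordinates."""
    color = np.zeros(3)
    transmittance = 1.0
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    for _ in range(n_steps):
        p = p + step * d
        idx = np.round(p).astype(int)                  # nearest-neighbour lookup for brevity
        if np.any(idx < 0) or np.any(idx >= rgba_volume.shape[:3]):
            break                                      # ray left the volume
        vox = rgba_volume[tuple(idx)]
        rgb, alpha = vox[:3], vox[3]
        color += transmittance * alpha * rgb
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-3:                       # fully occluded, stop early
            break
    return color
```

Evaluating composite_ray over a set of directions at a 3D point yields an environment map of the incident light at that location.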
Indoor Segmentation and Support Inference from RGBD Images
TLDR
The goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships, to better understand how 3D cues can best inform a structured 3D interpretation.
Fast Spatially-Varying Indoor Lighting Estimation
TLDR
This work proposes a real-time method to estimate spatially-varying indoor lighting from a single RGB image, and demonstrates, through quantitative experiments, that the results achieve lower lighting estimation errors and are preferred by users over the state-of-the-art.
Shape, Illumination, and Reflectance from Shading
TLDR
The technique can be viewed as a superset of several classic computer vision problems (shape-from-shading, intrinsic images, color constancy, illumination estimation, etc.) and outperforms all previous solutions to those constituent problems.
Intrinsic Images in the Wild
TLDR
This paper introduces Intrinsic Images in the Wild, a large-scale, public dataset for evaluating intrinsic image decompositions of indoor scenes, and develops a dense CRF-based intrinsic image algorithm for images in the wild that outperforms a range of state-of-the-art intrinsic image algorithms.
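The dataset is evaluated with the WHDR metric (Weighted Human Disagreement Rate), which scores a predicted reflectance layer against sparse human judgments of relative reflectance. A rough sketch, with the comparison threshold and the judgment encoding as assumptions:

```python
def whdr(reflectance, judgments, delta=0.10):
    """Weighted Human Disagreement Rate (rough sketch).
    reflectance: (H, W) predicted reflectance intensity.
    judgments: iterable of (y1, x1, y2, x2, label, weight) where label is
    '1' (point 1 darker), '2' (point 2 darker), or 'E' (about equal)."""
    total, wrong = 0.0, 0.0
    for y1, x1, y2, x2, label, weight in judgments:
        r1, r2 = float(reflectance[y1, x1]), float(reflectance[y2, x2])
        if r2 / max(r1, 1e-10) > 1.0 + delta:
            pred = '1'                  # point 1 is darker
        elif r1 / max(r2, 1e-10) > 1.0 + delta:
            pred = '2'                  # point 2 is darker
        else:
            pred = 'E'                  # roughly equal reflectance
        total += weight
        wrong += weight * (pred != label)
    return wrong / max(total, 1e-10)
```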
Neural Inverse Rendering of an Indoor Scene From a Single Image
TLDR
This work proposes the first learning based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image, and uses physically-based rendering to create a large-scale synthetic dataset, named SUNCG-PBR, which is a significant improvement over prior datasets.
Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes
TLDR
An efficient neural representation is introduced that enables real-time rendering of high-fidelity neural SDFs, while achieving state-of-the-art geometry reconstruction quality, and is 2–3 orders of magnitude more efficient in terms of rendering speed.
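Rendering an SDF, neural or analytic, is usually done by sphere tracing: stepping along a ray by the current signed distance until the surface is reached. A minimal sketch, with an analytic sphere standing in for the learned SDF:

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere; stand-in for a learned SDF f(p)."""
    return np.linalg.norm(p - center) - radius

def sphere_trace(origin, direction, sdf=sphere_sdf, max_steps=64, eps=1e-4, far=20.0):
    """March along the ray by the SDF value; returns the hit point or None."""
    d = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * d
        dist = sdf(p)
        if dist < eps:       # close enough to the zero level set: surface hit
            return p
        t += dist            # safe step: the SDF bounds the distance to the nearest surface
        if t > far:          # ray escaped the scene bounds
            break
    return None
```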
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
TLDR
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
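The core of NeRF's rendering is numerical quadrature of the volume rendering integral along each camera ray; a minimal sketch of that compositing step (the MLP producing per-sample densities and colors is omitted):

```python
import numpy as np

def composite(densities, colors, deltas):
    """NeRF-style quadrature: densities (N,), colors (N, 3), deltas (N,) are
    per-sample volume density, RGB, and distance between adjacent samples."""
    alphas = 1.0 - np.exp(-densities * deltas)                        # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))    # transmittance T_i
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                    # expected ray color
```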
Neural Point-Based Graphics
We present a new point-based approach for modeling the appearance of real scenes. The approach uses a raw point cloud as the geometric representation of a scene, and augments each point with a…
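A rough sketch of the point rasterization step such point-based pipelines build on: projecting camera-space points through a pinhole camera and keeping the nearest point per pixel with a z-buffer. The per-point features the method attaches to each point and its neural rendering network are not shown; all names here are illustrative.

```python
import numpy as np

def splat_points(points, features, K, height, width):
    """Project 3D points (N, 3) in camera coordinates with intrinsics K (3, 3)
    into a feature image (H, W, C), keeping the nearest point per pixel (z-buffer)."""
    z = points[:, 2]
    uv = (K @ points.T).T                    # pinhole projection (homogeneous)
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)
    image = np.zeros((height, width, features.shape[1]))
    depth = np.full((height, width), np.inf)
    valid = (z > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for i in np.flatnonzero(valid):
        if z[i] < depth[v[i], u[i]]:         # z-buffer test: keep the closest point
            depth[v[i], u[i]] = z[i]
            image[v[i], u[i]] = features[i]
    return image
```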