Neural Inverse Rendering of an Indoor Scene From a Single Image

@article{Sengupta2019NeuralIR,
  title={Neural Inverse Rendering of an Indoor Scene From a Single Image},
  author={Soumyadip Sengupta and Jinwei Gu and Kihwan Kim and Guilin Liu and David W. Jacobs and Jan Kautz},
  journal={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={8597-8606}
}
Inverse rendering aims to estimate physical attributes of a scene, e.g., reflectance, geometry, and lighting, from image(s). Key Method: This enables us to perform self-supervised learning on real data using a reconstruction loss, based on re-synthesizing the input image from the estimated components. We finetune with real data after pretraining with synthetic data. To this end, we use physically-based rendering to create a large-scale synthetic dataset, which is a significant improvement over prior…
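
As a minimal sketch of such a self-supervised reconstruction loss, assuming a simplified Lambertian image-formation model with a single dominant light (the paper's actual renderer is richer), the input image is re-synthesized from the estimated components and compared against the original. All names below are illustrative:

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(image, albedo, normals, light_dir, light_color):
    """image:       (B, 3, H, W) input photo
    albedo:      (B, 3, H, W) estimated diffuse reflectance
    normals:     (B, 3, H, W) estimated surface normals
    light_dir:   (B, 3) estimated dominant light direction
    light_color: (B, 3) estimated light intensity."""
    l = F.normalize(light_dir, dim=1).view(-1, 3, 1, 1)
    n = F.normalize(normals, dim=1)
    # Lambertian shading: clamped cosine between normal and light.
    shading = (n * l).sum(dim=1, keepdim=True).clamp(min=0.0)
    rendered = albedo * shading * light_color.view(-1, 3, 1, 1)
    # Self-supervision: the re-rendered image should match the input.
    return F.l1_loss(rendered, image)
```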

Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting

A unified, learning-based inverse rendering framework that formulates 3D spatially-varying lighting is proposed, together with a novel Volumetric Spherical Gaussian representation for lighting that parameterizes the exitant radiance of the 3D scene surfaces on a voxel grid.
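
A single spherical Gaussian lobe evaluates radiance toward a direction v as mu * exp(lambda * (dot(v, xi) - 1)); the volumetric variant stores such parameters per voxel. A hedged sketch of this parameterization (one lobe per voxel, illustrative names, not the paper's full model):

```python
import numpy as np

def eval_vsg(xi, lam, mu, v):
    """xi:  (D, H, W, 3) unit lobe axes, one per voxel
    lam: (D, H, W, 1) lobe sharpness
    mu:  (D, H, W, 3) RGB amplitude
    v:   (3,) unit query direction
    returns (D, H, W, 3) radiance leaving each voxel toward v."""
    cos_term = (xi * v).sum(axis=-1, keepdims=True)  # dot(xi, v) per voxel
    return mu * np.exp(lam * (cos_term - 1.0))       # SG lobe evaluation
```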

Learning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing

This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo raytracing with importance sampling, and introduces a physically-based differentiable rendering layer with screen-space ray tracing, resulting in more realistic specular reflections that match the input photo.
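
The core of such a differentiable Monte Carlo layer can be sketched as an importance-sampled estimator of outgoing radiance, averaging f(wi) * L(wi) * cos(theta_i) / pdf(wi) over sampled directions; writing it with differentiable tensor ops lets gradients reach BRDF and lighting parameters. The sampler and shading callables below are hypothetical placeholders:

```python
import torch

def mc_outgoing_radiance(brdf, incident_radiance, sample_dirs, n, num_samples=64):
    """Estimate outgoing radiance at N shading points.
    brdf(wi) -> (N, S, 3); incident_radiance(wi) -> (N, S, 3);
    sample_dirs(n, S) -> wi (N, S, 3) and pdf (N, S, 1); n: (N, 3) normals."""
    wi, pdf = sample_dirs(n, num_samples)          # importance-sampled directions
    cos_i = (wi * n.unsqueeze(1)).sum(-1, keepdim=True).clamp(min=0.0)
    # Weight each sample by its pdf, per the importance-sampling estimator.
    contrib = brdf(wi) * incident_radiance(wi) * cos_i / pdf.clamp(min=1e-6)
    return contrib.mean(dim=1)                     # (N, 3) Monte Carlo estimate
```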

Physically-Based Editing of Indoor Scene Lighting from a Single Image

This work presents the first automatic method for full scene relighting from a single image, including light source insertion, removal, and replacement, and shows that this careful combination can, for the first time, handle challenging scene editing applications, including object and light source insertion, with realistic global illumination effects.

PhyIR: Physics-based Inverse Rendering for Panoramic Indoor Images

PhyIR is presented, a neural inverse rendering method with a more complete SVBRDF representation and a physics-based in-network rendering layer, which can handle complex materials and incorporate physical constraints by re-rendering realistic and detailed specular reflectance.

Outdoor inverse rendering from a single image using multiview self-supervision

  • Ye Yu, W. Smith
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2021
This paper shows how to perform scene-level inverse rendering to recover shape, reflectance, and lighting from a single, uncontrolled image using a fully convolutional neural network; the authors believe this is the first attempt to use multiview stereo (MVS) supervision for learning inverse rendering.

Modeling Indirect Illumination for Inverse Rendering

This paper proposes a novel approach to efficiently recovering spatially-varying indirect illumination, which can be conveniently derived from the neural radiance field learned from input images instead of being estimated jointly with direct illumination and materials.
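
The underlying split can be sketched as follows: incident radiance at a shading point decomposes into direct light from emitters and indirect light from other surfaces, and the indirect part can be read out of an already-trained radiance field by casting a ray from the point and querying the field, rather than being estimated jointly with materials. All three callables below are hypothetical:

```python
def incident_radiance(x, wi, radiance_field, direct_light, occluded):
    """x: (3,) shading point; wi: (3,) incoming direction (pointing away from x).
    radiance_field(origin, direction) -> (3,) radiance along a ray, e.g.
    from a trained NeRF; direct_light(x, wi) -> (3,) emitter radiance;
    occluded(x, wi) -> bool visibility test."""
    if occluded(x, wi):
        # A blocked path carries indirect light: read it off the learned
        # field instead of estimating it jointly with materials.
        return radiance_field(x, wi)
    return direct_light(x, wi)
```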

IRISformer: Dense Vision Transformers for Single-Image Inverse Rendering in Indoor Scenes

This work proposes a transformer architecture to simultaneously estimate depths, normals, spatially-varying albedo, roughness and lighting from a single image of an indoor scene, enabling applications like object insertion and material editing in a single unconstrained real image, with greater photorealism than prior works.

Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF From a Single Image

A deep inverse rendering framework for indoor scenes is proposed, combining novel methods to map complex materials to existing indoor scene datasets with a new physically-based GPU renderer to create a large-scale, photorealistic indoor dataset.

Object-based Illumination Estimation with Rendering-aware Neural Networks

An approach is proposed that takes advantage of physical principles from inverse rendering to constrain the solution, while utilizing neural networks to expedite the more computationally expensive portions of the processing, increasing robustness to noisy input data and improving temporal and spatial stability.

Multi-view Inverse Rendering for Large-scale Real-world Indoor Scenes

A hybrid lighting representation with precomputed irradiance is proposed, which significantly improves efficiency and alleviates rendering noise during material optimization, and enables physically-reasonable mixed-reality applications such as material editing, editable novel view synthesis, and relighting.
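
"Precomputed irradiance" here refers to caching the cosine-weighted integral of incoming radiance per surface point, so the diffuse term during material optimization reduces to albedo * E / pi with no per-iteration Monte Carlo noise. A hedged sketch under that assumption (the sampling callable is hypothetical):

```python
import numpy as np

def precompute_irradiance(sample_incoming, n, num_samples=256):
    """sample_incoming(n, S) -> radiance (S, 3), directions wi (S, 3),
    pdf (S, 1); n: (3,) surface normal. Returns irradiance E, shape (3,)."""
    radiance, wi, pdf = sample_incoming(n, num_samples)
    cos_i = np.clip(wi @ n, 0.0, None)[:, None]
    return (radiance * cos_i / pdf).mean(axis=0)  # done once, then cached

def diffuse_shading(albedo, irradiance):
    # With E cached, each optimization step is a noise-free multiply.
    return albedo * irradiance / np.pi
```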
...

References

Showing 1-10 of 63 references

Learning to reconstruct shape and spatially-varying reflectance from a single image

This work demonstrates that it can recover non-Lambertian, spatially-varying BRDFs and complex geometry belonging to any arbitrary shape class, from a single RGB image captured under a combination of unknown environment illumination and flash lighting.
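
As one concrete instance of a non-Lambertian, spatially-varying BRDF of the kind such methods recover, a diffuse term plus a GGX microfacet specular lobe with Schlick Fresnel is a common parameterization; this sketch is illustrative and not necessarily the paper's exact model:

```python
import numpy as np

def ggx_brdf(albedo, roughness, f0, n, l, v):
    """Per-pixel BRDF. albedo, n, l, v broadcast to (..., 3);
    roughness, f0 are scalars or (..., 1). l, v, n are unit vectors."""
    h = l + v
    h = h / np.linalg.norm(h, axis=-1, keepdims=True)          # half vector
    ndl = np.clip((n * l).sum(-1, keepdims=True), 1e-4, 1.0)
    ndv = np.clip((n * v).sum(-1, keepdims=True), 1e-4, 1.0)
    ndh = np.clip((n * h).sum(-1, keepdims=True), 0.0, 1.0)
    vdh = np.clip((v * h).sum(-1, keepdims=True), 0.0, 1.0)
    a2 = np.maximum(roughness, 1e-3) ** 4                      # alpha = roughness^2
    d = a2 / (np.pi * (ndh**2 * (a2 - 1.0) + 1.0) ** 2)        # GGX distribution
    k = np.maximum(roughness, 1e-3) ** 2 / 2.0
    g = (ndl / (ndl * (1 - k) + k)) * (ndv / (ndv * (1 - k) + k))  # Smith-Schlick
    f = f0 + (1.0 - f0) * (1.0 - vdh) ** 5                     # Schlick Fresnel
    return albedo / np.pi + d * g * f / (4.0 * ndl * ndv)
```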

Physically-Based Rendering for Indoor Scene Understanding Using Convolutional Neural Networks

This work introduces a large-scale synthetic dataset with 500K physically-based rendered images from 45K realistic 3D indoor scenes and shows that pretraining with this new synthetic dataset can improve results beyond the current state of the art on all three computer vision tasks.

Learning to predict indoor illumination from a single image

An end-to-end deep neural network is trained that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting, making it possible to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods.

DeLight-Net: Decomposing Reflectance Maps into Specular Materials and Natural Illumination

A Convolutional Neural Network architecture, trained solely on synthetic data, is proposed to reconstruct both material parameters and illumination from a reflectance map, i.e., a single 2D image of a sphere of one material under one illumination.

Deep image-based relighting from optimal sparse samples

This work presents an image-based relighting method that can synthesize scene appearance under novel, distant illumination from the visible hemisphere, from only five images captured under pre-defined directional lights, and demonstrates, on both synthetic and real scenes, that this method is able to reproduce complex, high-frequency lighting effects like specularities and cast shadows.
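
Since light transport is linear in the illumination, an image under a novel distant light can in principle be written as a weighted combination of images captured under basis lights; the paper learns a network that substantially outperforms this naive baseline from only five inputs, but the linear version below illustrates the premise (the weights are hypothetical coefficients expressing the novel light in the basis):

```python
import numpy as np

def relight_linear(basis_images, weights):
    """basis_images: (K, H, W, 3) photos under K predefined directional
    lights; weights: (K,) coefficients for the novel illumination.
    Returns the (H, W, 3) relit image as a weighted sum of the basis."""
    return np.tensordot(weights, basis_images, axes=1)
```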

LIME: Live Intrinsic Material Estimation

This work presents the first end-to-end approach for real-time material estimation for general object shapes with uniform material that only requires a single color image as input and proposes a novel highly efficient perceptual rendering loss that mimics real-world image formation and obtains intermediate results even during run time.

InverseRenderNet: Learning Single Image Inverse Rendering

  • Ye Yu, W. Smith
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
This work shows how to train a fully convolutional neural network to perform inverse rendering from a single, uncontrolled image; the authors believe this is the first attempt to use MVS supervision for learning inverse rendering.

CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering

CGIntrinsics, a new, large-scale dataset of physically-based rendered images of scenes with full ground-truth decompositions, is presented, demonstrating the surprising effectiveness of carefully rendered synthetic data for the intrinsic images task.

Neural Inverse Rendering for General Reflectance Photometric Stereo

A physics-based unsupervised learning framework in which surface normals and BRDFs are predicted by the network and fed into the rendering equation to synthesize the observed images, shown to achieve state-of-the-art performance on a challenging real-world scene benchmark.
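
As a sketch of that unsupervised objective in its Lambertian special case: predicted normals and albedo are pushed through the image-formation model for each calibrated light, and the synthesized observations are compared with the captured ones (the choice of an L1 penalty here is illustrative):

```python
import torch

def photometric_stereo_loss(obs, albedo, normals, light_dirs):
    """obs: (J, P) observed intensities, J lights x P pixels;
    albedo: (P,); normals: (P, 3) unit vectors; light_dirs: (J, 3)."""
    shading = (light_dirs @ normals.T).clamp(min=0.0)  # (J, P) cosine terms
    rendered = albedo.unsqueeze(0) * shading           # synthesized observations
    return (rendered - obs).abs().mean()               # match the captured images
```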

Inverse Transport Networks

The experiments demonstrate that inverse transport networks can be trained efficiently using differentiable rendering, and that they generalize to scenes with completely unseen geometry and illumination better than networks trained without appearance-matching regularization.
...