Free-viewpoint Indoor Neural Relighting from Multi-view Stereo

  title={Free-viewpoint Indoor Neural Relighting from Multi-view Stereo},
  author={Julien Philip and S{\'e}bastien Morgenthaler and Micha{\"e}l Gharbi and George Drettakis},
  journal={ACM Transactions on Graphics (TOG)},
  pages={1--18}
We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation. Our method synthetically changes the illumination while coherently rendering cast shadows and complex glossy materials. We start with multiple images of the scene and a three-dimensional mesh obtained by multi-view stereo (MVS) reconstruction. We assume that lighting is well explained as the sum of a view-independent diffuse component and a view-dependent glossy… 
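The additive diffuse-plus-glossy assumption in the abstract can be sketched in a few lines. The function and variable names below are illustrative placeholders, not the paper's actual pipeline, which predicts these layers with neural networks:

```python
import numpy as np

def relight(diffuse_src, glossy_src, diffuse_ratio):
    """Additive image-formation model: radiance = diffuse + glossy.
    A crude relighting keeps the two layers separate, scales the
    view-independent diffuse layer by a source-to-target lighting
    ratio, and re-adds the view-dependent glossy layer.
    (Illustrative sketch only; the paper replaces the scalar ratio
    with learned, shadow-aware per-pixel predictions.)"""
    return diffuse_src * diffuse_ratio + glossy_src

# Toy example: halve the diffuse lighting, keep the highlight layer.
diffuse = np.full((2, 2), 0.4)
glossy = np.full((2, 2), 0.1)
relit = relight(diffuse, glossy, 0.5)  # 0.4 * 0.5 + 0.1 = 0.3 everywhere
```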
3 Citations
OutCast: Outdoor Single-image Relighting with Cast Shadows
Proposes a learned image-space ray-marching layer that converts an approximate depth map into a deep 3D representation, which is fused into occlusion queries via a learned traversal.
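For context, the classical (non-learned) version of such an occlusion query is a ray march over a height/depth map: from each pixel, step toward the light and test whether the terrain rises above the ray. The sketch below shows that baseline with illustrative names; OutCast's contribution is replacing this hard test with a learned deep representation and traversal. Border clamping and fixed step counts are simplifications:

```python
import numpy as np

def hard_shadow_mask(height, step_xy, rise_per_step, n_steps=32):
    """Classical image-space ray march over a height map.
    From every pixel, step (step_xy) toward the light while the ray
    climbs by rise_per_step (the light's slope); if the terrain at a
    sample lies above the ray, the pixel is in shadow.
    Rays leaving the image are clamped to the border (simplification)."""
    h, w = height.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    ray_h = height.astype(float).copy()   # ray starts on the surface
    shadow = np.zeros((h, w), dtype=bool)
    for _ in range(n_steps):
        x = x + step_xy[0]
        y = y + step_xy[1]
        ray_h = ray_h + rise_per_step
        xi = np.clip(np.round(x).astype(int), 0, w - 1)
        yi = np.clip(np.round(y).astype(int), 0, h - 1)
        shadow |= height[yi, xi] > ray_h + 1e-6
    return shadow

# A flat floor with one tall wall; light comes from the +x direction,
# so pixels left of the wall fall in its cast shadow.
terrain = np.zeros((1, 8))
terrain[0, 5] = 5.0
mask = hard_shadow_mask(terrain, (1.0, 0.0), 0.1, n_steps=8)
```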
Shadow Layers for Participating Media
This paper generalises shadow layers to an arbitrary number of occluders, and proposes a prototype implementation that renders the main image and shadow layers in a single pass with an affordable computational overhead.
Neural Precomputed Radiance Transfer
Introduces four different neural network architectures and shows that those grounded in light-transport models and PRT-inspired principles improve the quality of global-illumination predictions at equal training time and network size, without requiring high-end ray-tracing hardware.
References
Multi-view relighting using a geometry-aware network
This work proposes the first learning-based algorithm that can relight images of an outdoor scene in a plausible and controllable manner given multiple views, using a geometry-aware neural network that exploits multiple geometry cues along with source and target shadow masks computed from a noisy proxy geometry obtained by multi-view stereo.
Depth synthesis and local warps for plausible image-based navigation
This work introduces a new IBR algorithm that is robust to missing or unreliable geometry, providing plausible novel views even in regions quite far from the input camera positions, and demonstrates novel view synthesis in real time for multiple challenging scenes with significant depth complexity.
Interactive relighting in single low-dynamic range images
This paper proposes a method that lets users interactively edit the illumination of a scene by moving existing lights and inserting synthetic ones; it requires only a small amount of user annotation and a single low-dynamic-range (LDR) image.
Deep image-based relighting from optimal sparse samples
This work presents an image-based relighting method that can synthesize scene appearance under novel, distant illumination from the visible hemisphere, from only five images captured under pre-defined directional lights, and demonstrates, on both synthetic and real scenes, that this method is able to reproduce complex, high-frequency lighting effects like specularities and cast shadows.
Deep blending for free-viewpoint image-based rendering
This work presents a new deep learning approach to blending for IBR, in which held-out real image data is used to learn blending weights to combine input photo contributions, and designs the network architecture and the training loss to provide high quality novel view synthesis, while reducing temporal flickering artifacts.
Multiview Intrinsic Images of Outdoors Scenes with an Application to Relighting
The image-formation model is used to express reflectance as a function of discrete visibility values for shadow and light, which makes it possible to introduce a robust visibility classifier for pairs of points in a scene and to compute high-quality reflectance and shading layers.
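As a toy illustration of this kind of intrinsic decomposition, assume the multiplicative model image = reflectance × shading, with shading driven by a binary sun-visibility label. The two-term sun/sky shading model and all names below are invented simplifications, not the paper's formulation:

```python
import numpy as np

def reflectance_from_visibility(image, sun_visibility, sun=0.8, sky=0.2):
    """Toy intrinsic-image model: image = reflectance * shading, where
    shading = sky + v * sun and v in {0, 1} is a per-pixel sun-visibility
    label (1 = lit, 0 = in shadow). Dividing the image by the shading
    implied by the visibility labels recovers the reflectance layer."""
    shading = sky + sun_visibility * sun
    return image / np.maximum(shading, 1e-6)

# A constant-reflectance (0.5) surface, one pixel lit and one shadowed:
image = np.array([[0.5, 0.1]])   # 0.5 * (0.2 + 0.8)  and  0.5 * 0.2
vis = np.array([[1.0, 0.0]])
albedo = reflectance_from_visibility(image, vis)  # recovers 0.5 for both
```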
Neural Inverse Rendering of an Indoor Scene From a Single Image
This work proposes the first learning based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image, and uses physically-based rendering to create a large-scale synthetic dataset, named SUNCG-PBR, which is a significant improvement over prior datasets.
Free View Synthesis
This work presents a method for novel view synthesis from input images that are freely distributed around a scene that can synthesize images for free camera movement through the scene, and works for general scenes with unconstrained geometric layouts.
Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF From a Single Image
This work presents a deep inverse-rendering framework for indoor scenes that combines novel methods for mapping complex materials onto existing indoor-scene datasets with a new physically-based GPU renderer, yielding a large-scale, photorealistic indoor dataset.
Scalable inside-out image-based rendering
The aim is to give users real-time free-viewpoint rendering of real indoor scenes captured with off-the-shelf equipment, such as a high-quality color camera and a commodity depth sensor, using a tiled IBR algorithm that preserves quality while economizing on the expected contributions that entire groups of input pixels make to the final image.