Free-viewpoint Indoor Neural Relighting from Multi-view Stereo

@article{Philip2021FreeviewpointIN,
  title={Free-viewpoint Indoor Neural Relighting from Multi-view Stereo},
  author={Julien Philip and S{\'e}bastien Morgenthaler and Micha{\"e}l Gharbi and George Drettakis},
  journal={ACM Trans. Graph.},
  year={2021},
  volume={40},
  pages={194:1-194:18}
}
We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation. Our method allows illumination to be changed synthetically while coherently rendering cast shadows and complex glossy materials. We start with multiple images of the scene and a three-dimensional mesh obtained by multi-view stereo (MVS) reconstruction. We assume that lighting is well explained as the sum of a view-independent diffuse component and a view-dependent…
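Concretely, the decomposition the abstract alludes to can be written as follows; the symbols below are our own illustrative notation, not taken from the paper:

```latex
L_o(x, \omega_o) = D(x) + S(x, \omega_o)
```

Here $D(x)$ is the view-independent diffuse radiance at surface point $x$, and $S(x, \omega_o)$ is the view-dependent (glossy) remainder observed from direction $\omega_o$.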

References

Showing 1-10 of 107 references.
Multi-view relighting using a geometry-aware network
TLDR
This work proposes the first learning-based algorithm that can relight images of an outdoor scene in a plausible and controllable manner given multiple views, using a geometry-aware neural network that exploits multiple geometry cues as well as source and target shadow masks computed from a noisy proxy geometry obtained by multi-view stereo.
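As a rough sketch of how such shadow masks can be obtained from a noisy MVS proxy, one can ray-cast from each surface point toward the light; the function below is our illustration (the trimesh library and the name shadow_mask are assumptions, not the paper's code):

```python
import numpy as np
import trimesh  # assumed; any mesh ray-casting library would work


def shadow_mask(mesh: "trimesh.Trimesh", points, normals, light_dir, eps=1e-4):
    """Binary mask: True where a surface point sees a directional light.

    Because the proxy mesh is noisy MVS output, the masks are noisy too;
    a network consumes them as cues rather than as ground truth.
    """
    origins = np.asarray(points) + eps * np.asarray(normals)  # avoid self-hits
    directions = np.tile(np.asarray(light_dir), (len(origins), 1))
    occluded = mesh.ray.intersects_any(origins, directions)
    return ~occluded  # lit where no occluder was hit
```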
Depth synthesis and local warps for plausible image-based navigation
TLDR
This work introduces a new IBR algorithm that is robust to missing or unreliable geometry, providing plausible novel views even in regions quite far from the input camera positions, and demonstrates novel view synthesis in real time for multiple challenging scenes with significant depth complexity.
Interactive relighting in single low-dynamic range images
TLDR
A method for users to interactively edit the illumination of a scene by moving existing lights and inserting synthetic lights into the scene that requires only a small amount of user annotation and a single low-dynamic range (LDR) image is proposed.
Deep image-based relighting from optimal sparse samples
TLDR
This work presents an image-based relighting method that can synthesize scene appearance under novel, distant illumination from the visible hemisphere, from only five images captured under pre-defined directional lights, and demonstrates, on both synthetic and real scenes, that this method is able to reproduce complex, high-frequency lighting effects like specularities and cast shadows.
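The paper replaces naive interpolation with a learned predictor; for intuition only, the classical linear baseline it improves on can be sketched as below (the least-squares weighting over light directions is a deliberately crude assumption of ours):

```python
import numpy as np


def relight_linear(basis_images, basis_lights, target_light):
    """Crude linear relighting baseline.

    basis_images: (k, H, W, 3) photos under k known directional lights.
    basis_lights: (k, 3) unit light directions.
    target_light: (3,) novel direction, expressed as a least-squares
    combination of the basis directions and applied to the images.
    With only ~5 samples this blurs shadows and highlights, which is
    exactly the failure mode a learned method addresses.
    """
    w, *_ = np.linalg.lstsq(np.asarray(basis_lights).T, target_light, rcond=None)
    return np.tensordot(w, basis_images, axes=1)  # (H, W, 3)
```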
Deep blending for free-viewpoint image-based rendering
TLDR
This work presents a new deep learning approach to blending for IBR, in which held-out real image data is used to learn blending weights to combine input photo contributions, and designs the network architecture and the training loss to provide high quality novel view synthesis, while reducing temporal flickering artifacts.
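In spirit, the final composite is a per-pixel weighted sum of the reprojected inputs; below is a minimal sketch with the learned per-pixel scores taken as given (function and argument names are ours):

```python
import numpy as np


def blend_views(warped_views, scores):
    """Softmax-weighted per-pixel blend of reprojected input photos.

    warped_views: (k, H, W, 3) inputs warped into the novel view.
    scores: (k, H, W) per-pixel blending scores; in the paper these
    come from a trained network, here they are simply inputs.
    """
    e = np.exp(scores - scores.max(axis=0, keepdims=True))  # stable softmax
    w = e / e.sum(axis=0, keepdims=True)
    return (w[..., None] * warped_views).sum(axis=0)
```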
Multiview Intrinsic Images of Outdoors Scenes with an Application to Relighting
TLDR
The image formation model is used to express reflectance as a function of discrete visibility values for shadow and light, which enables a robust visibility classifier for pairs of points in a scene and, in turn, the computation of high-quality reflectance and shading layers.
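The underlying intrinsic-image model is multiplicative, I = R * S, so once reflectance and shading are separated, relighting amounts to swapping the shading layer; here is a minimal sketch under that assumption (linear-space images, names ours):

```python
import numpy as np


def intrinsic_relight(image, shading, new_shading, eps=1e-6):
    """Relight via the intrinsic decomposition I = R * S.

    Reflectance is recovered as R = I / S and recombined with a new
    shading layer. Assumes linear (not gamma-encoded) images.
    """
    reflectance = image / np.maximum(shading, eps)
    return reflectance * new_shading
```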
Neural Inverse Rendering of an Indoor Scene From a Single Image
TLDR
This work proposes the first learning based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image, and uses physically-based rendering to create a large-scale synthetic dataset, named SUNCG-PBR, which is a significant improvement over prior datasets.
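Once albedo, normals, and lighting are estimated, the scene can be re-rendered; as a toy illustration we use a single distant light and a Lambertian model, which is a simplification of the paper's lighting representation:

```python
import numpy as np


def lambertian_render(albedo, normals, light_dir, ambient=0.1):
    """Toy re-rendering from inverse-rendering outputs.

    albedo: (H, W, 3); normals: (H, W, 3) unit vectors;
    light_dir: (3,) unit direction toward a single distant light.
    """
    shading = np.clip(normals @ np.asarray(light_dir), 0.0, None) + ambient
    return albedo * shading[..., None]
```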
Free View Synthesis
TLDR
This work presents a method for novel view synthesis from input images that are freely distributed around a scene; it can synthesize images for free camera movement through the scene and works for general scenes with unconstrained geometric layouts.
Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF From a Single Image
TLDR
A deep inverse rendering framework for indoor scenes is presented, combining novel methods that map complex materials onto existing indoor scene datasets with a new physically-based GPU renderer to create a large-scale, photorealistic indoor dataset.
Scalable inside-out image-based rendering
TLDR
The aim is to give users real-time free-viewpoint rendering of real indoor scenes captured with off-the-shelf equipment, such as a high-quality color camera and a commodity depth sensor, by designing a tiled IBR scheme that preserves quality while economizing on the expected contributions that entire groups of input pixels make to the final image.