Corpus ID: 239998218

Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition

@article{Boss2021NeuralPILNP,
  title={Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition},
  author={Mark Boss and Varun Jampani and Raphael Braun and Ce Liu and Jonathan T. Barron and Hendrik P. A. Lensch},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14373}
}
Decomposing a scene into its shape, reflectance and illumination is a fundamental problem in computer vision and graphics. Neural approaches such as NeRF have achieved remarkable success in view synthesis, but do not explicitly perform decomposition and instead operate exclusively on radiance (the product of reflectance and illumination). Extensions to NeRF, such as NeRD, can perform decomposition but struggle to accurately recover detailed illumination, thereby significantly limiting realism… 
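
For context, "radiance" here is shorthand for the full reflectance-lighting integral that NeRF-style models fit directly; pre-integrating the lighting means folding that integral into the illumination representation ahead of time, so rendering becomes a cheap lookup rather than a per-ray integration. As a reminder, in standard rendering-equation notation (symbols are the conventional ones, not taken from the paper):

L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\omega_i)\, (\mathbf{n} \cdot \omega_i)\, d\omega_i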

Citations

Extracting Triangular 3D Models, Materials, and Lighting From Images
TLDR: This work outputs triangle meshes with spatially-varying materials and environment lighting that can be deployed unmodified in any traditional graphics engine, and introduces a differentiable formulation of the split-sum approximation of environment lighting to efficiently recover all-frequency lighting.
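
The split-sum approximation referenced here is a standard identity from real-time rendering: the lighting integral is factored into two terms that can each be pre-integrated separately. A sketch in conventional specular-BRDF notation (an assumption about the intended form, not copied from the paper):

\int_{\Omega} L_i(\omega_i)\, f_r(\omega_i, \omega_o)\, (\mathbf{n}\cdot\omega_i)\, d\omega_i
\;\approx\;
\frac{\int_{\Omega} L_i(\omega_i)\, D(\omega_i)\, (\mathbf{n}\cdot\omega_i)\, d\omega_i}{\int_{\Omega} D(\omega_i)\, (\mathbf{n}\cdot\omega_i)\, d\omega_i}
\;\cdot\;
\int_{\Omega} f_r(\omega_i, \omega_o)\, (\mathbf{n}\cdot\omega_i)\, d\omega_i

The first factor is a pre-filtered environment map (one level per roughness value); the second reduces to a 2D lookup table in roughness and viewing angle.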
Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields
TLDR: Ref-NeRF replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance, structured using a collection of spatially-varying scene properties; together with a regularizer on normal vectors, this model significantly improves the realism and accuracy of specular reflections.
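
The reflected-radiance parameterization queries appearance as a function of the view direction mirrored about the surface normal, so nearby points on a glossy surface see a smoothly varying input. The mirror-reflection formula itself is standard; a minimal sketch (hypothetical helper, not Ref-NeRF's actual code):

import numpy as np

def reflect(view_dir, normal):
    # omega_r = 2 (omega_o . n) n - omega_o for unit vectors:
    # the standard mirror reflection of the view direction about the normal.
    return 2.0 * np.dot(view_dir, normal) * normal - view_dir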
NeROIC: Neural Rendering of Objects from Online Image Collections
TLDR: This work presents a novel method to acquire object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects from photographs with varying cameras, illumination, and backgrounds. It also introduces a robust normal estimation technique that eliminates the effect of geometric noise while retaining crucial details.
VoLux-GAN: A Generative Model for 3D Face Synthesis with HDRI Relighting
TLDR: VoLux-GAN is a generative framework for synthesizing 3D-aware faces with convincing relighting: a volumetric HDRI relighting method efficiently accumulates albedo, diffuse, and specular lighting contributions along each 3D ray for any desired HDR environment map.

References

Showing 1-10 of 68 references
Single-image SVBRDF capture with a rendering-aware deep network
TLDR: This work tackles lightweight appearance capture by training a deep neural network to automatically extract and interpret visual cues from a single image, using a network that combines an encoder-decoder convolutional track for local feature extraction with a fully-connected track for global feature extraction and propagation.
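
The two-track layout described here, a convolutional encoder-decoder for local cues plus a fully-connected path for global context, can be sketched as follows. This is an illustrative PyTorch layout under assumed layer sizes, not the paper's actual architecture:

import torch
import torch.nn as nn

class TwoTrackNet(nn.Module):
    """Illustrative only: conv encoder-decoder (local track) with a
    fully-connected track that pools, transforms, and re-broadcasts
    global features at the bottleneck."""
    def __init__(self, in_ch=3, feat=32, global_dim=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.global_fc = nn.Sequential(  # global track on pooled features
            nn.Linear(feat * 2, global_dim), nn.ReLU(),
            nn.Linear(global_dim, feat * 2),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(feat * 4, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, in_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        local = self.enc(x)                         # (B, 2*feat, H/4, W/4)
        g = self.global_fc(local.mean(dim=(2, 3)))  # pool to a global vector
        g = g[:, :, None, None].expand_as(local)    # broadcast back onto the grid
        return self.dec(torch.cat([local, g], 1))   # fuse local and global tracks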
Reflectance modeling by neural texture synthesis
TLDR: To capture rich, spatially-varying parametric reflectance models from a single image, this work uses a recent, powerful texture descriptor based on deep convolutional neural network statistics to "softly" compare the model prediction and the exemplars, without requiring an explicit point-to-point correspondence between them.
Neural Reflectance Fields for Appearance Acquisition
TLDR: Neural reflectance fields can be estimated from images captured with a simple collocated camera-light setup; they accurately model the appearance of real-world scenes with complex geometry and reflectance, and enable a complete pipeline from practical, high-quality appearance acquisition to 3D scene composition and rendering.
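
With a collocated camera and light, the attenuation toward the light source equals the attenuation toward the camera along the same ray, which makes the reflectance-field rendering integral particularly simple. A rough sketch in standard volume-rendering notation (an assumption about the form, not the paper's exact equation):

L(\mathbf{r}) = \int_0^\infty \tau(t)^2\, \sigma(\mathbf{x}(t))\, f_r\big(\mathbf{x}(t), \omega_o, \omega_o\big)\, L_\ell(t)\, dt,
\qquad
\tau(t) = \exp\!\Big(-\!\int_0^t \sigma(\mathbf{x}(s))\, ds\Big)

where the squared transmittance \tau(t)^2 accounts for light traveling to the point and back along the same path, and L_\ell(t) absorbs the light's intensity and distance falloff.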
Learning to reconstruct shape and spatially-varying reflectance from a single image
TLDR: This work demonstrates the recovery of non-Lambertian, spatially-varying BRDFs and complex geometry belonging to arbitrary shape classes from a single RGB image captured under a combination of unknown environment illumination and flash lighting.
Deep image-based relighting from optimal sparse samples
TLDR: This work presents an image-based relighting method that synthesizes scene appearance under novel, distant illumination from the visible hemisphere using only five images captured under pre-defined directional lights, and demonstrates on both synthetic and real scenes that it reproduces complex, high-frequency lighting effects such as specularities and cast shadows.
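
What makes a handful of images sufficient is the linearity the method builds on: for a fixed scene and viewpoint, the image is linear in the illumination, so appearance under a distant environment is a weighted combination of appearances under basis lights. A sketch of that underlying identity (standard light-transport notation, not the paper's learned formulation):

I(L) = \sum_{k=1}^{5} w_k\, I(\ell_k) \quad \text{whenever} \quad L = \sum_{k=1}^{5} w_k\, \ell_k

where \ell_k are the pre-defined directional lights; the paper's network generalizes beyond this linear span by learning the relighting function directly.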
Neural Inverse Rendering of an Indoor Scene From a Single Image
TLDR: This work proposes the first learning-based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image, and uses physically-based rendering to create a large-scale synthetic dataset, SUNCG-PBR, that significantly improves over prior datasets.
Deep view synthesis from sparse photometric images
TLDR: This paper synthesizes novel viewpoints across a wide range of viewing directions (covering a 60° cone) from a sparse set of just six input views, using a deep convolutional network trained to directly synthesize the new views.
Neural Illumination: Lighting Prediction for Indoor Environments
Shuran Song, T. Funkhouser. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
TLDR: This paper proposes "Neural Illumination," a new approach that decomposes illumination prediction into several simpler differentiable sub-tasks: 1) geometry estimation, 2) scene completion, and 3) LDR-to-HDR estimation.
Generative modelling of BRDF textures from flash images
TLDR: This work learns a latent space for easy capture, consistent interpolation, and efficient reproduction of visual material appearance that allows rendering in complex scenes and illuminations, matching the appearance of the input photograph.
Learning to predict indoor illumination from a single image
TLDR: An end-to-end deep neural network is trained to directly regress a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting, automatically recovering high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods.