Learning to predict indoor illumination from a single image

@article{Gardner2017LearningTP,
  title={Learning to predict indoor illumination from a single image},
  author={Marc-Andr{\'e} Gardner and Kalyan Sunkavalli and Ersin Yumer and Xiaohui Shen and Emiliano Gambaretto and Christian Gagn{\'e} and Jean-Fran{\c{c}}ois Lalonde},
  journal={ACM Transactions on Graphics (TOG)},
  year={2017},
  volume={36},
  pages={1--14}
}
We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. We show that this can be accomplished in a three-step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field…
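The abstract's pipeline separates light annotation, light localization, and intensity estimation. Below is a minimal PyTorch sketch of the localization stage only; the class name, layer sizes, and 32x64 panorama resolution are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical light-location network (step 2 of the pipeline): maps a
# limited-FOV LDR crop to per-pixel light logits over a small panorama.
class LightLocationNet(nn.Module):
    def __init__(self, pano_h=32, pano_w=64):
        super().__init__()
        self.pano_shape = (pano_h, pano_w)
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4))
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(64 * 4 * 4, pano_h * pano_w))

    def forward(self, x):
        return self.head(self.features(x)).view(-1, *self.pano_shape)

net = LightLocationNet()
crops = torch.rand(8, 3, 128, 128)              # LDR input crops
# Targets stand in for light masks produced by the classifier (step 1).
masks = (torch.rand(8, 32, 64) > 0.95).float()
loss = nn.functional.binary_cross_entropy_with_logits(net(crops), masks)
loss.backward()
```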

Neural Illumination: Lighting Prediction for Indoor Environments

  • S. Song, T. Funkhouser
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR
This paper proposes "Neural Illumination," a new approach that decomposes illumination prediction into several simpler differentiable sub-tasks: 1) geometry estimation, 2) scene completion, and 3) LDR-to-HDR estimation.
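The decomposition invites a direct composition of differentiable modules. In the toy sketch below, single convolutions stand in for the three sub-networks, and geometry is concatenated rather than used to warp the observation into panorama space as the actual system does; both are simplifications.

```python
import torch
import torch.nn as nn

# Single-layer stand-ins for the three sub-networks.
geometry_net   = nn.Conv2d(3, 1, 3, padding=1)    # 1) image -> geometry
completion_net = nn.Conv2d(4, 3, 3, padding=1)    # 2) -> completed LDR pano
ldr_to_hdr_net = nn.Conv2d(3, 3, 3, padding=1)    # 3) LDR pano -> HDR pano

def predict_illumination(image):
    # Chaining the stages keeps everything differentiable, so an HDR
    # reconstruction loss can supervise all three sub-tasks jointly.
    geom = geometry_net(image)
    pano = completion_net(torch.cat([image, geom], dim=1))
    return ldr_to_hdr_net(pano)

hdr = predict_illumination(torch.rand(1, 3, 64, 128))
```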

Learning to Estimate Indoor Lighting from 3D Objects

TLDR
A deep learning method is developed that encodes a latent space of indoor lighting with few parameters; trained on a database of environment maps, it generates predictions of the environment light that are both more realistic and more accurate than previous methods.

Deep Spherical Gaussian Illumination Estimation for Indoor Scene

TLDR
A learning-based method to estimate high dynamic range (HDR) indoor illumination from only a single low dynamic range (LDR) photograph of limited field-of-view, using spherical Gaussian functions with fixed center directions and bandwidths, allowing only the weights to vary.
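The spherical Gaussian parameterization summarized above is easy to make concrete. A short NumPy sketch follows; per that setup, lobe centers and bandwidths are frozen and only the RGB weights are free, while the lobe count and bandwidth value here are arbitrary choices for illustration.

```python
import numpy as np

def sg_envmap(weights, centers, sharpness, dirs):
    # L(v) = sum_k w_k * exp(lambda_k * (dot(v, xi_k) - 1)); with centers
    # xi_k and sharpness lambda_k frozen, radiance is linear in the weights.
    cos = dirs @ centers.T                              # (N, K)
    basis = np.exp(sharpness[None, :] * (cos - 1.0))    # (N, K)
    return basis @ weights                              # (N, 3)

K, N = 24, 1000                                         # arbitrary sizes
centers = np.random.randn(K, 3)
centers /= np.linalg.norm(centers, axis=1, keepdims=True)
sharpness = np.full(K, 8.0)                             # fixed bandwidth
weights = np.random.rand(K, 3)                          # the only free parameters
dirs = np.random.randn(N, 3)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
radiance = sg_envmap(weights, centers, sharpness, dirs)
```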

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination

TLDR
A deep learning model is proposed that estimates a 3D volumetric RGBA model of a scene, including content outside the observed field of view, and then uses standard volume rendering to estimate the incident illumination at any 3D location within that volume.
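The "standard volume rendering" step is ordinary front-to-back alpha compositing through the predicted RGBA grid. A minimal NumPy sketch, assuming a unit-cube volume, nearest-neighbor lookups, and uniform sampling; the real system uses a multiscale volume and more careful sampling.

```python
import numpy as np

def render_ray(rgba, origin, direction, n_samples=64, t_max=1.5):
    # Front-to-back alpha compositing through a unit-cube RGBA grid.
    D, H, W, _ = rgba.shape
    color, transmittance = np.zeros(3), 1.0
    for t in np.linspace(0.0, t_max, n_samples):
        p = origin + t * direction                   # point in [0, 1]^3
        idx = (p * [D - 1, H - 1, W - 1]).astype(int)
        if np.any(idx < 0) or np.any(idx >= [D, H, W]):
            break                                    # left the volume
        r, g, b, a = rgba[tuple(idx)]
        color += transmittance * a * np.array([r, g, b])
        transmittance *= 1.0 - a
    return color

# Shooting many such rays from one 3D point approximates the incident
# illumination there, which is how the lighting volume is queried.
volume = np.random.rand(32, 32, 32, 4) * [1.0, 1.0, 1.0, 0.1]
sample = render_ray(volume, np.array([0.5, 0.5, 0.5]), np.array([0.0, 0.0, 1.0]))
```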

From Faces to Outdoor Light Probes

TLDR
This paper presents an approach to directly estimate an HDR light probe from a single LDR photograph, shot outdoors with a consumer camera, without specialized calibration targets or equipment, and shows that relighting objects with HDR light probes estimated by the method yields realistic results in a wide variety of settings.

Deep Parametric Indoor Lighting Estimation

TLDR
It is demonstrated, via quantitative and qualitative evaluations, that the representation and training scheme lead to more accurate results compared to previous work, while allowing for more realistic 3D object compositing with spatially-varying lighting.
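A parametric representation like the one described trades a full environment map for a handful of per-light parameters. A hedged NumPy sketch of the idea (field names, the hard-disk splat, and the equirectangular layout are illustrative assumptions, not the paper's exact parameterization); moving the query point shifts each light's apparent direction, which is what makes the lighting spatially varying.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ParametricLight:
    position: np.ndarray    # 3D position in scene coordinates
    color: np.ndarray       # RGB intensity
    size: float             # angular extent, in radians

def envmap_at(lights, query, h=32, w=64):
    # Equirectangular grid of unit directions.
    theta = (np.arange(w) + 0.5) / w * 2 * np.pi    # azimuth
    phi = (np.arange(h) + 0.5) / h * np.pi          # polar angle
    dirs = np.stack([np.sin(phi)[:, None] * np.cos(theta)[None, :],
                     np.repeat(np.cos(phi)[:, None], w, axis=1),
                     np.sin(phi)[:, None] * np.sin(theta)[None, :]], axis=-1)
    env = np.zeros((h, w, 3))
    for lt in lights:
        to_light = lt.position - query              # depends on query point
        to_light /= np.linalg.norm(to_light)
        ang = np.arccos(np.clip(dirs @ to_light, -1.0, 1.0))
        env += (ang < lt.size)[..., None] * lt.color   # hard angular disk
    return env

lamp = ParametricLight(np.array([1.0, 2.0, 0.0]), np.array([5.0, 4.5, 4.0]), 0.2)
env = envmap_at([lamp], query=np.array([0.0, 1.0, 1.0]))
```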

A Dataset of Multi-Illumination Images in the Wild

TLDR
A new multi-illumination dataset of more than 1000 real scenes, each captured in high dynamic range and high resolution, under 25 lighting conditions is introduced, demonstrating the richness of this dataset by training state-of-the-art models for three challenging applications: single-image illumination estimation, image relighting, and mixed-illuminant white balance.

Neural Inverse Rendering of an Indoor Scene From a Single Image

TLDR
This work proposes the first learning-based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image, and uses physically-based rendering to create a large-scale synthetic dataset, named SUNCG-PBR, which is a significant improvement over prior datasets.

DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality

TLDR
The authors' inference runs at interactive frame rates on a mobile device, enabling realistic rendering of virtual objects into real scenes for mobile mixed reality, and improves the realism of rendered objects compared to state-of-the-art methods for both indoor and outdoor scenes.

Deep Lighting Environment Map Estimation from Spherical Panoramas

TLDR
This work presents a data-driven model that estimates an HDR lighting environment map from a single LDR monocular spherical panorama using a global Lambertian assumption that helps to overcome issues related to pre-baked lighting.
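The global Lambertian assumption reduces lighting recovery to estimating what a diffuse surface receives from every direction. A NumPy sketch of that cosine-weighted integral over an equirectangular map; the resolution and random stand-in panorama are placeholders.

```python
import numpy as np

def lambertian_irradiance(envmap, normal):
    # Cosine-weighted integral of an equirectangular HDR map: the light a
    # diffuse surface with this normal receives from the whole sphere.
    h, w, _ = envmap.shape
    phi = (np.arange(h) + 0.5) / h * np.pi          # polar angle per row
    theta = (np.arange(w) + 0.5) / w * 2 * np.pi    # azimuth per column
    dirs = np.stack([np.sin(phi)[:, None] * np.cos(theta)[None, :],
                     np.repeat(np.cos(phi)[:, None], w, axis=1),
                     np.sin(phi)[:, None] * np.sin(theta)[None, :]], axis=-1)
    cos = np.clip(dirs @ normal, 0.0, None)         # clamped n . l
    # Texel solid angle shrinks as sin(phi) toward the poles.
    d_omega = np.sin(phi)[:, None] * (np.pi / h) * (2 * np.pi / w)
    return (envmap * (cos * d_omega)[..., None]).sum(axis=(0, 1)) / np.pi

env = np.random.rand(64, 128, 3)                    # stand-in HDR panorama
E = lambertian_irradiance(env, np.array([0.0, 1.0, 0.0]))
```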
...

References

SHOWING 1-10 OF 38 REFERENCES

Deep Outdoor Illumination Estimation

TLDR
It is demonstrated that the approach allows the recovery of plausible illumination conditions and enables photorealistic virtual object insertion from a single image and significantly outperforms previous solutions to this problem.

DeLight-Net: Decomposing Reflectance Maps into Specular Materials and Natural Illumination

TLDR
A convolutional neural network architecture, trained solely on synthetic data, is proposed to reconstruct both material parameters and illumination from a reflectance map, i.e., a single 2D image of a sphere of one material under one illumination.

Deep Reflectance Maps

TLDR
A convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions is proposed in an end-to-end learning formulation that directly predicts a reflectance map from the image itself.

Intrinsic Scene Properties from a Single RGB-D Image

In this paper, we present a technique for recovering a model of shape, illumination, reflectance, and shading from a single image taken from an RGB-D sensor. To do this, we extend the SIRFS (“shape, illumination, and reflectance from shading”) model…

Learning Data-Driven Reflectance Priors for Intrinsic Image Decomposition

TLDR
A model is trained to predict relative reflectance ordering between image patches from large-scale human annotations, producing a data-driven reflectance prior, and it is shown how to naturally integrate this learned prior into existing energy minimization frameworks for intrinsic image decomposition.
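The ordering prior can be trained with a standard margin ranking objective. A minimal PyTorch sketch, assuming 16x16 patches and a small MLP scorer (both arbitrary choices; the paper's network and annotation pipeline are more involved).

```python
import torch
import torch.nn as nn

# A small scorer assigns each patch a scalar (log-)reflectance estimate;
# a margin ranking loss enforces the annotated darker/brighter ordering.
scorer = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 64),
                       nn.ReLU(), nn.Linear(64, 1))

patch_a = torch.rand(32, 3, 16, 16)                   # paired image patches
patch_b = torch.rand(32, 3, 16, 16)
order = (torch.randint(0, 2, (32,)) * 2 - 1).float()  # +1: A brighter than B

loss = nn.MarginRankingLoss(margin=0.1)(
    scorer(patch_a).squeeze(1), scorer(patch_b).squeeze(1), order)
loss.backward()
```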

Material recognition in the wild with the Materials in Context Database

TLDR
A new, large-scale, open dataset of materials in the wild, the Materials in Context Database (MINC), is introduced, and convolutional neural networks are trained for two tasks: classifying materials from patches, and simultaneous material recognition and segmentation in full images.

Reflectance and Illumination Recovery in the Wild

TLDR
A reflectance model and priors are developed that precisely capture the space of real-world object reflectance, and a flexible illumination model that can represent real-world illumination with priors that combat the deleterious effects of image formation.

Lightweight binocular facial performance capture under uncontrolled lighting

TLDR
This approach is the first to capture facial performances of such high quality from a single stereo rig, and it is demonstrated that it brings facial performance capture out of the studio, into the wild, and within the reach of everybody.

Marr Revisited: 2D-3D Alignment via Surface Normal Prediction

TLDR
A skip-network model built on the pre-trained Oxford VGG convolutional neural network (CNN) for surface normal prediction achieves state-of-the-art accuracy on the NYUv2 RGB-D dataset, and recovers fine object detail compared to previous methods.

EnvyDepth: An Interface for Recovering Local Natural Illumination from Environment Maps

TLDR
EnvyDepth, an interface for recovering local illumination from a single HDR environment map, uses edit propagation to create a detailed collection of virtual point lights that reproduce both the local and the distant lighting effects in the original scene.
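A crude, non-interactive analogue of the virtual-point-light extraction is to promote the brightest solid-angle-weighted texels of an environment map to distant point lights; EnvyDepth's edit propagation and depth-aware local lights go well beyond this NumPy sketch.

```python
import numpy as np

def envmap_to_point_lights(envmap, k=64):
    # Weight each texel by its solid angle, then promote the k brightest
    # texels to distant point lights (unit direction + RGB power).
    h, w, _ = envmap.shape
    phi = (np.arange(h) + 0.5) / h * np.pi
    theta = (np.arange(w) + 0.5) / w * 2 * np.pi
    d_omega = np.sin(phi)[:, None] * (np.pi / h) * (2 * np.pi / w)
    power = (envmap * d_omega[..., None]).reshape(-1, 3)
    top = np.argsort(power.sum(axis=1))[-k:]
    rows, cols = np.unravel_index(top, (h, w))
    dirs = np.stack([np.sin(phi[rows]) * np.cos(theta[cols]),
                     np.cos(phi[rows]),
                     np.sin(phi[rows]) * np.sin(theta[cols])], axis=-1)
    return dirs, power[top]

dirs, powers = envmap_to_point_lights(np.random.rand(64, 128, 3) ** 4)
```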