Deep Parametric Indoor Lighting Estimation

@article{Gardner2019DeepPI,
  title={Deep Parametric Indoor Lighting Estimation},
  author={Marc-Andr{\'e} Gardner and Yannick Hold-Geoffroy and Kalyan Sunkavalli and Christian Gagn{\'e} and Jean-Fran{\c{c}}ois Lalonde},
  journal={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={7174--7182}
}
We present a method to estimate lighting from a single image of an indoor scene. Previous work has used an environment map representation that does not account for the localized nature of indoor lighting. Instead, we represent lighting as a set of discrete 3D lights with geometric and photometric parameters. We train a deep neural network to regress these parameters from a single image, on a dataset of environment maps annotated with depth. We propose a differentiable layer to convert these…
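The parametric representation described in the abstract can be sketched as a small data structure plus a projection back into an environment map for supervision. The field names and the angular-falloff splatting below are illustrative assumptions, not the paper's exact parameterization or its differentiable layer:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ParametricLight:
    """One discrete 3D light (illustrative fields, not the paper's exact set)."""
    direction: np.ndarray  # unit vector toward the light, shape (3,)
    distance: float        # distance from the camera
    radius: float          # angular size of the light
    color: np.ndarray      # RGB intensity, shape (3,)

def lights_to_envmap(lights, height=64, width=128):
    """Splat a set of parametric lights into an equirectangular environment map.

    A rough stand-in for a differentiable projection layer: each light
    contributes a smooth angular falloff around its direction, so gradients
    can flow from an image-space loss back to the light parameters.
    """
    v, u = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    theta = (v + 0.5) / height * np.pi       # polar angle, 0 at the top row
    phi = (u + 0.5) / width * 2 * np.pi      # azimuth
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)], axis=-1)  # (H, W, 3)
    envmap = np.zeros((height, width, 3))
    for light in lights:
        cos_angle = dirs @ light.direction   # (H, W) cosine to the lobe axis
        falloff = np.exp((cos_angle - 1.0) / max(light.radius, 1e-4))
        envmap += falloff[..., None] * light.color
    return envmap
```

A few such lights are far cheaper to regress and to edit than a full HDR environment map, which is the representational trade-off the paper argues for.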
Lighting, Reflectance and Geometry Estimation from 360° Panoramic Stereo
TLDR
Both quantitative and qualitative experiments show that the method, benefiting from the 360° observation of the scene, outperforms prior state-of-the-art methods and enables more augmented reality applications such as mirror-object insertion.
Deep Lighting Environment Map Estimation from Spherical Panoramas
TLDR
This work presents a data-driven model that estimates an HDR lighting environment map from a single LDR monocular spherical panorama using a global Lambertian assumption that helps to overcome issues related to pre-baked lighting.
Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting
TLDR
This work proposes a unified, learning-based inverse rendering framework that formulates 3D spatially-varying lighting, together with a novel Volumetric Spherical Gaussian representation that parameterizes the exitant radiance of 3D scene surfaces on a voxel grid.
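A spherical Gaussian, the primitive underlying the Volumetric Spherical Gaussian representation above, has the standard closed form G(v) = μ · exp(λ(v·ξ − 1)) for a lobe axis ξ, sharpness λ, and amplitude μ. A minimal evaluation sketch (the parameter names are the conventional ones, not necessarily the paper's notation):

```python
import numpy as np

def spherical_gaussian(v, axis, sharpness, amplitude):
    """Evaluate a spherical Gaussian lobe G(v) = mu * exp(lambda * (v . xi - 1)).

    v:         query direction(s), unit vectors, shape (..., 3)
    axis:      lobe axis xi, unit vector, shape (3,)
    sharpness: lambda > 0; larger values give a narrower lobe
    amplitude: mu, the peak value attained when v == axis
    """
    cos_angle = np.asarray(v) @ np.asarray(axis)
    return amplitude * np.exp(sharpness * (cos_angle - 1.0))
```

Because the exponent is zero when v aligns with the axis, the lobe peaks at exactly `amplitude` and decays smoothly with angular distance, which makes such lobes convenient to store per voxel and to differentiate through.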
GMLight: Lighting Estimation via Geometric Distribution Approximation
TLDR
Geometric Mover's Light (GMLight), a lighting estimation framework that employs a regression network and a generative projector, is presented; it achieves accurate illumination estimation and superior relighting fidelity for 3D object insertion.
Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination
TLDR
A deep learning model is proposed that estimates a 3D volumetric RGBA model of a scene, including content outside the observed field of view, and then uses standard volume rendering to estimate the incident illumination at any 3D location within that volume.
Object-based Illumination Estimation with Rendering-aware Neural Networks
TLDR
This work proposes an approach that uses physical principles from inverse rendering to constrain the solution and neural networks to expedite the more computationally expensive portions of processing, increasing robustness to noisy input data and improving temporal and spatial stability.
Light Direction and Color Estimation from Single Image with Deep Regression
TLDR
Apart from showing good performance on synthetic images, this work proposes a preliminary procedure to obtain light positions for the Multi-Illumination dataset and shows that the trained model achieves good performance when applied to real scenes.
PointAR: Efficient Lighting Estimation for Mobile Augmented Reality
TLDR
The pipeline, PointAR, takes a single RGB-D image captured from the mobile camera and a 2D location in that image, and estimates 2nd-order spherical harmonics coefficients, which can be directly utilized by rendering engines for supporting spatially variant indoor lighting in the context of augmented reality.
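Second-order spherical harmonics, as regressed by PointAR, give 9 coefficients per color channel; a rendering engine can then shade a diffuse surface by convolving them with a clamped-cosine kernel. A sketch of that standard evaluation (constants from Ramamoorthi and Hanrahan's irradiance formulation; this is the generic recipe, not PointAR's own code):

```python
import numpy as np

def sh_irradiance(coeffs, normal):
    """Diffuse irradiance at a unit surface normal from 2nd-order (9) SH coefficients.

    coeffs: shape (9,), or (9, 3) for RGB, ordered (l, m) =
            (0,0), (1,-1), (1,0), (1,1), (2,-2), (2,-1), (2,0), (2,1), (2,2).
    Uses the standard closed form obtained by convolving the SH lighting
    with the clamped-cosine kernel (Ramamoorthi & Hanrahan, 2001).
    """
    x, y, z = normal
    c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
    # SH basis values at the normal, with the cosine-lobe weights folded in.
    basis = np.array([
        c4,                    # (0, 0)
        2 * c2 * y,            # (1,-1)
        2 * c2 * z,            # (1, 0)
        2 * c2 * x,            # (1, 1)
        2 * c1 * x * y,        # (2,-2)
        2 * c1 * y * z,        # (2,-1)
        c3 * z * z - c5,       # (2, 0)
        2 * c1 * x * z,        # (2, 1)
        c1 * (x * x - y * y),  # (2, 2)
    ])
    return basis @ np.asarray(coeffs)
```

The appeal for mobile AR is that the entire lighting environment compresses to 27 numbers, so the estimate is tiny to transmit and nearly free to evaluate per pixel.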
Indoor Lighting Estimation Using an Event Camera
TLDR
This paper introduces a novel setup based on an event camera to help alleviate the ambiguity, and demonstrates that estimating the distance of a light source becomes a well-posed problem under this setup, based on which an optimization-based method and a learning-based method are proposed.

References

Showing 1–10 of 29 references
Neural Illumination: Lighting Prediction for Indoor Environments
  • Shuran Song, T. Funkhouser
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR
This paper proposes "Neural Illumination," a new approach that decomposes illumination prediction into several simpler differentiable sub-tasks: 1) geometry estimation, 2) scene completion, and 3) LDR-to-HDR estimation.
Learning to predict indoor illumination from a single image
TLDR
An end-to-end deep neural network is trained that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting, allowing it to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods.
Deep Outdoor Illumination Estimation
TLDR
It is demonstrated that the approach allows the recovery of plausible illumination conditions and enables photorealistic virtual object insertion from a single image, and significantly outperforms previous solutions to this problem.
Estimating the Natural Illumination Conditions from a Single Outdoor Image
TLDR
Given a single outdoor image, a method for estimating the likely illumination conditions of the scene is presented, and it is shown how to realistically insert synthetic 3D objects into the scene and how to transfer appearance across images while keeping the illumination consistent.
Multiple Light Source Estimation in a Single Image
TLDR
This paper presents a new method to estimate the illumination in a single image as a combination of achromatic lights with their 3D directions and relative intensities, and presents a novel surface normal approximation using an osculating arc for the estimation of zenith angles.
DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality
TLDR
The authors' inference runs at interactive frame rates on a mobile device, enabling realistic rendering of virtual objects into real scenes for mobile mixed reality, and improves the realism of rendered objects compared to state-of-the-art methods for both indoor and outdoor scenes.
Learning to reconstruct shape and spatially-varying reflectance from a single image
TLDR
This work demonstrates that it can recover non-Lambertian, spatially-varying BRDFs and complex geometry belonging to any arbitrary shape class, from a single RGB image captured under a combination of unknown environment illumination and flash lighting.
Intrinsic Scene Properties from a Single RGB-D Image
In this paper, we present a technique for recovering a model of shape, illumination, reflectance, and shading from a single image taken from an RGB-D sensor. To do this, we extend the SIRFS ("shape, …
LIME: Live Intrinsic Material Estimation
TLDR
This work presents the first end-to-end approach for real-time material estimation for general object shapes with uniform material that only requires a single color image as input, and proposes a novel, highly efficient perceptual rendering loss that mimics real-world image formation and obtains intermediate results even during run time.
Matterport3D: Learning from RGB-D Data in Indoor Environments
TLDR
Matterport3D is introduced, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes that enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.