GMLight: Lighting Estimation via Geometric Distribution Approximation

@article{Zhan2022GMLightLE,
  title={GMLight: Lighting Estimation via Geometric Distribution Approximation},
  author={Fangneng Zhan and Yingchen Yu and Rongliang Wu and Changgong Zhang and Shijian Lu and Ling Shao and Feiying Ma and Xuansong Xie},
  journal={IEEE Transactions on Image Processing},
  year={2022},
  volume={31},
  pages={2268-2278}
}
Inferring the scene illumination from a single image is an essential yet challenging task in computer vision and computer graphics. Existing works estimate lighting by regressing representative illumination parameters or generating illumination maps directly. However, these methods often suffer from poor accuracy and generalization. This paper presents Geometric Mover’s Light (GMLight), a lighting estimation framework that employs a regression network and a generative projector for effective… 

EMLight: Lighting Estimation via Spherical Distribution Approximation

A novel spherical mover's loss is designed that guides the network to regress light distribution parameters accurately by exploiting the subtleties of spherical distributions; guided by the predicted spherical distribution, light intensity, and ambient term, a neural projector then synthesizes panoramic illumination maps with realistic light frequencies.
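As an illustration of the idea behind a mover's-style loss on the sphere, the following is a minimal NumPy sketch that compares two discrete light distributions defined on unit-sphere anchor points using an entropy-regularized (Sinkhorn) optimal-transport distance with angular distance as the ground cost. The anchor layout, regularization strength, and iteration count are illustrative assumptions, not the exact formulation used in EMLight or GMLight.

```python
import numpy as np

def sinkhorn_mover_loss(p, q, anchors, eps=0.1, n_iters=200):
    """Entropy-regularized optimal-transport (Sinkhorn) distance between two
    discrete distributions p, q defined on unit-sphere anchor points.
    Illustrative sketch only; not the exact loss used in EMLight/GMLight."""
    # Ground cost: angular (geodesic) distance between every pair of anchors.
    cos = np.clip(anchors @ anchors.T, -1.0, 1.0)
    C = np.arccos(cos)                      # (N, N) geodesic distances
    K = np.exp(-C / eps)                    # Gibbs kernel
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iters):                # Sinkhorn fixed-point iterations
        u = p / (K @ v + 1e-12)
        v = q / (K.T @ u + 1e-12)
    T = u[:, None] * K * v[None, :]         # transport plan
    return float(np.sum(T * C))             # approximate mover's distance

# Usage: 64 anchor directions on the sphere, two normalized intensity distributions.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(64, 3))
anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)
p = rng.random(64); p /= p.sum()
q = rng.random(64); q /= q.sum()
print(sinkhorn_mover_loss(p, q, anchors))
```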

Sparse Needlets for Lighting Estimation with Spherical Transport Loss

NeedleLight is presented, a new lighting estimation model that represents illumination with needlets and thus estimates lighting jointly in the frequency and spatial domains; a new metric is also proposed that is concise yet effective, directly evaluating the estimated illumination maps rather than rendered images.

Deep Neural Models for Illumination Estimation and Relighting: A Survey

This contribution brings together current advances at this intersection in a coherent manner, organized into three categories: scene illumination estimation, relighting with reflectance-aware scene-specific representations, and relighting as image-to-image transformations.

Rendering-Aware HDR Environment Map Prediction from a Single Image

This work builds a generative adversarial network to synthesize an HDR environment map that enables realistic rendering effects; it explicitly accounts for the rendering effect by supervising the networks with rendering losses in both stages, applied to the predicted environment map as well as the hybrid illumination representation.
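To make the notion of a rendering loss concrete, here is a minimal sketch that renders a set of Lambertian normals under a predicted and a ground-truth equirectangular HDR environment map and penalizes the L1 difference of the renderings. The diffuse-only renderer and the L1 penalty are simplifying assumptions for illustration; the paper's actual renderer and losses may differ.

```python
import numpy as np

def diffuse_render(env, normals):
    """Lambertian shading (up to an albedo/pi constant) for surface normals (N, 3)
    under an equirectangular HDR environment map env (H, W, 3). Sketch only."""
    H, W, _ = env.shape
    theta = (np.arange(H) + 0.5) / H * np.pi          # polar angle per row
    phi = (np.arange(W) + 0.5) / W * 2.0 * np.pi      # azimuth per column
    st, ct = np.sin(theta), np.cos(theta)
    # Light direction for every env-map texel, plus its solid angle.
    dirs = np.stack([st[:, None] * np.cos(phi)[None, :],
                     st[:, None] * np.sin(phi)[None, :],
                     ct[:, None] * np.ones_like(phi)[None, :]], axis=-1)  # (H, W, 3)
    solid_angle = (st * (np.pi / H) * (2.0 * np.pi / W))[:, None]         # (H, 1)
    cosine = np.clip(normals @ dirs.reshape(-1, 3).T, 0.0, None)          # (N, H*W)
    weights = (env * solid_angle[..., None]).reshape(-1, 3)               # (H*W, 3)
    return cosine @ weights                                               # (N, 3)

def rendering_loss(env_pred, env_gt, normals):
    """L1 gap between renderings under predicted and ground-truth environment maps."""
    return float(np.abs(diffuse_render(env_pred, normals)
                        - diffuse_render(env_gt, normals)).mean())
```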

Designing an Illumination-Aware Network for Deep Image Relighting

An Illumination-Aware Network (IAN) is presented that follows the guidance of hierarchical sampling to progressively relight a scene from a single image with high efficiency, and introduces a depth-guided geometry encoder that acquires valuable geometry- and structure-related representations when depth information is available.

IRISformer: Dense Vision Transformers for Single-Image Inverse Rendering in Indoor Scenes

This work proposes a transformer architecture to simultaneously estimate depths, normals, spatially-varying albedo, roughness and lighting from a single image of an indoor scene, enabling applications like object insertion and material editing in a single unconstrained real image, with greater photorealism than prior works.

Editable Indoor Lighting Estimation

Quantitative and qualitative results show that the approach makes indoor lighting estimation easier to handle by a casual user, while still producing competitive results.

VMRF: View Matching Neural Radiance Fields

VMRF is designed, an innovative view-matching NeRF that enables effective NeRF training without requiring prior knowledge of camera poses or camera pose distributions, and outperforms the state of the art qualitatively and quantitatively by large margins.

Towards Realistic 3D Embedding via View Alignment

An innovative View Alignment GAN is presented that composes new images by embedding 3D models into 2D background images realistically and automatically, achieving high-fidelity composition qualitatively and quantitatively compared with state-of-the-art generation methods.

Auto-regressive Image Synthesis with Integrated Quantization

This paper designs an integrated quantization scheme with a variational regularizer that mingles feature discretization across multiple domains and markedly boosts auto-regressive modeling performance, and it designs a Gumbel sampling strategy that incorporates distribution uncertainty into the auto-regressive training procedure.
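For reference, Gumbel sampling in this context typically means the Gumbel-softmax trick, which draws differentiable "soft" categorical samples from codebook logits. The sketch below shows the generic technique with hypothetical shapes (a 512-entry codebook); it is not the paper's exact integration into the quantization scheme.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Draw a differentiable 'soft' one-hot sample from categorical logits
    using the Gumbel-softmax trick. Sketch of the general technique only."""
    rng = rng or np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=logits.shape)))
    y = (logits + gumbel) / tau                     # perturb and temper the logits
    y = np.exp(y - y.max(axis=-1, keepdims=True))   # numerically stable softmax
    return y / y.sum(axis=-1, keepdims=True)

# Usage: soft codebook assignments for a batch of 4 feature vectors.
logits = np.random.default_rng(0).normal(size=(4, 512))   # hypothetical 512-entry codebook
soft_codes = gumbel_softmax_sample(logits, tau=0.7)
print(soft_codes.shape, soft_codes.sum(axis=-1))           # each row sums to 1
```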

References

Showing 1-10 of 53 references.

EMLight: Lighting Estimation via Spherical Distribution Approximation

A novel spherical mover's loss is designed that guides the network to regress light distribution parameters accurately by exploiting the subtleties of spherical distributions; guided by the predicted spherical distribution, light intensity, and ambient term, a neural projector then synthesizes panoramic illumination maps with realistic light frequencies.

Sparse Needlets for Lighting Estimation with Spherical Transport Loss

NeedleLight is presented, a new lighting estimation model that represents illumination with needlets and thus estimates lighting jointly in the frequency and spatial domains; a new metric is also proposed that is concise yet effective, directly evaluating the estimated illumination maps rather than rendered images.

Neural Illumination: Lighting Prediction for Indoor Environments

S. Song and T. Funkhouser. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
This paper proposes "Neural Illumination," a new approach that decomposes illumination prediction into several simpler differentiable sub-tasks: 1) geometry estimation, 2) scene completion, and 3) LDR-to-HDR estimation.
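As a point of reference for the LDR-to-HDR sub-task, the sketch below shows what a hand-crafted inverse tone mapping (undoing display gamma and an exposure scale) looks like; Neural Illumination learns this mapping with a network, so the fixed gamma and exposure here are purely illustrative assumptions.

```python
import numpy as np

def naive_inverse_tone_map(ldr, gamma=2.2, exposure=1.0):
    """Hand-crafted LDR-to-HDR expansion: undo display gamma, then undo exposure.
    Neural Illumination learns this mapping instead; this sketch only illustrates
    what the LDR-to-HDR sub-task has to invert."""
    linear = np.clip(ldr, 0.0, 1.0) ** gamma   # undo display gamma
    return linear / max(exposure, 1e-6)        # undo the capture exposure
```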

Deep Parametric Indoor Lighting Estimation

It is demonstrated, via quantitative and qualitative evaluations, that the representation and training scheme lead to more accurate results compared to previous work, while allowing for more realistic 3D object compositing with spatially-varying lighting.

Shape and Illumination from Shading using the Generic Viewpoint Assumption

A novel linearized Spherical Harmonics (SH) shading model is proposed that enables a computationally efficient form of the generic-viewpoint-assumption (GVA) term, and a model whose unknowns are shape and SH illumination is built, requiring fewer assumptions than competing methods.
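For context, the standard second-order SH shading model that such a linearization builds on computes diffuse irradiance as a linear function of nine SH lighting coefficients evaluated at the surface normal. The sketch below implements that textbook model (Ramamoorthi and Hanrahan's clamped-cosine convolution); it is not the paper's linearized GVA formulation.

```python
import numpy as np

# Clamped-cosine convolution factors per SH band (bands 0, 1, 2).
A = np.array([np.pi, 2.0 * np.pi / 3.0, np.pi / 4.0])

def sh_basis(n):
    """Real spherical-harmonics basis (9 terms, bands 0-2) for unit normals n (N, 3)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ], axis=1)                                               # (N, 9)

def sh_irradiance(coeffs, normals):
    """Diffuse irradiance at each normal from 9 SH lighting coefficients (9, 3)."""
    band = np.array([0, 1, 1, 1, 2, 2, 2, 2, 2])             # band index of each term
    return sh_basis(normals) @ (coeffs * A[band][:, None])   # (N, 3)
```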

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination

A deep learning model is proposed that estimates a 3D volumetric RGBA model of a scene, including content outside the observed field of view, and then uses standard volume rendering to estimate the incident illumination at any 3D location within that volume.
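The "standard volume rendering" step amounts to front-to-back alpha compositing of RGBA samples along each ray cast through the predicted volume. A minimal sketch, with randomly generated samples standing in for the predicted volume:

```python
import numpy as np

def composite_ray(rgb, alpha):
    """Front-to-back alpha compositing of RGBA samples along one ray.
    rgb: (S, 3) sample colors, alpha: (S,) sample opacities."""
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = transmittance * alpha            # contribution of each sample
    return weights @ rgb                       # accumulated radiance along the ray (3,)

# Usage: incident illumination at a 3D location can be estimated by compositing
# rays cast in many directions through the predicted RGBA volume.
rng = np.random.default_rng(0)
samples_rgb = rng.random((64, 3))
samples_alpha = rng.random(64) * 0.1
print(composite_ray(samples_rgb, samples_alpha))
```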

Neural Inverse Rendering of an Indoor Scene From a Single Image

This work proposes the first learning-based approach that jointly estimates albedo, normals, and lighting of an indoor scene from a single image, and uses physically-based rendering to create a large-scale synthetic dataset, named SUNCG-PBR, which is a significant improvement over prior datasets.

Deep Outdoor Illumination Estimation

It is demonstrated that the approach recovers plausible illumination conditions, enables photorealistic virtual object insertion from a single image, and significantly outperforms previous solutions to this problem.

Estimating the Natural Illumination Conditions from a Single Outdoor Image

Given a single outdoor image, a method for estimating the likely illumination conditions of the scene is presented, and it is shown how to realistically insert synthetic 3-D objects into the scene and how to transfer appearance across images while keeping the illumination consistent.

Learning to predict indoor illumination from a single image

An end-to-end deep neural network is trained that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting, which makes it possible to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods.
...