Learning High Dynamic Range from Outdoor Panoramas

@article{Zhang2017LearningHD,
  title={Learning High Dynamic Range from Outdoor Panoramas},
  author={Jinsong Zhang and Jean-François Lalonde},
  journal={2017 IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
  pages={4529-4538}
}
Outdoor lighting has extremely high dynamic range. This makes the process of capturing outdoor environment maps notoriously challenging since special equipment must be used. In this work, we propose an alternative approach. We first capture lighting with a regular, LDR omnidirectional camera, and aim to recover the HDR after the fact via a novel, learning-based inverse tonemapping method. We propose a deep autoencoder framework which regresses linear, high dynamic range data from non-linear… 
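
The core idea, regressing linear HDR radiance from a non-linear, saturated LDR panorama with a deep autoencoder, can be illustrated with a minimal Python sketch. The layer sizes, ELU activations, and the choice to predict log radiance are illustrative assumptions, not the authors' exact architecture.

# Minimal sketch of an LDR-to-HDR autoencoder (illustrative assumptions throughout;
# not the paper's exact model).
import torch
import torch.nn as nn

class LDR2HDRAutoencoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Encoder: compress the non-linear LDR panorama into a latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ELU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ELU(),
            nn.Conv2d(64, latent_dim, 4, stride=2, padding=1), nn.ELU(),
        )
        # Decoder: regress log radiance (assumed here to tame the sun's extreme range).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 64, 4, stride=2, padding=1), nn.ELU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ELU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, ldr):            # ldr: (B, 3, H, W) in [0, 1]
        log_hdr = self.decoder(self.encoder(ldr))
        return torch.exp(log_hdr)      # linear, high dynamic range radiance

Training such a model would minimize a regression loss (for example L1 or L2 on log radiance) against captured HDR panoramas.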

High-Dynamic-Range Lighting Estimation From Face Portraits

It is shown that the predicted HDR environment maps can be used as accurate illumination sources for scene renderings, with potential applications in 3D object insertion for augmented reality.

Casual Indoor HDR Radiance Capture from Omnidirectional Images

It is shown that the HDR images produced by PanoHDR-NeRF can synthesize correct lighting effects, enabling the augmentation of indoor scenes with synthetic objects that are lit correctly, and that the method predicts plausible radiance from any scene point.

DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality

The authors' inference runs at interactive frame rates on a mobile device, enabling realistic rendering of virtual objects into real scenes for mobile mixed reality, and improves the realism of rendered objects compared to state-of-the-art methods for both indoor and outdoor scenes.

Spatially-Varying Outdoor Lighting Estimation from Intrinsics

This work trains a deep neural network to regress intrinsic cues with physically-based constraints and uses them to conduct global and local lighting estimation; it introduces the SOLID-Img dataset and a neural network for spatially-varying outdoor lighting estimation from a single outdoor image at any 2D pixel location.

NeuLighting: Neural Lighting for Free Viewpoint Outdoor Scene Relighting with Unconstrained Photo Collections

The high-fidelity renderings under novel views and illumination prove the superiority of the NeuLighting method against state-of-the-art relighting solutions.

Dual attention autoencoder for all-weather outdoor lighting estimation

A novel dual attention autoencoder with two independent branches compresses the sun and sky lighting information, respectively, from an input HDR panorama, enabling more accurate lighting estimation and showing superiority over the state of the art.

Deep Sky Modeling for Single Image Outdoor Lighting Estimation

This work proposes a data-driven learned sky model, which is used for outdoor lighting estimation from a single image, and shows that it can be used to recover plausible illumination, leading to visually pleasant virtual object insertions.

MergeNet: Single High Dynamic Range Image Reconstruction Method

An improved deep merger network (MergeNet) is proposed to reconstruct an HDR image from a single filtered low dynamic range (FLDR) image, combining the feature extraction ability of deep learning methods with the band transmission characteristics of optical filters.

HDR image reconstruction from a single exposure using deep CNNs

This paper addresses the problem of predicting information that has been lost in saturated image areas in order to enable HDR reconstruction from a single exposure, and proposes a deep convolutional neural network (CNN) specifically designed to account for the challenges in predicting HDR values.
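
A common strategy in this single-exposure setting is to trust the well-exposed input pixels and blend the network's prediction only into saturated regions. The sketch below assumes a saturation threshold and a simple gamma-based inverse camera curve; both are illustrative choices, not the paper's exact formulation.

# Sketch: blend predicted HDR radiance into the saturated regions of a single
# LDR exposure. The threshold tau and gamma linearization are assumptions.
import numpy as np

def blend_hdr(ldr, predicted_hdr, tau=0.95, gamma=2.2):
    """ldr: float image in [0, 1], shape (H, W, 3); predicted_hdr: linear radiance."""
    linear_input = ldr ** gamma                        # approximate linearization
    # Per-pixel blend weight: 0 for well-exposed pixels, approaching 1 near saturation.
    alpha = np.clip(ldr.max(axis=-1, keepdims=True) - tau, 0.0, None) / (1.0 - tau)
    # Keep trusted input content; fill saturated areas with the network prediction.
    return (1.0 - alpha) * linear_input + alpha * predicted_hdr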

LHDR: HDR Reconstruction for Legacy Content using a Lightweight DNN

This work proposes a lightweight DNN-based method trained to handle legacy SDR content and shows that it achieves appealing performance with minimal computational cost compared with other methods.
...

References

Showing 1-10 of 39 references

Deep Outdoor Illumination Estimation

It is demonstrated that the approach allows the recovery of plausible illumination conditions, enables photorealistic virtual object insertion from a single image, and significantly outperforms previous solutions to this problem.

Direct HDR capture of the sun and sky

An adaptive exposure range adjustment technique is presented that minimizes the number of exposures necessary to capture the extreme dynamic range of natural illumination environments that include the sun and sky, which has long presented a challenge for traditional high dynamic range photography processes.
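
The exposure-count arithmetic behind such a scheme can be sketched as follows; the sensor range, stop overlap, and base shutter speed are assumed values for illustration, not the paper's measurements.

# Sketch: choose a minimal exposure bracket covering a scene's dynamic range.
# All parameter values are illustrative assumptions.
import math

def exposure_bracket(scene_stops=22.0, sensor_stops=10.0, overlap_stops=2.0,
                     base_shutter=1.0 / 8000.0):
    """Return shutter times (seconds) whose combined range spans scene_stops."""
    step = sensor_stops - overlap_stops            # usable stops gained per extra shot
    n = max(1, math.ceil((scene_stops - sensor_stops) / step) + 1)
    return [base_shutter * (2 ** (i * step)) for i in range(n)]

print(exposure_bracket())   # e.g. three shutter times spanning roughly 22+ stops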

A versatile HDR video production system

This work presents an optical architecture for HDR imaging that allows simultaneous capture of high, medium, and low-exposure images on three sensors at high fidelity with efficient use of the available light and presents an HDR merging algorithm to complement this architecture.
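
A standard way to merge simultaneously captured exposures is a per-pixel weighted average in the linear domain. The hat-shaped weighting below is a common (Debevec-style) choice and stands in for, rather than reproduces, the paper's merging algorithm.

# Sketch of a weighted HDR merge from three simultaneous exposures.
# The hat weighting and the use of relative exposure times are assumptions.
import numpy as np

def merge_hdr(images, exposure_times):
    """images: list of linearized float images in [0, 1]; exposure_times: seconds."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)    # trust mid-tones, downweight extremes
        num += w * img / t                   # radiance estimate from this exposure
        den += w
    return num / np.maximum(den, 1e-6)       # weighted average radiance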

Estimating the Natural Illumination Conditions from a Single Outdoor Image

Given a single outdoor image, a method for estimating the likely illumination conditions of the scene is presented and it is shown how to realistically insert synthetic 3-D objects into the scene, and how to transfer appearance across images while keeping the illumination consistent.

Interactive HDR Environment Map Capturing on Mobile Devices

This paper presents a novel method for capturing environmental illumination with a mobile device; it requires only a consumer mobile phone, and the result can be instantly used for rendering or material estimation.

Depth Map Prediction from a Single Image using a Multi-Scale Deep Network

This paper employs two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally; a scale-invariant error is applied to help measure depth relations rather than scale.
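
The scale-invariant log error can be written as D(y, y*) = (1/n) Σ d_i^2 − (λ/n^2) (Σ d_i)^2 with d_i = log y_i − log y*_i; the sketch below implements this form, with the λ weighting treated as a tunable assumption.

# Sketch of the scale-invariant log error for comparing depth maps.
import numpy as np

def scale_invariant_error(pred, target, lam=1.0, eps=1e-8):
    """pred, target: positive depth maps of identical shape."""
    d = np.log(pred + eps) - np.log(target + eps)
    n = d.size
    # First term: mean squared per-pixel log difference; second term removes the
    # component explained by a single global scale factor.
    return (d ** 2).mean() - lam * (d.sum() ** 2) / (n ** 2)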

Context Encoders: Feature Learning by Inpainting

It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.

Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

A generative parametric model is presented that is capable of producing high-quality samples of natural images, using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
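
The Laplacian pyramid underpinning this coarse-to-fine scheme decomposes an image into band-pass residuals plus a low-frequency base, and collapsing the pyramid recovers the image. The sketch below uses OpenCV resampling and an arbitrary pyramid depth; it illustrates the pyramid representation itself, not the adversarial training.

# Sketch: build and collapse a Laplacian pyramid (the representation LAPGAN
# generates into). Pyramid depth and OpenCV resampling are assumed choices.
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels=4):
    pyramid, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)       # band-pass residual at this scale
        current = down
    pyramid.append(current)                # low-frequency base
    return pyramid

def collapse_laplacian_pyramid(pyramid):
    img = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        img = cv2.pyrUp(img, dstsize=(residual.shape[1], residual.shape[0])) + residual
    return img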

Scribbler: Controlling Deep Image Synthesis with Sketch and Color

A deep adversarial image synthesis architecture conditioned on sketched boundaries and sparse color strokes is proposed to generate realistic cars, bedrooms, or faces, and a sketch-based image synthesis system is demonstrated that allows users to scribble over the sketch to indicate preferred colors for objects.

Content-adaptive inverse tone mapping

This paper provides a "histogram-based" method for inverse tone mapping that contains a content-adaptive inverse tone mapping operator with different responses for different scene characteristics; scene classification is included in the algorithm to select the environment parameters.
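
A minimal version of this idea expands an LDR image with a global curve whose parameters are chosen from simple image statistics. The brightness cue, the exponent range, and the peak radiance below are illustrative assumptions standing in for the paper's scene classification and parameter selection.

# Sketch of a statistics-driven inverse tone mapping expansion.
# The brightness cue, exponent range, and peak radiance are assumptions.
import numpy as np

def inverse_tonemap(ldr, max_radiance=1000.0):
    """ldr: float image in [0, 1]; returns an expanded, linear-radiance estimate."""
    luminance = ldr.mean(axis=-1)
    bright_fraction = np.mean(luminance > 0.8)   # crude histogram-based scene cue
    gamma = 2.0 + 4.0 * bright_fraction          # brighter scenes expand more aggressively
    return max_radiance * np.power(ldr, gamma)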