HDRVideo-GAN: deep generative HDR video reconstruction

@inproceedings{Anand2021HDRVideoGANDG,
  title={HDRVideo-GAN: deep generative HDR video reconstruction},
  author={Mrinal Anand and Nidhin Harilal and Chandan Kumar and Shanmuganathan Raman},
  booktitle={Proceedings of the Twelfth Indian Conference on Computer Vision, Graphics and Image Processing},
  year={2021}
}
  • Published 22 October 2021
  • Computer Science
High dynamic range (HDR) videos provide a more visually realistic experience than standard low dynamic range (LDR) videos. Despite significant progress in HDR imaging, capturing high-quality HDR video with a conventional off-the-shelf camera remains challenging. Existing approaches rely entirely on dense optical flow between neighboring LDR frames to reconstruct an HDR frame. However, they lead to inconsistencies in color and exposure over time when applied…
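The abstract builds on the classic idea of fusing LDR frames captured at alternating exposures into one HDR radiance map. Below is a minimal, hand-written sketch of that merge step, assuming the frames are already aligned (e.g. by optical flow) and the exposure times are known; the function name, triangle weighting, and gamma value are illustrative choices, not the paper's actual pipeline.

```python
import numpy as np

def merge_ldr_to_hdr(frames, exposures, gamma=2.2):
    """Merge aligned LDR frames (values in [0, 1]) into one HDR radiance map.

    Each frame is linearized with an assumed display gamma, normalized by its
    exposure time, and blended with a triangle weight that trusts mid-tone
    pixels most (saturated and underexposed pixels get low weight).
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for ldr, t in zip(frames, exposures):
        linear = ldr.astype(np.float64) ** gamma      # undo display gamma
        w = 1.0 - np.abs(2.0 * ldr - 1.0)             # triangle weight, peak at 0.5
        num += w * linear / t                          # exposure-normalized radiance
        den += w
    return num / np.maximum(den, 1e-8)

# Two synthetic 2x2 "frames" of the same scene at 1x and 4x exposure:
radiance = np.array([[0.1, 0.2], [0.05, 0.15]])
short = np.clip(radiance * 1.0, 0, 1) ** (1 / 2.2)
long_ = np.clip(radiance * 4.0, 0, 1) ** (1 / 2.2)
hdr = merge_ldr_to_hdr([short, long_], exposures=[1.0, 4.0])
```

With no clipping in either synthetic frame, the merge recovers the underlying radiance map exactly; in real footage, the weighting matters precisely because each exposure clips a different part of the range.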


References

Showing 1–10 of 63 references
Deep HDR Video from Sequences with Alternating Exposures
TLDR
This paper models the entire HDR video reconstruction process with two sequential convolutional neural networks; it produces high-quality HDR videos and is an order of magnitude faster than state-of-the-art techniques for sequences with two and three alternating exposures.
Patch-based high dynamic range video
TLDR
This work proposes a new approach to HDR reconstruction from alternating-exposure video sequences that combines the advantages of optical flow with recently introduced patch-based synthesis for HDR images, yielding a novel reconstruction algorithm that produces high-quality HDR videos with a standard camera.
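The flow-based methods above all depend on one alignment primitive: backward-warping a neighboring frame by a dense optical-flow field before merging. The toy sketch below shows that warp with bilinear interpolation; it is a generic stand-in for illustration, not any cited paper's exact code.

```python
import numpy as np

def warp_with_flow(frame, flow):
    """Backward-warp `frame` by a dense optical-flow field.

    The output pixel at (y, x) samples frame[y + flow_y, x + flow_x] with
    bilinear interpolation, clamping samples to the image border.
    """
    h, w = frame.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(ys + flow[..., 0], 0, h - 1)
    sx = np.clip(xs + flow[..., 1], 0, w - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = sy - y0, sx - x0
    top = frame[y0, x0] * (1 - wx) + frame[y0, x1] * wx
    bot = frame[y1, x0] * (1 - wx) + frame[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# A constant flow of (0, +1) shifts image content one pixel to the left:
img = np.arange(9, dtype=float).reshape(3, 3)
flow = np.zeros((3, 3, 2)); flow[..., 1] = 1.0
warped = warp_with_flow(img, flow)
```

Errors in the estimated flow propagate directly into the merged HDR frame, which is exactly the temporal color/exposure inconsistency the main paper argues against relying on.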
HDR image reconstruction from a single exposure using deep CNNs
TLDR
This paper addresses the problem of predicting information that has been lost in saturated image areas, enabling HDR reconstruction from a single exposure, and proposes a deep convolutional neural network (CNN) designed specifically around the challenges of predicting HDR values.
Deep high dynamic range imaging of dynamic scenes
TLDR
A convolutional neural network is used as the learning model, and three different system architectures for the HDR merge process are compared; the system's performance is demonstrated by producing high-quality HDR images from a set of three LDR images.
Unified HDR reconstruction from raw CFA data
TLDR
This work presents a unifying approach, performing HDR assembly directly from raw sensor data in a single processing operation, which includes a spatially adaptive HDR reconstruction based on fitting local polynomial approximations to observed sensor data, using a localized likelihood approach incorporating spatially varying sensor noise.
A unified framework for multi-sensor HDR video reconstruction
Denoising vs. deblurring: HDR imaging techniques using moving cameras
TLDR
An approach that combines optical flow and image denoising algorithms for HDR imaging, enabling sharp HDR captures with handheld cameras in complex scenes with large depth variation.
ExpandNet: A Deep Convolutional Neural Network for High Dynamic Range Expansion from Low Dynamic Range Content
TLDR
This paper presents ExpandNet, a method based on deep convolutional neural networks (CNNs) for generating HDR content from LDR content; it accepts LDR images as input and generates images with an expanded range in an end-to-end fashion.
Learning High Dynamic Range from Outdoor Panoramas
TLDR
This work first captures lighting with a regular LDR omnidirectional camera and recovers the HDR after the fact via a novel learning-based inverse tonemapping method that regresses linear, high dynamic range data from non-linear, saturated, low dynamic range panoramas.
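Several entries here (single-exposure reconstruction, ExpandNet, outdoor panoramas) learn an inverse tonemapping: mapping a saturated LDR image back to linear HDR values. The snippet below is a crude hand-crafted stand-in for that learned mapping, shown only to make the operation concrete; the gamma, knee, and boost parameters are illustrative assumptions, not values from any of the papers.

```python
import numpy as np

def naive_inverse_tonemap(ldr, gamma=2.2, boost=4.0, knee=0.9):
    """Expand an LDR image (values in [0, 1]) toward an HDR range.

    Linearize with an assumed display gamma, then smoothly boost pixels above
    `knee`, where saturation has destroyed detail. The learned methods replace
    this fixed heuristic with a CNN that hallucinates plausible highlight
    content instead of a uniform boost.
    """
    linear = ldr.astype(np.float64) ** gamma
    # Blend factor rises from 0 at `knee` to 1 at full saturation.
    t = np.clip((ldr - knee) / (1.0 - knee), 0.0, 1.0)
    return linear * (1.0 + t * (boost - 1.0))

img = np.array([0.2, 0.5, 0.95, 1.0])
hdr = naive_inverse_tonemap(img)
```

Mid-tones pass through the plain inverse gamma unchanged, while fully saturated pixels are pushed well above 1.0, mimicking the extended range a learned regressor would produce.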
Robust patch-based HDR reconstruction of dynamic scenes
TLDR
This paper proposes a new approach to HDR reconstruction that draws information from all the exposures but is more robust to camera/scene motion than previous techniques and presents results that show considerable improvement over previous approaches.