End-to-End Differentiable Learning to HDR Image Synthesis for Multi-exposure Images

@inproceedings{Kim2020EndtoEndDL,
  title={End-to-End Differentiable Learning to HDR Image Synthesis for Multi-exposure Images},
  author={Jung Hee Kim and Siyeong Lee and So Yeon Jo and Suk-Ju Kang},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2020}
}
Recently, high dynamic range (HDR) image reconstruction based on a multiple-exposure stack generated from a single exposure has used deep learning frameworks to produce high-quality HDR images. These conventional networks focus on the exposure-transfer task of reconstructing the multi-exposure stack. As a result, they often fail to fuse the stack into a perceptually pleasing HDR image because inversion artifacts occur. We tackle the problem in stack reconstruction-based methods by…
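
To make the fusion step concrete, the sketch below shows a minimal, differentiable Debevec-style weighted merge of a multi-exposure stack into an HDR image. The gamma camera response, hat weighting function, and function names are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch (not the paper's pipeline): differentiable merge of a
# multi-exposure LDR stack into an HDR image with Debevec-style weights.
import torch

def merge_stack_to_hdr(stack, exposure_times, gamma=2.2, eps=1e-6):
    """stack: (N, 3, H, W) LDR images in [0, 1]; exposure_times: (N,) seconds."""
    # Invert an assumed gamma CRF to get per-exposure linear radiance estimates.
    linear = stack.clamp(eps, 1.0) ** gamma
    radiance = linear / exposure_times.view(-1, 1, 1, 1)

    # Hat weights favour well-exposed pixels and keep the merge differentiable.
    weights = (1.0 - (2.0 * stack - 1.0) ** 2).clamp(min=eps)

    hdr = (weights * radiance).sum(dim=0) / weights.sum(dim=0)
    return hdr  # (3, H, W) linear HDR estimate

# Example: hdr = merge_stack_to_hdr(stack, torch.tensor([1/30., 1/8., 1/2.]))
```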

Citations

KUNet: Imaging Knowledge-Inspired Single HDR Image Reconstruction

A basic knowledge-inspired block (KIB) containing three subnetworks, corresponding to the three procedures in the HDR imaging process, forms the proposed Knowledge-inspired UNet (KUNet), which achieves superior performance compared with state-of-the-art methods.

A Mixed Quantization Network for Efficient Mobile Inverse Tone Mapping

A well-performing and computationally efficient mixed quantization network (MQN) that can perform single-image ITM on mobile platforms is proposed, and the effect of different attention mechanisms, quantization schemes, and loss functions on the performance of MQN in ITM tasks is explored.
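
As background for the quantization schemes mentioned above, here is a hedged sketch of simulated ("fake") quantization with a straight-through estimator, a standard building block for quantized networks; the bit-width and clamping range are assumptions, and this is not MQN's specific scheme.

```python
# Hedged sketch of simulated ("fake") quantization with a straight-through
# estimator; not MQN's specific scheme, just the standard building block.
import torch

def fake_quantize(x, bits=8):
    """Quantize values in [0, 1] in the forward pass; pass gradients straight through."""
    levels = 2 ** bits - 1
    x = x.clamp(0.0, 1.0)
    q = torch.round(x * levels) / levels
    return x + (q - x).detach()  # straight-through estimator
```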

DenSE SwinHDR: SDRTV to HDRTV Conversion Using Densely Connected Swin Transformer With Squeeze and Excitation Module

This paper divides the SDRTV-to-HDRTV conversion problem into global and local mapping problems and introduces a new Vision Transformer architecture, DenSE-SwinHDR, which outperforms state-of-the-art methods in terms of objective scores and visual quality.

DeepHS-HDRVideo: Deep High Speed High Dynamic Range Video Reconstruction

This work proposes to use video frame interpolation for HDR video reconstruction, and presents the first method to generate high FPS HDR videos by recursively interpolating the intermediate frames.

HSVNet: Reconstructing HDR Image from a Single Exposure LDR Image with CNN

The proposed method, HSVNet, is a deep learning architecture based on a convolutional neural network (CNN) U-Net that operates in the HSV color space, which enables the network to identify saturated regions and adaptively focus on crucial components.
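
To illustrate how the HSV color space exposes over-exposed regions, the sketch below derives the V (value) channel and a soft saturation mask; the threshold and soft-mask form are illustrative assumptions, not HSVNet's exact design.

```python
# Illustrative sketch: the V channel of HSV flags over-exposed pixels,
# giving a simple saturation mask. The threshold choice is an assumption.
import numpy as np

def overexposure_mask(rgb, threshold=0.95):
    """rgb: (H, W, 3) floats in [0, 1]; returns a soft mask near 1 where saturated."""
    v = rgb.max(axis=-1)  # V channel of HSV
    return np.clip((v - threshold) / (1.0 - threshold), 0.0, 1.0)
```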

HDR-NeRF: High Dynamic Range Neural Radiance Fields

Experimental results on synthetic and real-world scenes validate that the High Dynamic Range Neural Radiance Fields (HDR-NeRF) can not only accurately control the exposures of synthesized views but also render views with a high dynamic range.

Deep Learning for HDR Imaging: State-of-the-Art and Future Trends

  • Lin Wang, Kuk-Jin Yoon
  • IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022
This study hierarchically and structurally groups existing deep HDR imaging methods into five categories based on the number/domain of input exposures, number of learning tasks, novel sensor data, novel learning strategies, and applications, and provides a constructive discussion of each category regarding its potential and challenges.

Hybrid Saturation Restoration for LDR Images of HDR Scenes

The saturated regions of the LDR image are restored by fusing model-based and data-driven approaches, and the method can be embedded in any smartphone or digital camera to produce an information-enriched LDR picture.

References

Showing 1-10 of 44 references

Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline

This work models the HDR-to-LDR image formation pipeline as dynamic range clipping, non-linear mapping from a camera response function, and quantization, and proposes to learn three specialized CNNs to reverse these steps.
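
The three formation steps named above can be written down directly; the sketch below is a simple forward model with an assumed gamma curve standing in for a learned or calibrated camera response function.

```python
# Simple forward HDR-to-LDR model: clipping, a camera response, quantization.
# The gamma curve is an assumption standing in for a learned/calibrated CRF.
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=2.2, bits=8):
    """hdr: float array of linear scene radiance; returns a quantized LDR image."""
    clipped = np.clip(hdr * exposure, 0.0, 1.0)   # dynamic range clipping
    mapped = clipped ** (1.0 / gamma)             # non-linear camera response
    levels = 2 ** bits - 1
    return np.round(mapped * levels) / levels     # quantization
```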

HDR image reconstruction from a single exposure using deep CNNs

This paper addresses the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure, and proposes a deep convolutional neural network (CNN) specifically designed to take into account the challenges in predicting HDR values.
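
A common way such methods use the prediction only where it is needed is to blend it with the linearised input under a saturation mask; the sketch below is a hedged illustration of that blending idea, with the mask form, threshold, and gamma all assumptions rather than this paper's exact formulation.

```python
# Hedged illustration of blending a network prediction into saturated regions
# only; the linear mask, threshold, and gamma are assumptions for this sketch.
import torch

def blend_prediction(ldr, predicted_hdr, gamma=2.2, tau=0.9):
    """ldr: (3, H, W) in [0, 1]; predicted_hdr: (3, H, W) linear-domain prediction."""
    linear_input = ldr ** gamma
    # Mask rises from 0 to 1 as the brightest channel goes from tau to 1.
    alpha = ((ldr.max(dim=0, keepdim=True).values - tau) / (1.0 - tau)).clamp(0.0, 1.0)
    return (1.0 - alpha) * linear_input + alpha * predicted_hdr
```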

Deep reverse tone mapping

The first deep-learning-based approach for fully automatic inference using convolutional neural networks is proposed; it can reproduce not only natural tones without introducing visible noise but also the colors of saturated pixels.

Deep Chain HDRI: Reconstructing a High Dynamic Range Image from a Single Low Dynamic Range Image

A novel deep neural network model is proposed that reconstructs an HDR image from a single low dynamic range (LDR) image; based on a convolutional neural network composed of dilated convolutional layers, it infers LDR images with various exposures and illumination from a single LDR image of the same scene.

ExpandNet: A Deep Convolutional Neural Network for High Dynamic Range Expansion from Low Dynamic Range Content

This paper presents a method for generating HDR content from LDR content based on deep convolutional neural networks (CNNs), termed ExpandNet, which accepts LDR images as input and generates images with an expanded range in an end-to-end fashion.

Deep Recursive HDRI: Inverse Tone Mapping Using Generative Adversarial Networks

The proposed method is the first framework to create high dynamic range images from an estimated multi-exposure stack using a conditional generative adversarial network structure, and its results are significantly more similar to the ground truth than those of other state-of-the-art algorithms.
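
The recursive stack estimation can be pictured as repeatedly applying exposure-transfer generators one stop at a time; in the sketch below, generator_up and generator_down are hypothetical callables standing in for trained conditional generators, not this paper's actual networks.

```python
# Hedged sketch of recursive stack generation; `generator_up`/`generator_down`
# are hypothetical stand-ins for trained exposure-transfer networks.
def build_exposure_stack(ldr, generator_up, generator_down, steps=3):
    stack, current = [ldr], ldr
    for _ in range(steps):               # step up one stop at a time
        current = generator_up(current)
        stack.append(current)
    current = ldr
    for _ in range(steps):               # step down one stop at a time
        current = generator_down(current)
        stack.insert(0, current)
    return stack                         # darkest ... original ... brightest
```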

JSI-GAN: GAN-Based Joint Super-Resolution and Inverse Tone-Mapping with Pixel-Wise Task-Specific Filters for UHD HDR Video

This paper takes a divide-and-conquer approach in designing a novel GAN-based joint SR-ITM network, called JSI-GAN, which is composed of three task-specific subnets: an image reconstruction subnet, a detail restoration subnet and a local contrast enhancement (LCE) subnet.

Attention-Guided Network for Ghost-Free High Dynamic Range Imaging

The proposed AHDRNet is a non-flow-based method, which avoids the artifacts generated by optical-flow estimation errors, and achieves state-of-the-art quantitative and qualitative results.

Adaptive dualISO HDR reconstruction

Experimental results show that the proposed filter denoises the noisy image while preserving important image features such as edges and corners, outperforming previous methods.

Zoom to Learn, Learn to Zoom

This paper shows that when applying machine learning to digital zoom, it is beneficial to operate on real RAW sensor data; it shows how to obtain such ground-truth data via optical zoom and contributes a dataset, SR-RAW, for real-world computational zoom.