Corpus ID: 229348751

HDR Denoising and Deblurring by Learning Spatio-temporal Distortion Models

Ugur Çogalan, Mojtaba Bemana, Karol Myszkowski, Hans-Peter Seidel, Tobias Ritschel
We seek to reconstruct sharp and noise-free high-dynamic range (HDR) video from a dual-exposure sensor that records different low-dynamic range (LDR) information in different pixel columns: Odd columns provide low-exposure, sharp, but noisy information; even columns complement this with less noisy, high-exposure, but motion-blurred data. Previous LDR work learns to deblur and denoise (DISTORTED → CLEAN) supervised by pairs of CLEAN and DISTORTED images. Regrettably, capturing DISTORTED… 
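The column-interleaved dual-exposure capture described above can be simulated from a stack of clean frames. A minimal sketch, assuming simple models that are illustrative only (temporal averaging for the long-exposure motion blur, a gain plus Gaussian read noise for the short exposure; the function name and parameters are not from the paper):

```python
import numpy as np

def simulate_dual_exposure(clean_frames, short_gain=0.25, noise_std=0.02):
    """Simulate a column-interleaved dual-exposure LDR capture.

    clean_frames: (T, H, W) stack of clean frames. The long exposure is
    approximated by averaging the stack (motion blur); the short exposure
    by the middle frame, scaled down and corrupted with read noise.
    """
    clean_frames = np.asarray(clean_frames, dtype=np.float32)
    sharp = clean_frames[len(clean_frames) // 2]   # short-exposure reference
    blurred = clean_frames.mean(axis=0)            # long exposure ≈ temporal average

    short = short_gain * sharp + np.random.normal(0.0, noise_std, sharp.shape)
    long_exp = blurred                             # less noisy but motion-blurred

    out = np.empty_like(sharp)
    out[:, 0::2] = short[:, 0::2]     # odd columns (1st, 3rd, …): low-exposure, sharp, noisy
    out[:, 1::2] = long_exp[:, 1::2]  # even columns: high-exposure, blurred
    return out
```

Such a simulator is the kind of tool one would use to generate DISTORTED inputs from CLEAN frames when paired supervision is unavailable.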
2 Citations


Self-Supervised Super-Resolution for Multi-Exposure Push-Frame Satellites
This work proposes a super-resolution method that handles signal-dependent noise in the inputs, processes sequences of any length, and is robust to inaccuracies in the exposure times; it can be trained end-to-end with self-supervision, making it especially suited to real data.
Deep Learning for HDR Imaging: State-of-the-Art and Future Trends
  • Lin Wang, Kuk-Jin Yoon
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2021
Overall, this paper hierarchically and structurally groups existing deep HDR imaging methods into five categories, based on: the number/domain of input exposures, the number of learning tasks, HDR imaging using novel sensor data, HDR imaging using novel learning strategies, and applications.


Deep reverse tone mapping
The first deep-learning-based approach to fully automatic reverse tone mapping using convolutional neural networks is proposed, which reproduces natural tones without introducing visible noise while also recovering the colors of saturated pixels.
Deep Joint Deinterlacing and Denoising for Single Shot Dual-ISO HDR Reconstruction
A joint multi-exposure frame deinterlacing and denoising algorithm powered by deep convolutional neural networks (DCNNs) that significantly exceeds state-of-the-art single-image HDR reconstruction algorithms.
Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning
This paper proposes a data-driven approach for jointly solving the two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures, using joint dictionary learning via sparse coding, and applies multiscale homography flow to temporal sequences for denoising.
Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring
This work proposes a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources, and presents a new large-scale dataset of pairs of realistic blurry images and the corresponding ground-truth sharp images obtained with a high-speed camera.
LSD2 - Joint Denoising and Deblurring of Short and Long Exposure Images with Convolutional Neural Networks
A novel approach is proposed based on capturing pairs of short- and long-exposure images in rapid succession and fusing them into a single high-quality photograph, enabling exposure fusion even in the presence of motion blur.
Learning to Extract Flawless Slow Motion From Blurry Videos
A data-driven approach in which the training data is captured with a high-frame-rate camera and blurry images are simulated through an averaging process; applying InterpNet recursively between pairs of sharp frames enables a further increase in frame rate without retraining the network.
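The blur-synthesis-by-averaging idea above can be sketched in a few lines. This is a minimal illustration under stated assumptions (the function name and the non-overlapping window size are my own, not the paper's exact procedure):

```python
import numpy as np

def synthesize_blur(hfr_frames, window=7):
    """Average non-overlapping windows of high-frame-rate frames to mimic
    longer-exposure motion blur; returns one blurry frame per window."""
    hfr_frames = np.asarray(hfr_frames, dtype=np.float32)
    t = len(hfr_frames)
    return np.stack([hfr_frames[i:i + window].mean(axis=0)
                     for i in range(0, t - window + 1, window)])
```

The sharp frames inside each window then serve as ground truth for training a deblurring or frame-interpolation network.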
Discriminative Non-blind Deblurring
This work proposes a discriminative model cascade for non-blind deblurring, which consists of a Gaussian CRF at each stage, based on the recently introduced regression tree fields, and trains the model by loss minimization and uses synthetically generated blur kernels to generate training data.
Spatio-Temporal Filter Adaptive Network for Video Deblurring
The proposed Spatio-Temporal Filter Adaptive Network (STFAN) takes the blurry and restored images of the previous frame as well as the blurry image of the current frame as input, and dynamically generates spatially adaptive filters for alignment and deblurring.
Learning to See in the Dark
A pipeline for processing low-light images is developed, based on end-to-end training of a fully-convolutional network that operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data.
Deep Video Deblurring for Hand-Held Cameras
This work introduces a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames, and shows that the features learned extend to deblurring motion blur that arises due to camera shake in a wide range of videos.