Corpus ID: 247939726

Clean Images are Hard to Reblur: Exploiting the Ill-Posed Inverse Task for Dynamic Scene Deblurring

Seungjun Nah, Sanghyun Son, Jaerin Lee, Kyoung Mu Lee
The goal of dynamic scene deblurring is to remove the motion blur in a given image. Typical learning-based approaches implement their solutions by minimizing the L1 or L2 distance between the output and the reference sharp image. Recent attempts adopt visual recognition features in training to improve the perceptual quality. However, those features are primarily designed to capture high-level contexts rather than low-level structures such as blurriness. Instead, we propose a more direct way to…
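The pixel-wise training objectives mentioned above can be sketched as follows. This is a minimal NumPy illustration of generic L1/L2 image losses, not the paper's implementation; the array names are assumptions:

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error between a restored image and the sharp reference."""
    return float(np.mean(np.abs(pred - target)))

def l2_loss(pred, target):
    """Mean squared error between a restored image and the sharp reference."""
    return float(np.mean((pred - target) ** 2))

# Toy 2x2 "images": a restored output and its ground-truth sharp reference.
pred = np.array([[0.0, 0.5], [1.0, 0.5]])
target = np.array([[0.0, 1.0], [1.0, 0.0]])
```

L2 penalizes large errors more heavily and tends to over-smooth, which is one motivation the abstract gives for looking beyond pixel distances.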


Attentive Fine-Grained Structured Sparsity for Image Restoration

This work proposes a novel pruning method that determines the layer-wise pruning ratio for N:M structured sparsity in an image restoration network, significantly outperforming previous pruning methods.



Reblur2Deblur: Deblurring videos via self-supervised learning

This work fine-tunes existing deblurring neural networks in a self-supervised fashion by enforcing that the output, when blurred based on the optical flow between subsequent frames, matches the input blurry image.
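The reblur-consistency idea behind this self-supervised objective can be sketched in miniature. This toy version averages temporally aligned sharp frames as the blur model and ignores the optical-flow warping the paper uses; all names here are assumptions:

```python
import numpy as np

def reblur(sharp_frames):
    # A crude blur model: averaging temporally aligned sharp frames
    # approximates the accumulation of motion during the exposure.
    return np.mean(sharp_frames, axis=0)

def reblur_consistency_loss(deblurred_frames, blurry_input):
    # Self-supervised objective: re-blurring the network outputs
    # should reproduce the originally observed blurry image.
    return float(np.mean(np.abs(reblur(deblurred_frames) - blurry_input)))
```

No sharp ground truth appears in the loss, which is what lets such an objective fine-tune a pretrained deblurring network directly on test videos.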

Deblurring by Realistic Blurring

This paper proposes a new method that combines two GAN models, a learning-to-blur GAN (BGAN) and a learning-to-deblur GAN (DBGAN), to learn a better model for image deblurring by first learning how to blur images.

Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring

This work proposes a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources and presents a new large-scale dataset that provides pairs of realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera.

Learning a Discriminative Prior for Blind Image Deblurring

This work formulates the image prior as a binary classifier, realized with a deep convolutional neural network (CNN), that distinguishes whether an input image is clear or not.

Efficient Dynamic Scene Deblurring Using Spatially Variant Deconvolution Network With Optical Flow Guided Training

  • Yuan Yuan, Wei Su, Dandan Ma
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
This paper designs an effective and real-time deblurring network using modulated deformable convolutions, which can adjust receptive fields adaptively according to the blur features, and builds a light-weighted backbone for image restoration problem.

Non-uniform Blind Deblurring by Reblurring

An approach for blind image deblurring that handles non-uniform blurs and exceeds the performance of state-of-the-art CNN-based blind deblurring methods by a significant margin, without the need for any training data.

From Motion Blur to Motion Flow: A Deep Learning Solution for Removing Heterogeneous Motion Blur

This work directly estimates the motion flow from the blurred image through a fully convolutional deep neural network (FCN) and recovers the unblurred image from the estimated motion flow; it is the first universal end-to-end mapping from a blurred image to a dense motion flow.

Motion Deblurring in the Wild

A deep learning approach to remove motion blur from a single image captured in the wild, i.e., in an uncontrolled setting; both a novel convolutional neural network architecture and a dataset of blurry images with ground truth are designed.

Test-Time Fast Adaptation for Dynamic Scene Deblurring via Meta-Auxiliary Learning

This work proposes a novel self-supervised auxiliary reconstruction task that shares a portion of the network with the primary deblurring task and proposes a meta-auxiliary training scheme to further optimize the pretrained model as a base learner, which is applicable for fast adaptation at test time.

Perceptual Losses for Real-Time Style Transfer and Super-Resolution

This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks for image transformation tasks, showing results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time.
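A perceptual loss compares images in the feature space of a fixed pretrained network rather than in pixel space. In this toy sketch, a single frozen random convolution stands in for the pretrained VGG features used in the paper; that substitution, and every name below, is an assumption for illustration:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid' 2D correlation used here as a toy feature extractor."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = float(np.sum(img[i:i + kh, j:j + kw] * kernel))
    return out

# Frozen filter standing in for pretrained (e.g. VGG) feature weights.
FIXED_KERNEL = np.random.default_rng(0).standard_normal((3, 3))

def perceptual_loss(pred, target, kernel=FIXED_KERNEL):
    # Distance is measured between feature maps, not raw pixels,
    # so the loss is sensitive to structure rather than exact intensities.
    return float(np.mean((conv2d_valid(pred, kernel) - conv2d_valid(target, kernel)) ** 2))
```

Because the feature extractor is fixed, gradients flow only into the image-generating network, which is what makes such losses usable for training feed-forward restoration and style-transfer models.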