DAVANet: Stereo Deblurring With View Aggregation

@article{Zhou2019DAVANetSD,
  title={DAVANet: Stereo Deblurring With View Aggregation},
  author={Shangchen Zhou and Jiawei Zhang and Wangmeng Zuo and Haozhe Xie and Jinshan Pan and Jimmy S. J. Ren},
  journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019},
  pages={10988-10997}
}
Nowadays, stereo cameras are more commonly adopted in emerging devices such as dual-lens smartphones and unmanned aerial vehicles. However, they also suffer from blurry images in dynamic scenes, which leads to visual discomfort and hampers further image processing. Previous works have succeeded in monocular deblurring, yet there are few studies on deblurring for stereoscopic images. By exploiting the two-view nature of stereo images, we propose a novel stereo image deblurring network with Depth… 
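
The abstract is truncated, but its core idea is aggregating information across the two views. A minimal sketch of one plausible building block is shown below: warping right-view features into the left view with an estimated disparity map and fusing them through a learned gate. Module names, shapes, and the gating design here are illustrative assumptions, not DAVANet's exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedViewAggregation(nn.Module):
    """Illustrative sketch: warp right-view features to the left view using a
    horizontal disparity map, then fuse the two views with a learned gate.
    This is an assumed form of cross-view aggregation, not DAVANet's exact module."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_left, feat_right, disp_left):
        # disp_left: (B, 1, H, W), horizontal disparity from the left to the right view
        b, c, h, w = feat_left.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, device=feat_left.device, dtype=feat_left.dtype),
            torch.arange(w, device=feat_left.device, dtype=feat_left.dtype),
            indexing="ij",
        )
        # Sample the right view at x - disparity to align it with the left view.
        x_src = xs.unsqueeze(0) - disp_left.squeeze(1)
        grid_x = 2.0 * x_src / (w - 1) - 1.0
        grid_y = 2.0 * ys.unsqueeze(0).expand_as(grid_x) / (h - 1) - 1.0
        grid = torch.stack((grid_x, grid_y), dim=-1)          # (B, H, W, 2)
        right_warped = F.grid_sample(feat_right, grid, align_corners=True)
        g = self.gate(torch.cat((feat_left, right_warped), dim=1))
        return feat_left + g * right_warped                   # gated view aggregation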


UNCONSTRAINED DYNAMIC SCENE DEBLURRING FOR DUAL-LENS CAMERAS

TLDR
This work tackles the unaddressed problem of unconstrained dual-lens (DL) dynamic scene deblurring using an image-adaptive, multi-scale coherent fusion approach, and addresses the root cause of view inconsistency in a generic DL deblurring network with a coherent fusion module.

Towards Stereoscopic Video Deblurring Using Deep Convolutional Networks

TLDR
A novel stereoscopic video deblurring model that considers consecutive left and right video frames is presented; it can effectively deblur blurry stereoscopic videos.

Deep Dynamic Scene Deblurring for Unconstrained Dual-Lens Cameras

TLDR
This paper addresses an inherent problem in unconstrained DL deblurring that disrupts scene-consistent disparities by introducing a memory-efficient adaptive scale-space approach, and proposes a module to address the space-variant and image-dependent nature of dynamic scene blur.

Context Module Based Multi-patch Hierarchical Network for Motion Deblurring

TLDR
A novel end-to-end network based on the deep hierarchical multi-patch architecture, integrated with a context module and additional ResBlocks, is proposed to tackle the deblurring problem, and the results demonstrate the effectiveness of the context module for single-image blur removal.

Learning Dual-Pixel Alignment for Defocus Deblurring

TLDR
Experimental results on real-world datasets show that the proposed DPANet is notably superior to state-of-the-art deblurring methods in reducing defocus blur while recovering visually plausible sharp structures and textures.

Depth-Guided Dense Dynamic Filtering Network for Bokeh Effect Rendering

TLDR
The proposed network is composed of an efficient densely connected encoder-decoder backbone structure with a pyramid pooling module that leverages the task-specific efficacy of joint intensity estimation and dynamic filter synthesis for the spatially-aware blurring process.
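
A pyramid pooling module of the kind mentioned above (PSPNet-style) pools the feature map at several grid sizes, projects each pooled map, upsamples it back, and concatenates the results with the input. The sketch below is a generic version under assumed bin sizes and channel reduction, not this paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Generic PSP-style pyramid pooling: pool at several scales, reduce channels,
    upsample back to the input resolution, and concatenate with the input."""
    def __init__(self, in_channels, bins=(1, 2, 3, 6)):
        super().__init__()
        reduced = in_channels // len(bins)
        self.stages = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),
                nn.Conv2d(in_channels, reduced, kernel_size=1, bias=False),
                nn.ReLU(inplace=True),
            )
            for b in bins
        )
        self.project = nn.Conv2d(in_channels + reduced * len(bins), in_channels, 3, padding=1)

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [
            F.interpolate(stage(x), size=(h, w), mode="bilinear", align_corners=False)
            for stage in self.stages
        ]
        return self.project(torch.cat([x] + pooled, dim=1))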

LEDNet: Joint Low-light Enhancement and Deblurring in the Dark

TLDR
A novel data synthesis pipeline is introduced that models realistic low-light blurring degradations, especially for blurs in saturated regions, e.g., light streaks, that often appear in the night images.

Spatio-Temporal Filter Adaptive Network for Video Deblurring

TLDR
The proposed Spatio-Temporal Filter Adaptive Network (STFAN) takes both the blurry and restored images of the previous frame as well as the blurry image of the current frame as input, and dynamically generates spatially adaptive filters for alignment and deblurring.
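
The spatially adaptive filtering described here can be implemented with an unfold-and-weight operation: each output pixel is a weighted sum of its k x k neighbourhood, with the weights predicted per pixel by a small network. The sketch below shares one filter across channels for brevity, which is a simplification of STFAN's filter adaptive convolution rather than its exact formulation.

import torch
import torch.nn.functional as F

def apply_per_pixel_filters(features, filters, kernel_size=5):
    """features: (B, C, H, W); filters: (B, k*k, H, W), one k x k filter per pixel
    (shared across channels here for brevity). Returns the filtered features."""
    b, c, h, w = features.shape
    pad = kernel_size // 2
    # Unfold local k x k patches: (B, C*k*k, H*W) -> (B, C, k*k, H, W)
    patches = F.unfold(features, kernel_size, padding=pad)
    patches = patches.view(b, c, kernel_size * kernel_size, h, w)
    weights = filters.view(b, 1, kernel_size * kernel_size, h, w)
    return (patches * weights).sum(dim=2)                      # (B, C, H, W)

# In STFAN's setting, the per-pixel filters would be predicted by a CNN conditioned
# on the previous restored frame and the current blurry frame, as the summary states.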

Parallax Attention for Unsupervised Stereo Correspondence Learning

TLDR
A generic parallax-attention mechanism (PAM) is proposed to capture stereo correspondence regardless of disparity variations, and experimental results show that the PAM is generic and can effectively learn stereo correspondence under large disparity variations in an unsupervised manner.
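
Parallax attention restricts matching to the epipolar line: each left-view position attends over all columns of the same row in the right view, so no maximum disparity has to be fixed in advance. A minimal sketch of that row-wise cross-view attention follows; the 1x1 projection layers and scaling are illustrative choices.

import torch
import torch.nn as nn

class ParallaxAttention(nn.Module):
    """Row-wise cross-view attention: every left-view position attends over
    all horizontal positions of the same row in the right view."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, feat_left, feat_right):
        b, c, h, w = feat_left.shape
        q = self.query(feat_left).permute(0, 2, 3, 1).reshape(b * h, w, c)   # (B*H, W, C)
        k = self.key(feat_right).permute(0, 2, 1, 3).reshape(b * h, c, w)    # (B*H, C, W)
        v = self.value(feat_right).permute(0, 2, 3, 1).reshape(b * h, w, c)  # (B*H, W, C)
        attn = torch.softmax(torch.bmm(q, k) / c ** 0.5, dim=-1)             # (B*H, W, W)
        out = torch.bmm(attn, v).reshape(b, h, w, c).permute(0, 3, 1, 2)
        return out, attn   # attn acts as a soft, per-row stereo correspondence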

Attention Enhanced Multi-patch Deformable Network for Image Deblurring

TLDR
An enhanced network based on a four-level multi-patch hierarchy, which divides the image into patches instead of down-sampling or other lossy operations, is proposed; it improves PSNR by nearly 0.4 dB on the GoPro test set and also achieves clearer visual results on several scenes.
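
In a multi-patch hierarchy of this kind, the lower level processes non-overlapping patches independently and its reassembled features are passed to the full-image level. A reduced two-level sketch is shown below, under the assumption of an encoder that preserves spatial size and a decoder that predicts a residual image; the real networks use four levels and level-specific weights.

import torch
import torch.nn as nn

class TwoLevelMultiPatch(nn.Module):
    """Simplified two-level multi-patch hierarchy: level 2 encodes the two image
    halves independently, level 1 refines the whole image using level-2 features."""
    def __init__(self, encoder, decoder):
        super().__init__()
        # encoder: image -> feature map (assumed to keep the spatial size)
        # decoder: feature map -> 3-channel residual image of the same size
        self.encoder, self.decoder = encoder, decoder

    def forward(self, blurry):
        top, bottom = torch.chunk(blurry, 2, dim=2)      # split along the height
        feat2 = torch.cat((self.encoder(top), self.encoder(bottom)), dim=2)
        feat1 = self.encoder(blurry) + feat2             # pass level-2 features upward
        return blurry + self.decoder(feat1)              # predict a sharp residual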

References

Showing 1-10 of 44 references.

Simultaneous Stereo Video Deblurring and Scene Flow Estimation

TLDR
This paper proposes a novel approach to deblurring from stereo videos that exploits the piece-wise planar assumption about the scene and leverages the scene flow information to deblur the image, achieving significant improvement in flow estimation and blur removal over state-of-the-art methods.

Joint Estimation of Camera Pose, Depth, Deblurring, and Super-Resolution from a Blurred Image Sequence

TLDR
This paper proposes a pioneering unified framework that solves four problems simultaneously, namely, dense depth reconstruction, camera pose estimation, super-resolution, and deblurring, by reflecting a physical imaging process and solving the cost minimization problem using an alternating optimization technique.

Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring

TLDR
This work proposes a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources, and presents a new large-scale dataset that provides pairs of a realistic blurry image and the corresponding ground-truth sharp image obtained with a high-speed camera.
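
Multi-scale (coarse-to-fine) deblurring of this kind restores a downsampled copy of the image first, then feeds the upsampled estimate to the next finer scale together with the blurry input at that scale. A schematic loop is sketched below, assuming a per-scale network deblur_net that accepts the 6-channel concatenation and returns a 3-channel estimate; the original method uses a separate sub-network per scale, while a single shared one is used here for brevity.

import torch
import torch.nn.functional as F

def coarse_to_fine_deblur(deblur_net, blurry, num_scales=3):
    """Run a (shared, illustrative) deblurring network from the coarsest scale to
    the finest, feeding each scale the blurry image at that scale concatenated
    with the upsampled estimate from the previous, coarser scale."""
    h, w = blurry.shape[2:]
    # Image pyramid, coarsest first.
    pyramid = [F.interpolate(blurry, size=(h // 2 ** s, w // 2 ** s),
                             mode="bilinear", align_corners=False)
               for s in reversed(range(num_scales))]
    estimate = pyramid[0]                      # initialise with the coarsest blurry image
    for level in pyramid:
        estimate = F.interpolate(estimate, size=level.shape[2:],
                                 mode="bilinear", align_corners=False)
        estimate = deblur_net(torch.cat((level, estimate), dim=1))
    return estimate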

Stereo Video Deblurring

TLDR
This paper is the first to show how the availability of stereo video can aid the challenging video deblurring task, and leverages 3D scene flow, which can be estimated robustly even under adverse conditions.

Joint Depth Estimation and Camera Shake Removal from Single Blurry Image

TLDR
This work proposes to jointly estimate scene depth and remove non-uniform blur caused by camera motion by exploiting their underlying geometric relationships, with only a single blurry image as input, and presents a unified layer-based model for depth-involved deblurring.

Deep Video Deblurring for Hand-Held Cameras

TLDR
This work introduces a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames, and shows that the learned features extend to deblurring motion blur that arises due to camera shake in a wide range of videos.

Joint Blind Motion Deblurring and Depth Estimation of Light Field

TLDR
A novel algorithm to estimate all blur model variables jointly, including latent sub-aperture image, camera motion, and scene depth from the blurred 4D light field, achieves high quality light field deblurring and depth estimation simultaneously under arbitrary 6-DOF camera motion and unconstrained scene depth.

Online Video Deblurring via Dynamic Temporal Blending Network

TLDR
An online (sequential) video deblurring method based on a spatio-temporal recurrent network that allows for real-time performance, introducing a novel architecture which extends the receptive field while keeping the overall size of the network small to enable fast execution.

Dynamic Scene Deblurring Using Spatially Variant Recurrent Neural Networks

TLDR
Quantitative and qualitative evaluations on public datasets demonstrate that the proposed method performs favorably against state-of-the-art algorithms in terms of accuracy, speed, and model size.

Non-uniform Deblurring for Shaken Images

TLDR
A new parametrized geometric model of the blurring process in terms of the rotational motion of the camera during exposure is proposed, able to capture non-uniform blur in an image due to camera shake using a single global descriptor, and can be substituted into existing deblurring algorithms with only small modifications.
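
The single global descriptor referred to above is a set of weights over camera rotations: under a rotational camera-shake model, the blurred image is a weighted sum of homography-warped copies of the sharp image. A sketch of that model in LaTeX, with symbols as read from the summary (the discretisation of the rotation set is an assumption of this sketch):

B = \sum_{\theta} w_{\theta} \, \bigl( H_{\theta} \circ I \bigr) + \varepsilon,
\qquad H_{\theta} = K R_{\theta} K^{-1}, \qquad w_{\theta} \ge 0,

where I is the sharp image, R_{\theta} a camera rotation sampled during the exposure, K the camera intrinsics, H_{\theta} \circ I the image warped by the induced homography, \varepsilon noise, and the weights w_{\theta} (roughly, how long the camera spent near each rotation) form the single global blur descriptor that can be plugged into existing deblurring algorithms.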