Iterative Filter Adaptive Network for Single Image Defocus Deblurring

@inproceedings{Lee2021IterativeFA,
  title={Iterative Filter Adaptive Network for Single Image Defocus Deblurring},
  author={Junyong Lee and Hyeongseok Son and Jaesung Rim and Sunghyun Cho and Seungyong Lee},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={2034--2042}
}
We propose a novel end-to-end learning-based approach for single image defocus deblurring. The proposed approach is equipped with a novel Iterative Filter Adaptive Network (IFAN) that is specifically designed to handle spatially-varying and large defocus blur. For adaptively handling spatially-varying blur, IFAN predicts pixel-wise deblurring filters, which are applied to defocused features of an input image to generate deblurred features. For effectively managing large blur, IFAN models… 
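The core mechanism described above — predicting a distinct deblurring filter for every pixel and applying it to the feature map — can be sketched in a few lines. This is a minimal single-channel illustration, not the authors' implementation: the `(H, W, k, k)` filter layout and the function name are assumptions for exposition (IFAN additionally predicts separable filters iteratively).

```python
import numpy as np

def apply_pixelwise_filters(features, filters):
    """Apply a distinct k x k filter at every spatial location.

    features: (H, W) single-channel feature map.
    filters:  (H, W, k, k) per-pixel filter weights (hypothetical layout).
    """
    H, W = features.shape
    k = filters.shape[-1]
    pad = k // 2
    padded = np.pad(features, pad, mode="edge")  # replicate borders
    out = np.empty_like(features, dtype=float)
    for y in range(H):
        for x in range(W):
            # each output pixel uses its own predicted filter
            patch = padded[y:y + k, x:x + k]
            out[y, x] = np.sum(patch * filters[y, x])
    return out
```

In a learned network the `filters` tensor would be the output of the filter-prediction branch; here, passing identity filters (weight 1 at the center tap) returns the input unchanged, which is a useful sanity check.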

Single-image Defocus Deblurring by Integration of Defocus Map Prediction Tracing the Inverse Problem Computation

Experimental results show that the proposed simple but effective network, with spatial modulation based on the defocus map, achieves better quantitative and qualitative performance than existing state-of-the-art methods on commonly used public test datasets.

Defocus Image Deblurring Network With Defocus Map Estimation as Auxiliary Task

This paper proposes a new network architecture called Defocus Image Deblurring Auxiliary Learning Net (DID-ANet), which is specifically designed for single image defocus deblurring and uses defocus map estimation as an auxiliary task to improve the deblurring result.

Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help Through Multi-Task Learning

This work proposes a single-image deblurring network that incorporates the two sub-aperture views of a dual-pixel (DP) capture into a multi-task framework, and shows that jointly learning to predict the two DP views from a single blurry input image improves the network's ability to deblur the image.

Learning to Deblur using Light Field Generated and Real Defocus Images

A novel deep defocus deblurring network is proposed that leverages the strengths of light fields while overcoming their shortcomings; it is shown to be highly effective, achieving state-of-the-art performance both quantitatively and qualitatively on multiple test sets.

Learning to Reduce Defocus Blur by Realistically Modeling Dual-Pixel Data

This work addresses the data capture bottleneck by proposing a procedure to generate realistic DP data synthetically and introduces a recurrent convolutional network (RCN) architecture that improves deblurring results and is suitable for use with single-frame and multi-frame data captured by DP sensors.

Dynamic Multi-Scale Network for Dual-Pixel Images Defocus Deblurring with Transformer

A dynamic multi-scale network, named DMT-Net, for dual-pixel image defocus deblurring, in which the vision transformer raises the performance ceiling of the CNN, and the inductive bias of the CNN enables the transformer to extract more robust features without relying on a large amount of data.

Deblur-NeRF: Neural Radiance Fields from Blurry Images

  • Li Ma, Xiaoyu Li, P. Sander
  • 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2022
Deblur-NeRF is proposed, the first method that can recover a sharp NeRF from blurry input and outperforms several baselines, and can be used on both camera motion blur and defocus blur: the two most common types of blur in real scenes.

PDRF: Progressively Deblurring Radiance Field for Fast and Robust Scene Reconstruction from Blurry Images

Progressively Deblurring Radiance Field (PDRF), a progressive deblurring scheme for radiance field modeling, accurately models blur by incorporating 3D scene context and uses an efficient importance sampling scheme, resulting in fast scene optimization.

Deep Residual Fourier Transformation for Single Image Deblurring

A Residual Fast Fourier Transform with Convolution Block (Res FFT-Conv Block) is presented, capable of capturing both long-term and short-term interactions while integrating both low- and high-frequency residual information.
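The idea of a frequency-domain residual branch can be illustrated compactly. This is a toy single-channel sketch under a strong simplification: the block's learned 1x1 convolutions on the real/imaginary spectrum are replaced by a single scalar gain (`spectral_gain` is an invented parameter), and the parallel spatial-convolution branch is omitted.

```python
import numpy as np

def res_fft_branch(x, spectral_gain=1.0):
    """Residual frequency branch (sketch): x + IFFT(gain * FFT(x)).

    x: (H, W) real-valued feature map. The FFT gives every output
    element a global receptive field, which is how the block captures
    long-term interactions alongside ordinary convolutions.
    """
    spec = np.fft.fft2(x)             # global frequency representation
    spec = spectral_gain * spec       # stand-in for learned spectral weights
    freq_branch = np.fft.ifft2(spec).real
    return x + freq_branch            # residual connection
```

With `spectral_gain=1.0` the branch is an identity, so the output is exactly `2 * x`; a learned gain would instead reweight frequencies, e.g. amplifying the high frequencies that blur suppresses.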

Generative Adaptive Convolutions for Real-World Noisy Image Denoising

This work proposes a novel flexible and adaptive denoising network, coined FADNet, equipped with a plane dynamic filter module that generates weight filters which adapt flexibly to the specific input, thereby preventing FADNet from overfitting to the training data.

References


Defocus Deblurring Using Dual-Pixel Data

An effective defocus deblurring method that exploits data available on dual-pixel (DP) sensors found on most modern cameras is proposed that produces results that are significantly better than conventional single image methods in terms of both quantitative and perceptual metrics.

Edge-Based Defocus Blur Estimation With Adaptive Scale Selection

A new edge-based method for spatially varying defocus blur estimation using a single image based on reblurred gradient magnitudes is presented and a fast guided filter is used to propagate the sparse blur map through the whole image.

Non-Parametric Blur Map Regression for Depth of Field Extension

This paper presents a blind deblurring pipeline that restores images from real camera systems, slightly extending their depth of field (DOF) and recovering sharpness in regions slightly out of focus, by first estimating the spatially varying defocus blur.

Spatio-Temporal Filter Adaptive Network for Video Deblurring

The proposed Spatio-Temporal Filter Adaptive Network (STFAN) takes both the blurry and restored images of the previous frame, as well as the blurry image of the current frame, as input, and dynamically generates spatially adaptive filters for alignment and deblurring.

Deep Defocus Map Estimation Using Domain Adaptation

The first end-to-end convolutional neural network (CNN) architecture, Defocus Map Estimation Network (DMENet), for spatially varying defocus map estimation is proposed, which uses domain adaptation that transfers the features of real defocused photos into those of synthetically blurred ones.

Real-World Blur Dataset for Learning and Benchmarking Deblurring Algorithms

This work presents a large-scale dataset of real-world blurred images and ground truth sharp images for learning and benchmarking single image deblurring methods, and develops a postprocessing method to produce high-quality ground truth images.

A Unified Approach of Multi-scale Deep and Hand-Crafted Features for Defocus Estimation

This paper systematically analyzes the effectiveness of different features, and shows how each feature can compensate for the weaknesses of other features when they are concatenated.

Reblur2Deblur: Deblurring videos via self-supervised learning

This work fine-tunes existing deblurring neural networks in a self-supervised fashion by enforcing that the output, when blurred based on the optical flow between subsequent frames, matches the input blurry image.
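The self-supervised reblur constraint described above can be sketched as a loss: convolve the network's restored image with an estimated blur kernel and penalize deviation from the observed blurry input. This is an assumed simplification for illustration — it uses FFT convolution with circular boundaries and a fixed kernel, whereas the paper derives per-frame kernels from optical flow between subsequent frames.

```python
import numpy as np

def reblur_loss(deblurred, blurry, kernel):
    """Reblur consistency loss (sketch): MSE(blur(deblurred), blurry).

    deblurred, blurry: (H, W) images; kernel: small 2D blur kernel.
    Circular-boundary FFT convolution is used for brevity.
    """
    H, W = deblurred.shape
    k = np.zeros((H, W))
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    # shift so the kernel's center sits at the origin for FFT convolution
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    reblurred = np.fft.ifft2(np.fft.fft2(deblurred) * np.fft.fft2(k)).real
    return np.mean((reblurred - blurry) ** 2)
```

In fine-tuning, this loss requires no sharp ground truth: the blurry frame itself supervises the restoration, which is what makes the scheme self-supervised.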

Modeling Defocus-Disparity in Dual-Pixel Sensors

A new parametric point spread function is proposed to model the defocus-disparity that occurs on DP sensors and leverage the symmetry property of the DP blur kernels at each pixel to formulate an unsupervised loss function that does not require ground truth depth.

Dynamic Scene Deblurring Using Spatially Variant Recurrent Neural Networks

Quantitative and qualitative evaluations on public datasets demonstrate that the proposed method performs favorably against state-of-the-art algorithms in terms of accuracy, speed, and model size.