Dual-Camera Super-Resolution with Aligned Attention Modules

@article{Wang2021DualCameraSW,
  title={Dual-Camera Super-Resolution with Aligned Attention Modules},
  author={Tengfei Wang and Jiaxin Xie and Wenxiu Sun and Qiong Yan and Qifeng Chen},
  journal={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={1981-1990}
}
We present a novel approach to reference-based super-resolution (RefSR) with the focus on dual-camera super-resolution (DCSR), which utilizes reference images for high-quality and high-fidelity results. Our proposed method generalizes the standard patch-based feature matching with spatial alignment operations. We further explore the dual-camera super-resolution that is one promising application of RefSR, and build a dataset that consists of 146 image pairs from the main and telephoto cameras in… 
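
To make the matching step concrete, below is a minimal, hypothetical PyTorch sketch of standard patch-based feature matching between upsampled-LR features and reference features (cosine similarity over unfolded 3x3 patches). The spatial alignment operations that DCSR adds on top are not shown; all tensor names and shapes are illustrative.

import torch
import torch.nn.functional as F

def match_patches(feat_lr, feat_ref, patch=3):
    # feat_lr, feat_ref: (1, C, H, W) feature maps at the same scale
    q = F.unfold(feat_lr, kernel_size=patch, padding=patch // 2)   # (1, C*p*p, H*W)
    k = F.unfold(feat_ref, kernel_size=patch, padding=patch // 2)  # (1, C*p*p, H*W)
    q = F.normalize(q, dim=1)                    # cosine similarity via normalized dot products
    k = F.normalize(k, dim=1)
    sim = torch.bmm(q.transpose(1, 2), k)        # (1, H*W, H*W) patch-to-patch similarities
    conf, idx = sim.max(dim=2)                   # best-matching reference patch per LR location
    return idx, conf

idx, conf = match_patches(torch.randn(1, 8, 16, 16), torch.randn(1, 8, 16, 16))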

Reference-based Video Super-Resolution Using Multi-Camera Video Triplets

The first RefVSR network, which recurrently aligns and propagates temporal reference features fused with features extracted from low-resolution frames, is introduced, and the results show state-of-the-art performance in 4× super-resolution.

Reference-based Image and Video Super-Resolution via C2-Matching

C2-Matching is proposed, which performs explicit robust matching across transformation and resolution gaps; it shows strong generalizability on the WR-SR dataset as well as robustness to large scale and rotation transformations.

RRSR: Reciprocal Reference-based Image Super-Resolution with Progressive Feature Alignment and Selection

This work proposes a reciprocal learning framework in which a well super-resolved target image can in turn serve as a reference for super-resolving the reference image, reinforcing the learning of a RefSR network; it empirically shows that multiple recent state-of-the-art RefSR models are consistently improved with this reciprocal learning paradigm.

Degradation-agnostic Correspondence from Resolution-asymmetric Stereo

This paper finds that, although a stereo matching network trained with the photometric loss is not optimal, its feature extractor can produce degradation-agnostic and matching-specific features that can be utilized to formulate a feature-metric loss to avoid the photometric inconsistency.
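
As a rough illustration of the idea (not the paper's code), the sketch below warps right-view features to the left view with a predicted disparity and measures an L1 feature-metric loss instead of a photometric one. The feature extractor is assumed to preserve spatial resolution, which is a simplification; in practice the disparity would be resized to the feature scale.

import torch
import torch.nn.functional as F

def warp_by_disparity(feat_right, disp):
    # feat_right: (N, C, H, W); disp: (N, 1, H, W) left-view disparity in pixels
    n, _, h, w = feat_right.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    xs = xs[None, None].float() - disp                 # shift sampling columns by disparity
    ys = ys[None, None].float().expand_as(xs)
    grid = torch.stack([2 * xs / (w - 1) - 1,          # normalize to [-1, 1] for grid_sample
                        2 * ys / (h - 1) - 1], dim=-1).squeeze(1)
    return F.grid_sample(feat_right, grid, align_corners=True)

def feature_metric_loss(extractor, img_left, img_right, disp):
    # extractor: a feature network producing degradation-agnostic features
    # (assumed here to preserve resolution)
    f_left = extractor(img_left)
    f_right = extractor(img_right)
    return (f_left - warp_by_disparity(f_right, disp)).abs().mean()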

Dual Camera Based High Spatio-Temporal Resolution Video Generation For Wide Area Surveillance

An end-to-end trainable deep network is proposed that performs optical flow (OF) estimation and frame reconstruction by combining inputs from both video feeds, providing significant improvement over existing video frame interpolation and RefSR techniques in terms of PSNR and SSIM.

Self-Supervised Learning for Real-World Super-Resolution from Dual Zoomed Observations (Supplementary Material)

Noise in real-world images is common but complex and varied. To bridge the gap between the auxiliary-LR and LR images as much as possible, noise needs to be added to the auxiliary-LR output.
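
A toy example of this step, under made-up noise parameters and not SelfDZSR's actual noise model, might inject simple signal-dependent Gaussian noise into the auxiliary-LR image:

import torch

def add_noise(aux_lr, read_sigma=(0.0, 0.02), shot_sigma=(0.0, 0.04)):
    # aux_lr: (N, C, H, W) in [0, 1]; sigma ranges are placeholder values
    n = aux_lr.shape[0]
    s_read = torch.empty(n, 1, 1, 1).uniform_(*read_sigma)
    s_shot = torch.empty(n, 1, 1, 1).uniform_(*shot_sigma)
    sigma = (s_read ** 2 + s_shot ** 2 * aux_lr).sqrt()      # heteroscedastic Gaussian noise
    return (aux_lr + sigma * torch.randn_like(aux_lr)).clamp(0, 1)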

NeuriCam: Video Super-Resolution and Colorization Using Key Frames

A key-frame video super-resolution and colorization system is presented to achieve low-power video capture from dual-mode IoT cameras; it introduces an attention feature mechanism that assigns different weights to different features based on the correlation between the feature map and the contents of the input frame at each spatial location.
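
The weighting idea could look roughly like the following hedged sketch, where a per-pixel gate derived from the cosine correlation between key-frame features and current-frame features blends the two; the function name and sigmoid gating are assumptions, not NeuriCam's exact design.

import torch
import torch.nn.functional as F

def correlation_attention(feat_key, feat_cur):
    # feat_key, feat_cur: (N, C, H, W) aligned key-frame and current-frame features
    corr = F.cosine_similarity(feat_key, feat_cur, dim=1, eps=1e-6)  # (N, H, W) per-pixel correlation
    gate = torch.sigmoid(corr).unsqueeze(1)                          # (N, 1, H, W) attention weights
    return gate * feat_key + (1 - gate) * feat_cur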

Restorable Image Operators with Quasi-Invertible Networks

A quasi-invertible model that learns common image processing operators in a restorable fashion is proposed; it can generate visually pleasing results with the original content embedded and can be easily applied to practical applications such as restorable human face retouching and highlight-preserving exposure adjustment.

DCMS: Motion Forecasting with Dual Consistency and Multi-Pseudo-Target Supervision

A novel framework for motion forecasting with Dual Consistency Constraints and Multi-Pseudo-Target supervision is presented; it significantly outperforms state-of-the-art methods, and its components can be incorporated into other motion forecasting approaches as general training schemes.

Self-Supervised Learning for Real-World Super-Resolution from Dual Zoomed Observations

This paper presents a novel self-supervised learning approach for real-world image SR from observations at dual camera zooms (SelfDZSR), and takes the telephoto image instead of an additional high-resolution image as the supervision information and selects a center patch from it as the reference to super-resolve the corresponding short-focus image patch.

References

Showing 1-10 of 45 references

Toward Real-World Single Image Super-Resolution: A New Benchmark and a New Model

This paper builds a real-world super-resolution (RealSR) dataset where paired LR-HR images of the same scene are captured by adjusting the focal length of a digital camera, and presents a Laplacian pyramid based kernel prediction network (LP-KPN) that efficiently learns per-pixel kernels to recover the HR image.
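
The core of kernel prediction, applying a predicted per-pixel kernel at every location, can be sketched as below; the 5x5 kernel size and the random inputs are illustrative, and the Laplacian-pyramid kernel predictor itself is omitted.

import torch
import torch.nn.functional as F

def apply_per_pixel_kernels(img, kernels, k=5):
    # img: (N, C, H, W); kernels: (N, k*k, H, W), softmax-normalized over dim=1
    n, c, h, w = img.shape
    patches = F.unfold(img, kernel_size=k, padding=k // 2)     # (N, C*k*k, H*W)
    patches = patches.view(n, c, k * k, h, w)
    weights = kernels.view(n, 1, k * k, h, w)
    return (patches * weights).sum(dim=2)                      # (N, C, H, W) filtered output

img = torch.rand(1, 3, 32, 32)
kernels = torch.softmax(torch.randn(1, 25, 32, 32), dim=1)
out = apply_per_pixel_kernels(img, kernels)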

Robust Reference-Based Super-Resolution With Similarity-Aware Deformable Convolution

A novel and efficient reference feature extraction module, the Similarity Search and Extraction Network (SSEN), is proposed for reference-based super-resolution (RefSR) tasks; it is end-to-end trainable without any additional supervision or heavy computation, predicting the best match with a single network forward operation.

Asymmetric Wide Tele Camera Fusion for High Fidelity Digital Zoom

Novel techniques for both multi-camera image fusion and multi-camera transition are presented, addressing the aforementioned challenges to create a seamless user experience.

Image Super-Resolution Using Deep Convolutional Networks

We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one.
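
For reference, a compact PyTorch sketch of such a three-layer SR mapping is shown below, using the commonly cited 9-1-5 layer configuration; treat it as an illustration rather than the authors' released model.

import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction and representation
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        # x: bicubically upsampled LR image, (N, C, H, W)
        return self.body(x)

sr = SRCNN()(torch.rand(1, 3, 64, 64))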

Deep Learning for Image Super-Resolution: A Survey

A survey of recent advances in image super-resolution techniques using deep learning approaches, organized in a systematic way; existing SR studies are roughly grouped into three major categories: supervised SR, unsupervised SR, and domain-specific SR.

CrossNet: An End-to-end Reference-based Super Resolution Network using Cross-scale Warping

Using cross-scale warping, the CrossNet network is able to perform spatial alignment at the pixel level in an end-to-end fashion, improving over existing schemes in both precision and efficiency.

Learning Texture Transformer Network for Image Super-Resolution

A novel Texture Transformer Network for Image Super-Resolution (TTSR) is proposed, in which the LR and Ref images are formulated as queries and keys in a transformer, respectively; it achieves significant improvements over state-of-the-art approaches in both quantitative and qualitative evaluations.
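
A simplified, soft-attention-only sketch of this query/key formulation is given below: LR-branch features act as queries and reference features as keys and values over unfolded patches. TTSR's hard attention, learnable texture extractor, and multi-scale fusion are omitted, and all shapes are illustrative.

import torch
import torch.nn.functional as F

def texture_attention(q_feat, kv_feat, patch=3):
    # q_feat: features of the upsampled LR image (1, C, H, W); kv_feat: reference features (1, C, H, W)
    q = F.normalize(F.unfold(q_feat, patch, padding=patch // 2), dim=1)   # queries (1, C*p*p, Hq*Wq)
    k = F.normalize(F.unfold(kv_feat, patch, padding=patch // 2), dim=1)  # keys    (1, C*p*p, Hr*Wr)
    v = F.unfold(kv_feat, patch, padding=patch // 2)                      # values
    attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)                   # relevance of each Ref patch
    out = (attn @ v.transpose(1, 2)).transpose(1, 2)                      # transferred Ref texture
    ones = F.unfold(torch.ones_like(q_feat), patch, padding=patch // 2)   # overlap counts for fold
    size = q_feat.shape[-2:]
    return F.fold(out, size, patch, padding=patch // 2) / F.fold(ones, size, patch, padding=patch // 2)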

Learning Cross-scale Correspondence and Patch-based Synthesis for Reference-based Super-Resolution

Experiments on the MPI Sintel dataset and a light-field video dataset demonstrate that the learned correspondence features outperform existing features, and the proposed RefSR-Net substantially outperforms conventional single-image SR and exemplar-based SR approaches.

"Zero-Shot" Super-Resolution Using Deep Internal Learning

This paper exploits the internal recurrence of information inside a single image and trains a small image-specific CNN at test time, on examples extracted solely from the input image itself; it is the first unsupervised CNN-based SR method.
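
The test-time training loop can be sketched as follows, with the network width, scale factor, and number of steps as placeholders; training pairs are generated by downscaling the input image, and the resulting image-specific CNN is then applied to the input itself.

import torch
import torch.nn as nn
import torch.nn.functional as F

def zero_shot_sr(img, scale=2, steps=200):
    # img: (1, C, H, W) test image in [0, 1]
    net = nn.Sequential(
        nn.Conv2d(img.shape[1], 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, img.shape[1], 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        # build a training pair from the image itself: downscale, then re-upscale to input size
        son = F.interpolate(img, scale_factor=1 / scale, mode="bicubic", align_corners=False)
        son = F.interpolate(son, size=img.shape[-2:], mode="bicubic", align_corners=False)
        loss = F.l1_loss(net(son), img)          # the input image is its own supervision
        opt.zero_grad()
        loss.backward()
        opt.step()
    up = F.interpolate(img, scale_factor=scale, mode="bicubic", align_corners=False)
    return net(up).clamp(0, 1)                   # apply the image-specific CNN to the input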

Feature Representation Matters: End-to-End Learning for Reference-Based Image Super-Resolution

This paper develops an end-to-end training framework for the reference-based super-resolution task, where the feature encoding network prior to matching and swapping is jointly trained with the image synthesis network.