Dense Dual-Attention Network for Light Field Image Super-Resolution

Yu Mo, Yingqian Wang, Chao Xiao, Jungang Yang, Wei An
Light field (LF) images can be used to improve the performance of image super-resolution (SR) because both angular and spatial information is available. However, it is challenging to incorporate the distinctive information from different views for LF image SR. Moreover, long-term information from previous layers can be weakened as the depth of the network increases. In this paper, we propose a dense dual-attention network for LF image SR. Specifically, we design a view attention module to adaptively…


LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution
Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for the human visual system.
Residual Networks for Light Field Image Super-Resolution
A learning-based method using residual convolutional networks is proposed to reconstruct light fields with higher spatial resolution; it shows good performance in preserving the inherent epipolar property of light field images.
Light Field Spatial Super-Resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization
A novel learning-based LF spatial SR framework in which each view of an LF image is first super-resolved individually by exploring the complementary information among views via combinatorial geometry embedding; it preserves more accurate parallax details at a lower computational cost.
Learning a Deep Convolutional Network for Light-Field Image Super-Resolution
A novel method for light-field image super-resolution (SR) via a deep convolutional neural network, using a data-driven learning method to simultaneously up-sample both the angular and spatial resolutions of a light-field image.
Image Super-Resolution Using Very Deep Residual Channel Attention Networks
This work proposes a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections, and proposes a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels.
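The channel attention mechanism described above can be sketched in a few lines: features are pooled to one descriptor per channel, passed through a channel-downscale/upscale bottleneck, and the resulting sigmoid gates rescale each channel. This is a minimal NumPy sketch; the weight matrices `w1`, `b1`, `w2`, `b2` are hypothetical placeholders standing in for the learned bottleneck parameters.

```python
import numpy as np

def channel_attention(x, w1, b1, w2, b2):
    """SE-style channel attention sketch.
    x: feature map of shape (C, H, W).
    w1: (C/r, C) downscale weights, w2: (C, C/r) upscale weights
    (hypothetical placeholders for learned parameters)."""
    # Global average pooling: one descriptor per channel
    z = x.mean(axis=(1, 2))                       # shape (C,)
    # Bottleneck with reduction ratio r, ReLU activation
    s = np.maximum(0.0, w1 @ z + b1)              # shape (C/r,)
    # Sigmoid produces per-channel gates in (0, 1)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))      # shape (C,)
    # Rescale each channel by its attention weight
    return x * s[:, None, None]
```

In practice the gates let the network emphasize informative channels and suppress redundant ones, which is what allows very deep SR networks like this one to remain trainable and discriminative.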
Light-Field Image Super-Resolution Using Convolutional Neural Network
This letter presents a novel method to simultaneously up-sample both the spatial and angular resolutions of a light field image via a deep convolutional neural network, and trains the whole network end-to-end.
Second-Order Attention Network for Single Image Super-Resolution
Experimental results demonstrate the superiority of the SAN network over state-of-the-art SISR methods in terms of both quantitative metrics and visual quality.
Channel-Wise and Spatial Feature Modulation Network for Single Image Super-Resolution
A channel-wise and spatial feature modulation (CSFM) network in which a series of feature modulation memory (FMM) modules are cascaded in a densely connected structure to transform shallow features into highly informative features and maintain long-term information for image super-resolution.
High-Order Residual Network for Light Field Super-Resolution
Experimental results show that the proposed high-order residual network enables high-quality reconstruction even in challenging regions and outperforms state-of-the-art single image or LF reconstruction methods with both quantitative measurements and visual evaluation.
Residual Dense Network for Image Super-Resolution
This paper proposes residual dense block (RDB) to extract abundant local features via dense connected convolutional layers and uses global feature fusion in RDB to jointly and adaptively learn global hierarchical features in a holistic way.
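The residual dense block pattern above is compact enough to sketch directly: each layer receives the concatenation of all preceding features, a fusion convolution restores the channel count, and a local residual adds the block input back. This is a minimal NumPy sketch that uses 1×1 convolutions (pure channel mixing) for brevity; real RDBs use 3×3 convolutions, and the weight arrays are hypothetical placeholders.

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution = per-pixel channel mixing. w: (out_c, in_c)."""
    return np.einsum('oc,chw->ohw', w, x)

def residual_dense_block(x, layer_weights, fusion_w):
    """Residual dense block sketch.
    x: (C, H, W). layer_weights[i]: (C, C*(i+1)) since each layer
    sees the concatenation of the input and all earlier outputs.
    fusion_w: (C, C*(len(layer_weights)+1)) local feature fusion."""
    feats = [x]
    for w in layer_weights:
        inp = np.concatenate(feats, axis=0)            # dense connections
        feats.append(np.maximum(0.0, conv1x1(inp, w))) # conv + ReLU
    # Local feature fusion: 1x1 conv over all concatenated features
    fused = conv1x1(np.concatenate(feats, axis=0), fusion_w)
    # Local residual learning
    return x + fused
```

The dense connections give every layer direct access to all earlier local features, while the fusion convolution keeps the channel count fixed so blocks can be stacked.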