High Quality Remote Sensing Image Super-Resolution Using Deep Memory Connected Network

  • Wenjia Xu, Guangluan Xu, Yang Wang, Xian Sun, Daoyu Lin, Yirong Wu
  • Published 1 July 2018
  • Environmental Science, Computer Science
  • IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium
Single image super-resolution is an effective way to enhance the spatial resolution of remote sensing images, which is crucial for many applications such as target detection and image classification. However, existing neural-network-based methods usually have small receptive fields and ignore image detail. We propose a novel method named deep memory connected network (DMCN), based on a convolutional neural network, to reconstruct high-quality super-resolution images. We build local and… 
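The abstract is truncated, but the core idea of a memory connection, carrying features from earlier layers forward and adding them to deeper ones via a skip connection, can be sketched in NumPy. The function names and the single-channel, single-kernel setup below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same' 2D cross-correlation with zero padding (illustration only)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def memory_block(x, kernels):
    """Stack of conv + ReLU layers whose output is added back to the input.

    The skip connection ('memory') passes low-level image detail around the
    deeper layers, so the block only has to learn a residual correction.
    """
    h = x
    for k in kernels:
        h = np.maximum(conv2d_same(h, k), 0.0)  # conv followed by ReLU
    return h + x  # skip connection preserves the input detail
```

With an all-zero kernel the convolutional path contributes nothing and the block reduces to the identity, which makes the role of the skip connection explicit.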


Parallel-Connected Residual Channel Attention Network for Remote Sensing Image Super-Resolution
This paper proposes a novel CNN called a parallel-connected residual channel attention network (PCRCAN), inspired by group convolution, and proposes a parallel module with feature aggregation modules in PCRCAN that significantly reduces the model parameters and fully integrates feature maps by widening the network architecture.
Super-resolution of remotely sensed data using channel attention based deep learning approach
This research proposed a channel attention-based framework for Remote Sensing Image Super-resolution (CARS) by constructing a novel residual channel attention block (RCAB) to further extract the features and adopted a post-upsampling architecture to reduce the computational complexity and time cost.
Remote Sensing Image Super-Resolution Using Novel Dense-Sampling Networks
A dense-sampling super-resolution network (DSSR), which reuses an upscaler to upsample multiple low-dimension features, and a wide feature attention block (WAB), which incorporates the wide activation and attention mechanism, is introduced to enhance the representation ability of the network.
Remote Sensing Imagery Super Resolution Based on Adaptive Multi-Scale Feature Fusion Network
An adaptive multi-scale feature fusion network (AMFFN) for remote sensing image super-resolution is proposed, and the results show that the method outperforms classic methods such as the Super-Resolution Convolutional Neural Network (SRCNN), the Efficient Sub-Pixel Convolutional Network (ESPCN), and the multi-scale residual CNN (MSRN).
Remote Sensing Image Super-Resolution Using Second-Order Multi-Scale Networks
A single-path feature reuse which cleverly captures multi-scale feature information through aggregating the features learned at different depths of a single path is proposed, resulting in a lightweight and high-performance super-resolution network.
Image super-resolution via deep residual network
An image super-resolution method based on a deep residual network, in which the parametric rectified linear unit (PReLU) serves as the activation function and the Adam optimization method is used to further improve the reconstruction quality.
Convolutional Neural Network Modelling for MODIS Land Surface Temperature Super-Resolution
A deep learning-based algorithm, named Multi-residual U-Net, is introduced, which super-resolves the input LST image from 1 km to 250 m per pixel and outperforms other state-of-the-art methods.
Super-resolution decision-making tool using deep convolution neural networks for panchromatic images
A Deep Convolutional Neural Network (CNN) based Super-Resolution (SR) decision-making tool is proposed for raw panchromatic satellite images and performs well on the dataset considered, outperforming other techniques.
Survey of Deep-Learning Approaches for Remote Sensing Observation Enhancement
This paper provides a comprehensive review of deep-learning methods for the enhancement of remote sensing observations, focusing on critical tasks including single and multi-band super-resolution, denoising, restoration, pan-sharpening, and fusion, among others.
Image Enhancement and Improvement Algorithm Based on ESRGAN for Single-Frame Remote Sensing Images
This method improves feature comprehensiveness by increasing the fineness of the network, and uses a modified perceptual loss to bring the brightness closer to the real image, which helps improve the quality of single-frame remote sensing images.


Super-Resolution for Remote Sensing Images via Local–Global Combined Network
This letter proposes a new single-image super-resolution algorithm named local–global combined networks (LGCNet) for remote sensing images based on deep CNNs, elaborately designed with its “multifork” structure to learn multilevel representations of remote sensing images, including both local details and global environmental priors.
Abstract. In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to…
Accurate Image Super-Resolution Using Very Deep Convolutional Networks
This work presents a highly accurate single-image superresolution (SR) method using a very deep convolutional network inspired by VGG-net used for ImageNet classification and uses extremely high learning rates enabled by adjustable gradient clipping.
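The "adjustable gradient clipping" mentioned here clips each gradient to a range that shrinks as the learning rate grows, so the effective update step stays bounded even at very high learning rates. A minimal sketch, with illustrative parameter names:

```python
import numpy as np

def clip_gradients_adjustable(grads, theta, lr):
    """Clip every gradient to [-theta/lr, theta/lr].

    Because the bound is inversely proportional to the learning rate lr,
    the update step lr * gradient never exceeds theta in magnitude, which
    is what lets very deep SR networks train quickly without diverging.
    """
    bound = theta / lr
    return [np.clip(g, -bound, bound) for g in grads]
```

For example, with theta=0.01 and lr=0.1, gradients are clipped to [-0.1, 0.1], so no single update moves a weight by more than 0.01.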
Learning a Deep Convolutional Network for Image Super-Resolution
This work proposes a deep learning method for single image super-resolution (SR) that directly learns an end-to-end mapping between the low/high-resolution images and shows that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network.
Remote Sensing Image Scene Classification: Benchmark and State of the Art
A large-scale data set, termed “NWPU-RESISC45,” is proposed, which is a publicly available benchmark for REmote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU).
Bag-of-visual-words and spatial extensions for land-use classification
This work considers a standard non-spatial representation in which the frequencies, but not the locations, of quantized image features are used to discriminate between classes, analogous to how words are used for text document classification without regard to their order of occurrence; it also considers two spatial extensions.
Figure: (PSNR, SSIM) comparison on an NWPU-RESISC45 image: Bicubic (26.30, 0.4970), SRCNN (26.52, 0.5252), VDSR (27.29, 0.5549), MBSR (27.52, 0.5858).
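The numeric pairs above are (PSNR, SSIM) scores. PSNR for 8-bit images follows directly from the mean squared error, as in this short NumPy sketch:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better: the gap between bicubic interpolation (26.30 dB) and the best method in the figure (27.52 dB) corresponds to a roughly 25% lower mean squared error.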