• Corpus ID: 237372156

An Integrated Framework for the Heterogeneous Spatio-Spectral-Temporal Fusion of Remote Sensing Images

  • Menghui Jiang, Huanfeng Shen, Jie Li, Liang-pei Zhang
  • Published 1 September 2021
  • Computer Science, Engineering
  • ArXiv
Image fusion technology is widely used to fuse the complementary information among multi-source remote sensing images. Inspired by recent advances in deep learning, this paper proposes a heterogeneous-integrated framework based on a novel deep residual cycle GAN. The proposed network consists of a forward fusion part and a backward degeneration feedback part. The forward part generates the desired fusion result from the various observations; the backward degeneration feedback part…
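The forward-fusion / backward-degeneration idea described in the abstract can be sketched with toy operators. This is a hypothetical NumPy illustration of the cycle-consistency principle only, not the paper's residual cycle GAN: the `fuse`, `degrade_spatial`, and `degrade_spectral` functions below are invented stand-ins for the learned forward generator and the sensor degradation models.

```python
import numpy as np

def degrade_spatial(img, factor=2):
    """Simulate a coarse sensor by block-averaging (spatial degradation)."""
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def degrade_spectral(img):
    """Simulate a panchromatic-like sensor by averaging over bands."""
    return img.mean(axis=2, keepdims=True)

def fuse(pan, ms, factor=2):
    """Toy forward fusion: inject high-frequency spatial detail from the
    high-resolution image into the upsampled multispectral image."""
    up = lambda x: np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)
    detail = pan - up(degrade_spatial(pan, factor))  # (H, W, 1) detail layer
    return up(ms) + detail                           # broadcast over bands

def cycle_loss(fused, pan, ms, factor=2):
    """Backward degeneration feedback: re-degrade the fused image and
    measure how well it reproduces each original observation (MSE)."""
    err_spec = np.mean((degrade_spectral(fused) - pan) ** 2)
    err_spat = np.mean((degrade_spatial(fused, factor) - ms) ** 2)
    return err_spec + err_spat

rng = np.random.default_rng(0)
truth = rng.random((8, 8, 4))          # latent high-resolution 4-band scene
pan = degrade_spectral(truth)          # high spatial / low spectral observation
ms = degrade_spatial(truth, 2)         # low spatial / high spectral observation
fused = fuse(pan, ms, 2)
loss = cycle_loss(fused, pan, ms, 2)
```

In the paper's network the degradation models close the loop as a feedback signal during training; here the same structure appears as a fixed consistency loss on a single fused image.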


An Integrated Framework for the Spatio–Temporal–Spectral Fusion of Remote Sensing Images
The proposed integrated fusion framework can achieve the integrated fusion of multisource observations to obtain high spatio-temporal-spectral resolution images, without limitations on the number of remote sensing sensors.
An Integrated Spatio-Spectral–Temporal Sparse Representation Method for Fusing Remote-Sensing Images With Different Resolutions
The integrated spatio-spectral–temporal sparse representation model based on the learned spectral–spatial and temporal change features strengthens the model’s ability to provide high-resolution data needed to address demanding work in real-world applications.
Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented…
Spatiotemporal Fusion With Only Two Remote Sensing Images as Input
The proposed method extends the application scenarios of spatiotemporal fusion, and creates opportunities to fuse sensors with barely overlapping temporal coverages, such as the Landsat 8 Operational Land Imager and the Sentinel-2 MultiSpectral Instrument.
A Bayesian Data Fusion Approach to Spatio-Temporal Fusion of Remotely Sensed Images
This work proposes a Bayesian data fusion approach that incorporates the temporal correlation information in the image time series and casts the fusion problem as an estimation problem in which the fused image is obtained by the Maximum A Posterior (MAP) estimator.
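The MAP formulation mentioned above can be illustrated at the level of a single pixel. Under the simplifying assumptions of independent Gaussian sensor noise and a Gaussian prior standing in for the temporal correlation term, the MAP estimate reduces to an inverse-variance weighted average; the function and numbers below are a hypothetical sketch, not the paper's full spatio-temporal model.

```python
import numpy as np

def map_fuse(obs, noise_var, prior_mean, prior_var):
    """MAP estimate of a latent pixel value from noisy observations.

    obs, noise_var : per-sensor observations and Gaussian noise variances
    prior_mean, prior_var : Gaussian prior (e.g. a temporal prediction)
    """
    obs = np.asarray(obs, dtype=float)
    noise_var = np.asarray(noise_var, dtype=float)
    precision = 1.0 / prior_var + np.sum(1.0 / noise_var)   # total precision
    weighted = prior_mean / prior_var + np.sum(obs / noise_var)
    return weighted / precision

# A noisy coarse sensor and a cleaner fine sensor observe the same pixel,
# with a temporal prior predicting 0.4 from earlier images in the series:
x_hat = map_fuse(obs=[0.55, 0.48], noise_var=[0.04, 0.01],
                 prior_mean=0.4, prior_var=0.09)
```

The estimate is pulled most strongly toward the low-noise observation, with the prior contributing in proportion to its own precision; this is the mechanism by which temporal correlation regularizes the fused image.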
Spatio-temporal fusion for remote sensing data: an overview and new benchmark
This review provides (for the first time in the literature) a robust benchmark STF dataset that includes three important characteristics: (1) diversity of regions, (2) long timespan, and (3) challenging scenarios.
A Spatiotemporal Fusion Based Cloud Removal Method for Remote Sensing Images With Land Cover Changes
A cloud removal procedure based on multisource data fusion is proposed that acts as an important technical supplement to the current cloud removal framework and provides the possibility of handling scenes with significant land cover changes.
Spatial–Spectral Fusion by Combining Deep Learning and Variational Model
Both quantitative and visual assessments on high-quality images from various sources demonstrate that the proposed fusion method is superior to all the mainstream algorithms included in the comparison, in terms of overall fusion accuracy.
Missing Data Reconstruction in Remote Sensing Image With a Unified Spatial–Temporal–Spectral Deep Convolutional Neural Network
The proposed model employs a unified deep CNN combined with spatial–temporal–spectral supplementary information to solve three typical missing information reconstruction tasks: 1) dead lines in Aqua Moderate Resolution Imaging Spectroradiometer band 6; 2) the Landsat Enhanced Thematic Mapper Plus scan line corrector-off problem; and 3) thick cloud removal.
A new sensor bias-driven spatio-temporal fusion model based on convolutional neural networks
A new sensor bias-driven STF model (called BiaSTF) is introduced to mitigate the spectral and spatial distortions present in traditional methods, together with a new learning method based on convolutional neural networks (CNNs) to efficiently estimate this bias.