DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion

@article{Zhao2020DIDFuseDI,
  title={DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion},
  author={Zixiang Zhao and Shuang Xu and Chunxia Zhang and Junmin Liu and Pengfei Li and Jiangshe Zhang},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.09210}
}
Infrared and visible image fusion, a hot topic in image processing, aims at obtaining fused images that retain the advantages of both source images. This paper proposes a novel auto-encoder (AE) based fusion network. The core idea is that the encoder decomposes an image into background and detail feature maps carrying low- and high-frequency information, respectively, and that the decoder recovers the original image. To this end, the loss function makes the background/detail feature maps of…
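The decomposition-then-fusion idea from the abstract can be illustrated with a minimal sketch. Note this is an assumption-laden toy: DIDFuse learns the background/detail decomposition with an encoder and a trained loss, whereas here a fixed box filter stands in for the low-frequency "background" map, and the fusion rules (averaged backgrounds, per-pixel strongest detail) are illustrative choices, not the paper's method.

```python
import numpy as np

def decompose(img, k=7):
    # Low-frequency "background" via a k-by-k box filter; DIDFuse learns this
    # split with an encoder -- the fixed filter here is purely illustrative.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    background = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            background[i, j] = padded[i:i + k, j:j + k].mean()
    # High-frequency "detail" map is the residual.
    detail = img - background
    return background, detail

def fuse(ir, vis):
    b_ir, d_ir = decompose(ir)
    b_vis, d_vis = decompose(vis)
    # Illustrative fusion rules: average the backgrounds, and keep the
    # stronger detail response at each pixel.
    background = 0.5 * (b_ir + b_vis)
    detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    return background + detail
```

In the actual network, the decoder reconstructs the image from the two feature maps, and fusion happens in feature space rather than pixel space as sketched here.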


DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion
TLDR
DRSNFuse trained with the proposed loss function generates fused images with fewer artifacts and more of the original textures; in quantitative comparison it achieves better fusion results than mainstream methods and better satisfies the human visual system.
A Multi-Stage Visible and Infrared Image Fusion Network Based on Attention Mechanism
TLDR
This paper proposes a multi-stage visible and infrared image fusion network based on an attention mechanism (MSFAM), which stabilizes the training process through multi-stage training and enhances features with a learned attention fusion block to improve fusion performance.
Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration
TLDR
A robust cross-modality generation-registration paradigm for unsupervised misaligned infrared and visible image fusion (IVIF) that introduces a Multi-level Refinement Registration Network (MRRN) to predict the displacement vector field between distorted and pseudo infrared images and reconstruct the registered infrared image under the mono-modality setting.
Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection
TLDR
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
A Unified Multi-Task Learning Framework of Real-Time Drone Supervision for Crowd Counting
In this paper, a novel Unified Multi-Task Learning Framework of Real-Time Drone Supervision for Crowd Counting (MFCC) is proposed, which utilizes an image fusion network architecture to fuse images
Searching a Hierarchically Aggregated Fusion Architecture for Fast Multi-Modality Image Fusion
TLDR
A hierarchically aggregated fusion architecture is constructed to extract and refine fused features from feature-level and object-level fusion perspectives, yielding complementary target/detail representations and a task-specific architecture with fast inference time.
Total Variation Constrained Graph-Regularized Convex Non-Negative Matrix Factorization for Data Representation
TLDR
The results of clustering experiments on multiple datasets show the effectiveness and robustness of the proposed method compared to state-of-the-art clustering methods and other related work.

References

SHOWING 1-10 OF 44 REFERENCES
DenseFuse: A Fusion Approach to Infrared and Visible Images
TLDR
A novel deep learning architecture for infrared and visible image fusion is presented, in which the encoding network combines convolutional layers, a fusion layer, and a dense block where the output of each layer is connected to every other layer.
2018 24th International Conference on Pattern Recognition (ICPR)
  • Computer Science
  • 2018
Fast and Efficient Zero-Learning Image Fusion
TLDR
A real-time image fusion method using pre-trained neural networks that generates a single image containing features from multiple sources that achieves state-of-the-art performance in visual quality, objective assessment, and runtime efficiency.