Corpus ID: 237385796

Variable Augmented Network for Invertible Modality Synthesis-Fusion

Yuhao Wang, Ruirui Liu, Zihao Li, Cailian Yang, Qiegen Liu
Medical image synthesis and fusion integrate the information contained in multiple medical images acquired under different modalities, and have emerged in various clinical applications such as disease diagnosis and treatment planning. In this paper, an invertible and variable augmented network (iVAN) is proposed for medical image synthesis and fusion. In iVAN, variable augmentation technology keeps the channel number of the network input equal to that of the output, and data relevance is…
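The core idea hinted at in the abstract, equalizing input and output channel counts so the network can be built from invertible layers, can be illustrated with a minimal NumPy sketch. The zero-padding augmentation, the toy `net` function, and the additive coupling layer below are illustrative assumptions, not iVAN's actual architecture:

```python
import numpy as np

def augment(x, target_channels):
    """Variable augmentation (simplified): pad the input with zero-valued
    channels so the channel count matches the network output."""
    pad = np.zeros((target_channels - x.shape[0], *x.shape[1:]))
    return np.concatenate([x, pad], axis=0)

def coupling_forward(x, net):
    """One additive coupling layer: invertible by construction."""
    x1, x2 = np.split(x, 2, axis=0)
    y2 = x2 + net(x1)              # only the second half is transformed
    return np.concatenate([x1, y2], axis=0)

def coupling_inverse(y, net):
    """Exact inverse: subtract the same update from the second half."""
    y1, y2 = np.split(y, 2, axis=0)
    x2 = y2 - net(y1)
    return np.concatenate([y1, x2], axis=0)

# Toy "subnetwork": any function of the untouched half works.
net = lambda h: np.tanh(h)

x = augment(np.random.rand(1, 8, 8), target_channels=2)  # 1 -> 2 channels
y = coupling_forward(x, net)
x_rec = coupling_inverse(y, net)
assert np.allclose(x, x_rec)  # round-trip recovers the input exactly
```

Because the augmented input and the output have the same shape, forward synthesis and its inverse are computed by the same set of weights, which is what makes the synthesis-fusion mapping bidirectional.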


Multimodal MR Synthesis via Modality-Invariant Latent Representation
A multi-input multi-output fully convolutional neural network model is proposed for MRI synthesis that avoids the need for curriculum learning by exploiting the fact that the various input modalities are highly correlated.
Hi-Net: Hybrid-Fusion Network for Multi-Modal MR Image Synthesis
A novel Hybrid-fusion Network (Hi-Net) is proposed for multi-modal MR image synthesis, which learns a mapping from multi-modal source images to target images and effectively exploits the correlations among multiple modalities.
Cross-Modality Image Synthesis via Weakly Coupled and Geometry Co-Regularized Joint Dictionary Learning
This paper proposes a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis while considering the fact that collecting the large amounts of training data is often impractical.
Bidirectional Mapping Generative Adversarial Networks for Brain MR to PET Synthesis
A bidirectional mapping mechanism is designed to embed the semantic information of PET images into the high dimensional latent space and the 3D DenseU-Net generator architecture and the extensive objective functions are further utilized to improve the visual quality of synthetic results.
Missing MRI Pulse Sequence Synthesis Using Multi-Modal Generative Adversarial Network
A variant of generative adversarial network capable of leveraging redundant information contained within multiple available sequences in order to generate one or more missing sequences for a patient scan is proposed and compared with competing unimodal and multi-modal methods.
Simultaneous Super-Resolution and Cross-Modality Synthesis of 3D Medical Images Using Weakly-Supervised Joint Convolutional Sparse Coding
This paper proposes weakly-supervised joint convolutional sparse coding to simultaneously solve the problems of super-resolution (SR) and cross-modality image synthesis, and shows that the proposed method outperforms state-of-the-art techniques on both SR reconstruction and simultaneous SR and cross-modality synthesis.
Cross-Domain Synthesis of Medical Images Using Efficient Location-Sensitive Deep Network
A novel architecture called the location-sensitive deep network (LSDN) is proposed for synthesizing images across domains; it integrates intensity features from image voxels and spatial information in a principled manner and is computationally efficient, e.g. 26× faster than other sparse-representation-based methods.
Multimodal MR Image Synthesis Using Gradient Prior and Adversarial Learning
A novel end-to-end multimodal MR image synthesis method based on generative adversarial networks (GANs), a deep learning model that can produce high-quality synthesized images.
Image Synthesis in Multi-Contrast MRI With Conditional Generative Adversarial Networks
The proposed approach preserves intermediate-to-high frequency details via an adversarial loss, and it offers enhanced synthesis performance via pixel-wise and perceptual losses for registered multi-contrast images and a cycle-consistency loss for unregistered images.
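The cycle-consistency loss mentioned above (used when source and target images are unregistered) can be sketched in a few lines; the generator functions `G_ab` and `G_ba` below are hypothetical placeholders, not the paper's networks:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return np.mean(np.abs(a - b))

def cycle_consistency_loss(x_a, x_b, G_ab, G_ba):
    """Penalize failure of the A->B->A and B->A->B cycles to
    reproduce their inputs (placeholder generators G_ab, G_ba)."""
    return l1(G_ba(G_ab(x_a)), x_a) + l1(G_ab(G_ba(x_b)), x_b)

# Toy generators that happen to be exact inverses, so the loss is near zero.
G_ab = lambda x: x + 1.0
G_ba = lambda x: x - 1.0
x_a = np.random.rand(4, 4)
x_b = np.random.rand(4, 4)
loss = cycle_consistency_loss(x_a, x_b, G_ab, G_ba)
assert loss < 1e-8  # perfect cycles reconstruct the inputs
```

The point of the loss is that no pixel-aligned ground truth is needed: only the round trip through both generators is supervised.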
A medical image fusion method based on convolutional neural networks
Experimental results demonstrate that the proposed convolutional neural network-based method can achieve promising results in terms of both visual quality and objective assessment.