Corpus ID: 247011575

Deep multi-modal aggregation network for MR image reconstruction with auxiliary modality

@inproceedings{Feng2021DeepMA,
  title={Deep multi-modal aggregation network for MR image reconstruction with auxiliary modality},
  author={Chun-Mei Feng and H. Fu and Tianfei Zhou and Yong Xu and Ling Shao and David Zhang},
  year={2021}
}
Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology (Shenzhen), 518055, China. National Center for Artificial Intelligence (NCAI), SDAIA, KSA. School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen), Shenzhen 518172, China. Shenzhen Research Institute of Big Data, Shenzhen 518172, China. Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen 518172, China. Institute of High Performance Computing… 

References

Showing 1-10 of 45 references

Deep-Learning-Based Multi-Modal Fusion for Fast MR Reconstruction

The results show that Dense-Unet can reconstruct a three-dimensional T2WI volume in under 10 s from k-space under-sampled at a rate of 8, with negligible aliasing artifacts or signal-to-noise-ratio loss.
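As an illustration of the experimental setup this summary describes, here is a minimal sketch (assuming PyTorch and a simple Cartesian sampling pattern; all names are illustrative, not the paper's code) of retrospectively under-sampling k-space at a rate of 8 and forming the zero-filled, aliased input a fusion network such as Dense-Unet would receive:

```python
import torch

def undersample_kspace(image: torch.Tensor, accel: int = 8) -> torch.Tensor:
    """Zero-filled reconstruction of a 2D slice under-sampled at `accel`."""
    kspace = torch.fft.fftshift(torch.fft.fft2(image))      # full k-space
    mask = torch.zeros(image.shape)
    mask[:, ::accel] = 1.0                                  # keep every accel-th phase-encode line
    w = image.shape[-1]
    mask[:, w // 2 - w // 32 : w // 2 + w // 32] = 1.0      # fully sampled low-frequency band
    zero_filled = torch.fft.ifft2(torch.fft.ifftshift(kspace * mask))
    return zero_filled.abs()                                # aliased network input

aliased = undersample_kspace(torch.randn(256, 256))         # toy example slice
```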

Residual Attention Network for Image Classification

The proposed Residual Attention Network is a convolutional neural network built on an attention mechanism that can be incorporated into state-of-the-art feed-forward network architectures in an end-to-end training fashion and can be easily scaled up to hundreds of layers.
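The core mechanism is attention residual learning, where a soft mask branch M(x) modulates a trunk branch T(x) as (1 + M(x)) · T(x). Below is a minimal PyTorch sketch of that formulation; both branch architectures are simplified placeholders, not the paper's exact design:

```python
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.trunk = nn.Sequential(                      # trunk branch: plain conv stack
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.mask = nn.Sequential(                       # mask branch: down/up-sample, then sigmoid
            nn.MaxPool2d(2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = self.trunk(x)
        m = self.mask(x)
        return (1 + m) * t                               # attention residual learning
```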

Multimodal Intelligence: Representation Learning, Information Fusion, and Applications

A technical review of available models and learning methods for multimodal intelligence, focusing on the combination of the vision and natural language modalities, which has become an important topic in both the computer vision and natural language processing research communities.

Brain MR to PET Synthesis via Bidirectional Generative Adversarial Network

A novel end-to-end network, called Bidirectional GAN, in which image contexts and latent vectors are effectively used and jointly optimized for brain MR-to-PET synthesis; a bidirectional mapping mechanism is designed to embed the diverse brain structural details into the high-dimensional latent space.
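A much-simplified PyTorch sketch of the bidirectional mapping idea follows: a generator synthesizes PET from MR plus a latent code, while an encoder maps PET back into the same latent space so the two directions can be optimized jointly. All architectures and dimensions here are illustrative assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):                 # MR + latent code -> synthetic PET
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + latent_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, mr: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        z_map = z[:, :, None, None].expand(-1, -1, *mr.shape[2:])
        return self.net(torch.cat([mr, z_map], dim=1))

class Encoder(nn.Module):                   # real PET -> latent code (inverse mapping)
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, latent_dim),
        )

    def forward(self, pet: torch.Tensor) -> torch.Tensor:
        return self.net(pet)
```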

CBAM: Convolutional Block Attention Module

The proposed Convolutional Block Attention Module (CBAM) is a simple yet effective attention module for feed-forward convolutional neural networks that can be integrated seamlessly into any CNN architecture with negligible overhead and is end-to-end trainable along with the base CNN.
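Since CBAM's structure is well defined (channel attention from average- and max-pooled descriptors through a shared MLP, then spatial attention from a 7×7 convolution over pooled channel maps), a compact PyTorch sketch is given below; it follows the paper's default reduction ratio but is not the reference implementation:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))              # channel descriptor from avg pooling
        mx = self.mlp(x.amax(dim=(2, 3)))               # ... and from max pooling
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(dim=1, keepdim=True),     # spatial attention from pooled maps
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```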

MoDL: Model-Based Deep Learning Architecture for Inverse Problems

This work introduces a model-based image reconstruction framework with a convolutional neural network (CNN)-based regularization prior and proposes to enforce data consistency using numerical optimization blocks, such as the conjugate gradient algorithm, within the network.
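The data-consistency step can be made concrete as a few conjugate-gradient iterations on the normal equations (AᴴA + λI)x = Aᴴb + λz, where z is the CNN denoiser output. The sketch below assumes a single-coil masked Fourier operator for simplicity; MoDL itself supports general multi-coil operators:

```python
import torch

def AhA(x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Normal operator A^H A for masked single-coil Fourier sampling."""
    return torch.fft.ifft2(mask * torch.fft.fft2(x))

def dc_conjugate_gradient(z, Ahb, mask, lam=0.05, iters=10):
    """Solve (A^H A + lam*I) x = A^H b + lam*z with conjugate gradients."""
    x = z.clone()
    r = Ahb + lam * z - (AhA(x, mask) + lam * x)    # residual of the normal equations
    p = r.clone()
    rs = (r.conj() * r).sum().real
    for _ in range(iters):
        Ap = AhA(p, mask) + lam * p
        alpha = rs / (p.conj() * Ap).sum().real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = (r.conj() * r).sum().real
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```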

Recalibrating Fully Convolutional Networks With Spatial and Channel “Squeeze and Excitation” Blocks

This paper incorporates the recently proposed “squeeze and excitation” (SE) modules, originally introduced for channel recalibration in image classification, into three state-of-the-art F-CNNs and demonstrates a consistent improvement in segmentation accuracy on three challenging benchmark datasets.
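A minimal PyTorch sketch of the two recalibration paths, channel SE (cSE) and spatial SE (sSE), is shown below; the element-wise-max combination rule and reduction ratio are common choices, not necessarily those of the cited paper:

```python
import torch
import torch.nn as nn

class SCSEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.cse = nn.Sequential(                   # channel squeeze & excitation
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.sse = nn.Sequential(                   # spatial squeeze & excitation
            nn.Conv2d(channels, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.max(x * self.cse(x), x * self.sse(x))
```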

DuDoRNet: Learning a Dual-Domain Recurrent Network for Fast MRI Reconstruction With Deep T1 Prior

  • Bo Zhou, S. K. Zhou
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020
A Dual-Domain Recurrent Network (DuDoRNet) with an embedded deep T1 prior that simultaneously recovers k-space and images to accelerate the acquisition of MRI with long imaging protocols, customized for dual-domain restoration from undersampled MRI data.
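A heavily simplified sketch of one dual-domain iteration follows: restore k-space, enforce data consistency on the sampled lines, then restore the image. The restoration CNNs and the deep T1-prior conditioning are reduced to placeholders here, so this illustrates only the recurrence pattern:

```python
import torch
import torch.nn as nn

class DualDomainBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.k_net = nn.Conv2d(2, 2, 3, padding=1)   # k-space restoration (placeholder CNN)
        self.i_net = nn.Conv2d(2, 2, 3, padding=1)   # image restoration (placeholder CNN)

    @staticmethod
    def _c2r(x):                                     # complex -> 2-channel real
        return torch.stack([x.real, x.imag], dim=1)

    @staticmethod
    def _r2c(x):                                     # 2-channel real -> complex
        return torch.complex(x[:, 0], x[:, 1])

    def forward(self, image, kspace_meas, mask):
        k = torch.fft.fft2(image)
        k = self._r2c(self.k_net(self._c2r(k)))             # k-space restoration
        k = torch.where(mask.bool(), kspace_meas, k)        # data consistency on sampled lines
        img = torch.fft.ifft2(k)
        return self._r2c(self.i_net(self._c2r(img)))        # image restoration
```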

Visualizing and Understanding Convolutional Networks

A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models; it is also used in a diagnostic role to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
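As a lightweight stand-in for the paper's deconvnet-based visualization, the sketch below captures intermediate feature maps with PyTorch forward hooks on a torchvision model; the model choice and layer are arbitrary examples, not those studied in the paper:

```python
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()          # capture the feature maps
    return hook

model.layer1.register_forward_hook(save_activation("layer1"))
with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))
print(activations["layer1"].shape)                   # e.g. torch.Size([1, 64, 56, 56])
```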