• Corpus ID: 244102718

Fast T2w/FLAIR MRI Acquisition by Optimal Sampling of Information Complementary to Pre-acquired T1w MRI

@article{Yang2021FastTM,
  title={Fast T2w/FLAIR MRI Acquisition by Optimal Sampling of Information Complementary to Pre-acquired T1w MRI},
  author={Junwei Yang and Xiao-Xin Li and Feihong Liu and Dong Nie and Pietro Liò and Haikun Qi and Dinggang Shen},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.06400}
}
Recent studies on T1-assisted MRI reconstruction for under-sampled images of other modalities have demonstrated the potential of further accelerating MRI acquisition of other modalities. Most of the state-of-the-art approaches have achieved improvement through the development of network architectures for fixed under-sampling patterns, without fully exploiting the complementary information between modalities. Although existing under-sampling pattern learning algorithms can be simply modified to… 
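
The core idea sketched in the abstract, learning where to sample the target modality's k-space so that acquisition focuses on information the pre-acquired T1w image does not already carry, can be illustrated with the minimal PyTorch sketch below. This is not the authors' implementation: the mask parameterization, the tiny reconstruction network, the shapes, and the training step are all assumptions for illustration.

import torch
import torch.nn as nn

class LearnableMask(nn.Module):
    """Relaxed binary sampling mask over k-space (one probability per location)."""
    def __init__(self, height, width, target_rate=0.125):   # ~8x acceleration
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(height, width))
        self.target_rate = target_rate

    def forward(self):
        probs = torch.sigmoid(self.logits)
        # Rescale so the expected sampling rate roughly matches the budget.
        probs = probs * (self.target_rate / probs.mean().clamp(min=1e-8))
        return probs.clamp(0.0, 1.0)

class T1AssistedRecon(nn.Module):
    """Tiny CNN mapping (zero-filled T2w, T1w) to a de-aliased T2w image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, t2_zero_filled, t1):
        return self.net(torch.cat([t2_zero_filled, t1], dim=1))

def forward_model(t2_full, mask):
    """Simulated acquisition: FFT, apply the mask, inverse FFT (magnitude)."""
    return torch.fft.ifft2(torch.fft.fft2(t2_full) * mask).abs()

# One toy optimization step on random data (replace with paired T1w/T2w slices).
t1, t2 = torch.rand(4, 1, 128, 128), torch.rand(4, 1, 128, 128)
mask_module, recon = LearnableMask(128, 128), T1AssistedRecon()
opt = torch.optim.Adam(list(mask_module.parameters()) + list(recon.parameters()), lr=1e-3)

opt.zero_grad()
t2_zf = forward_model(t2, mask_module())
loss = nn.functional.l1_loss(recon(t2_zf, t1), t2)
loss.backward()
opt.step()

In practice the relaxed mask would be binarized at test time and the sampling budget enforced more carefully; the sketch only shows how a sampling pattern and a T1-conditioned reconstruction network can be optimized end to end.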

Citations

DS3-Net: Difficulty-perceived Common-to-T1ce Semi-Supervised Multimodal MRI Synthesis Network

TLDR
A difficulty-perceived common-to-T1ce semi-supervised multimodal MRI synthesis network (DS3-Net), involving both paired and unpaired data together with dual-level knowledge distillation, that outperforms its supervised counterpart in every respect.

Joint optimization of Cartesian sampling patterns and reconstruction for single-contrast and multi-contrast fast magnetic resonance imaging.

  • Jiechao Wang, Qinqin Yang, Qizhi Yang, Lina Xu, Congbo Cai, S. Cai
  • Computer Science
    Computer methods and programs in biomedicine
  • 2022

Cross-Modality High-Frequency Transformer for MR Image Super-Resolution

TLDR
An early effort to build a Transformer-based MR image super-resolution framework, with careful designs for exploiting valuable domain prior knowledge; it establishes a novel Transformer architecture, the Cross-modality high-frequency Transformer (Cohf-T), to introduce such priors into super-resolving low-resolution MR images.

References

SHOWING 1-10 OF 42 REFERENCES

Deep-Learning-Based Multi-Modal Fusion for Fast MR Reconstruction

TLDR
The results have shown that Dense-Unet can reconstruct a three-dimensional T2WI volume in less than 10 s with an under-sampling rate of 8 for the k-space and negligible aliasing artifacts or signal-to-noise-ratio loss.

Ultra-Fast T2-Weighted MR Reconstruction Using Complementary T1-Weighted Information

TLDR
The results have shown that Dense-Unet can reconstruct a 3D T2WI volume in less than 10 s, i.e., with the acceleration rate as high as 8 or more but with negligible aliasing artefacts and signal-to-noise-ratio (SNR) loss.

DuDoRNet: Learning a Dual-Domain Recurrent Network for Fast MRI Reconstruction With Deep T1 Prior

  • Bo Zhou, S. K. Zhou
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR
A Dual Domain Recurrent Network (DuDoRNet) with deep T1 prior embedded to simultaneously recover k-space and images for accelerating the acquisition of MRI with a long imaging protocol and is customized for dual domain restorations from undersampled MRI data.
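
The dual-domain recurrence described in this TLDR, alternating image-domain refinement guided by a T1 prior with restoration of the measured k-space, can be sketched as below. This is not the published DuDoRNet code; the small residual block, the hard data-consistency step, and all names are illustrative assumptions.

import torch
import torch.nn as nn

def data_consistency(x, measured_kspace, mask):
    """Re-insert the measured k-space values at sampled locations."""
    k_pred = torch.fft.fft2(x)
    k_dc = mask * measured_kspace + (1 - mask) * k_pred
    return torch.fft.ifft2(k_dc).real

class RecurrentBlock(nn.Module):
    """Small residual CNN refining the current estimate with a T1 prior."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x, t1):
        return x + self.cnn(torch.cat([x, t1], dim=1))

def dual_domain_recon(zero_filled, t1, measured_kspace, mask, block, iters=3):
    x = zero_filled
    for _ in range(iters):                              # shared (recurrent) weights
        x = block(x, t1)                                # image-domain restoration
        x = data_consistency(x, measured_kspace, mask)  # k-space restoration
    return x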

Learning-based Optimization of the Under-sampling Pattern in MRI

TLDR
The proposed method, which the authors call LOUPE (Learning-based Optimization of the Under-sampling PattErn), was implemented by modifying a U-Net, a widely used convolutional neural network architecture, appended with the forward model that encodes the under-sampling process.
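
A minimal sketch of the mechanism this TLDR describes is given below: a probabilistic mask whose binary realization is relaxed with a steep sigmoid, followed by the forward model that encodes the under-sampling process. The slope value, the sparsity rescaling, and the names are assumptions rather than the authors' exact formulation.

import torch
import torch.nn as nn

class ProbabilisticMask(nn.Module):
    """Relaxed (differentiable) realization of a learnable sampling mask."""
    def __init__(self, height, width, sparsity=0.125, slope=12.0):
        super().__init__()
        self.weights = nn.Parameter(0.01 * torch.randn(height, width))
        self.sparsity, self.slope = sparsity, slope

    def forward(self):
        probs = torch.sigmoid(self.weights)
        probs = probs * (self.sparsity / probs.mean().clamp(min=1e-8))  # budget
        u = torch.rand_like(probs)
        # Relaxed Bernoulli realization: near 1 where probs > u, near 0 otherwise.
        return torch.sigmoid(self.slope * (probs - u))

def undersample(image, mask):
    """Forward model appended to the network: FFT, mask, inverse FFT."""
    return torch.fft.ifft2(torch.fft.fft2(image) * mask).abs()

# During end-to-end training, the zero-filled output of `undersample` is fed to
# a U-Net-style reconstruction network, and the mask weights and the U-Net are
# optimized jointly against the fully sampled image.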

Enhanced Deep-Learning-Based Magnetic Resonance Image Reconstruction by Leveraging Prior Subject-Specific Brain Imaging: Proof-of-Concept Using a Cohort of Presumed Normal Subjects

TLDR
A flexible three-step method is proposed that can use prior scan information to further accelerate MR examinations, yielding better volume agreement with the fully sampled reference images than the non-enhanced images.

Prior-Guided Image Reconstruction for Accelerated Multi-Contrast MRI via Generative Adversarial Networks

TLDR
A new approach for synergistic recovery of undersampled multi-contrast acquisitions based on conditional generative adversarial networks is proposed, which mitigates the limitations of pure learning-based reconstruction or synthesis by utilizing three priors: a shared high-frequency prior available in the source contrast to preserve high-spatial-frequency details, a low-frequency prior available in the undersampled target contrast to prevent feature leakage/loss, and a perceptual prior to improve recovery of high-level features.

Multi-Modal MRI Reconstruction with Spatial Alignment Network

TLDR
The spatial alignment network estimates the spatial misalignment between the fully-sampled reference and the undersampled target images, and warps the reference image accordingly, to improve the quality of the reconstructed target modality.
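
A hedged sketch of the alignment step described here: a small network predicts a dense displacement field from the reference/target pair, and the fully-sampled reference is warped before being used as a prior for reconstruction. The flow estimator, the pixel-space parameterization of the displacement, and the names are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowEstimator(nn.Module):
    """Tiny CNN predicting a 2-channel displacement field (dx, dy) in pixels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, reference, target):
        return self.net(torch.cat([reference, target], dim=1))

def warp(reference, flow):
    """Warp `reference` by a pixel-space displacement field using grid_sample."""
    b, _, h, w = reference.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).float().unsqueeze(0).expand(b, -1, -1, -1)
    coords = base + flow.permute(0, 2, 3, 1)          # (B, H, W, 2), in pixels
    x_norm = 2 * coords[..., 0] / (w - 1) - 1         # normalize to [-1, 1]
    y_norm = 2 * coords[..., 1] / (h - 1) - 1
    grid = torch.stack([x_norm, y_norm], dim=-1)
    return F.grid_sample(reference, grid, align_corners=True)

ref, tgt = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
flow = FlowEstimator()(ref, tgt)
aligned_ref = warp(ref, flow)   # fed to the reconstruction network as a prior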

Deep-Learning-Based Optimization of the Under-Sampling Pattern in MRI

TLDR
This article demonstrates that LOUPE-optimized under-sampling masks are data-dependent, varying significantly with the imaged anatomy, and perform well with different reconstruction methods; it also presents empirical results obtained with a large-scale, publicly available knee MRI dataset, where LOUPE offered superior reconstruction quality across different conditions.

A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction

TLDR
A framework for reconstructing dynamic sequences of 2-D cardiac magnetic resonance images from undersampled data using a deep cascade of convolutional neural networks (CNNs) to accelerate the data acquisition process is proposed and it is demonstrated that CNNs can learn spatio-temporal correlations efficiently by combining convolution and data sharing approaches.
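
The data-consistency idea behind such cascades can be written in a few lines. Unlike the hard re-insertion used in the DuDoRNet sketch above, the sketch below blends the CNN's k-space prediction with the measurement at sampled locations; the weighting term lam is an assumed stand-in for a noise-level parameter, and hard replacement is recovered as lam grows large. It is illustrative, not the paper's exact layer.

import torch

def soft_data_consistency(x_cnn, k_measured, mask, lam=10.0):
    """Blend predicted and measured k-space at sampled locations, then return
    to the image domain; unsampled locations keep the CNN prediction."""
    k_cnn = torch.fft.fft2(x_cnn)
    blended = (k_cnn + lam * k_measured) / (1.0 + lam)
    k_out = mask * blended + (1 - mask) * k_cnn
    return torch.fft.ifft2(k_out).real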

Multicontrast MRI Reconstruction with Structure-Guided Total Variation

TLDR
Two modifications of total variation are discussed that take structural a priori knowledge into account and reduce to total variation in the degenerate case when no structural knowledge is available; exploiting the two-dimensional directional information results in images with well-defined edges.
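
One common way to encode such structural a priori knowledge is a directional total variation, in which gradients of the target image are penalized after projecting out the component aligned with the reference image's edges, so that the penalty reduces to plain TV wherever the reference is flat. The sketch below is one such formulation with assumed names, smoothing constant, and weighting; it is not necessarily the exact functional used in the paper.

import torch

def directional_tv(u, reference, eta=1e-2):
    """Directional TV of u guided by the edges of a reference image.
    u, reference: tensors of shape (B, 1, H, W)."""
    def grad(img):
        gx = img[..., :, 1:] - img[..., :, :-1]   # horizontal differences
        gy = img[..., 1:, :] - img[..., :-1, :]   # vertical differences
        return gx[..., :-1, :], gy[..., :, :-1]   # crop to a common shape

    ux, uy = grad(u)
    rx, ry = grad(reference)
    norm = torch.sqrt(rx ** 2 + ry ** 2 + eta ** 2)
    xi_x, xi_y = rx / norm, ry / norm             # reference edge direction field
    dot = ux * xi_x + uy * xi_y
    px, py = ux - dot * xi_x, uy - dot * xi_y     # project out the aligned part
    return torch.sqrt(px ** 2 + py ** 2 + 1e-12).sum()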