Field of View Extension in Computed Tomography Using Deep Learning Prior

Yixing Huang, Lei Gao, Alexander Preuhs, A. Maier
In computed tomography (CT), data truncation is a common problem. Images reconstructed by the standard filtered back-projection algorithm from truncated data suffer from cupping artifacts inside the field-of-view (FOV), while anatomical structures are severely distorted or missing outside the FOV. Deep learning, particularly the U-Net, has been applied to extend the FOV as a post-processing method. Since image-to-image prediction neglects the data fidelity to measured projection data, incorrect…
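The data-fidelity concern raised here can be illustrated with a minimal numpy sketch (a toy, not any cited paper's pipeline): a column sum stands in for a single parallel-beam forward projection, a noisy image stands in for a U-Net prediction, and the measured detector channels inside the FOV overwrite the re-projected ones. All names here are hypothetical.

```python
import numpy as np

# Toy parallel-beam forward projection for one 0-degree view:
# summing image columns stands in for the line integrals.
def forward_project(img):
    return img.sum(axis=0)

rng = np.random.default_rng(0)
truth = rng.random((16, 16))          # ground-truth object
measured = forward_project(truth)     # one fully measured view

fov = slice(4, 12)                    # detector channels inside the FOV
# Stand-in for a network prediction: the truth plus some error.
prediction = truth + 0.1 * rng.standard_normal(truth.shape)

# Data-fidelity step: re-project the prediction, then overwrite the
# channels inside the FOV with the actually measured values.
reproj = forward_project(prediction)
consistent = reproj.copy()
consistent[fov] = measured[fov]
```

A pure image-to-image method stops at `prediction`; the point of data-consistent approaches is that, inside the measured region, the final result should agree with `measured` exactly.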
Data Consistent CT Reconstruction from Insufficient Data with Learned Prior Images
This work investigates the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases, and proposes a data consistent reconstruction (DCR) method, which combines the advantages of compressed sensing and deep learning.
CT local reconstruction method based on truncated data extrapolation network
Experimental results show that the proposed method based on a truncated data extrapolation network effectively suppresses ring artifacts; compared with images reconstructed directly from truncated projection data, the RMSE is reduced by an average of 43.185% and the NMAD by 44.24%.
Evaluation of novel AI‐based extended field‐of‐view CT reconstructions
A new deep learning based algorithm for extended field-of-view (eFoV) reconstruction is presented and its accuracy evaluated with a focus on aspects relevant for radiotherapy, showing that the approach produces images that look more realistic and have fewer artefacts.
Fiducial marker recovery and detection from severely truncated data in navigation assisted spine surgery
Fiducial markers are commonly used in navigation assisted minimally invasive spine surgery (MISS) and they help transfer image coordinates into real world coordinates. In practice, these markers…


Learning to Reconstruct Computed Tomography Images Directly From Sinogram Data Under A Variety of Data Acquisition Conditions
A deep learning method with a common network architecture, termed iCT-Net, was developed and trained to reconstruct images for previously solved and unsolved CT reconstruction problems with high quantitative accuracy; accurate reconstructions were achieved even when the sparse-view reconstruction problem is entangled with the classical interior tomography problem.
Data Consistent Artifact Reduction for Limited Angle Tomography with Deep Learning Prior
A data consistent artifact reduction (DCAR) method is introduced that achieves significant image quality improvement: for 120-degree cone-beam limited angle tomography, more than 10% RMSE reduction in the noise-free case and more than 24% RMSE reduction in the noisy case compared with a state-of-the-art U-Net based method.
Deep Learning Computed Tomography: Learning Projection-Domain Weights From Image Domain in Limited Angle Problems
A new type of cone-beam back-projection layer is proposed, together with an efficient computation of its forward pass, and it is shown that the learned algorithm can be interpreted using known concepts from cone-beam reconstruction: the network automatically learns strategies such as compensation weights and apodization windows.
A novel reconstruction algorithm to extend the CT scan field-of-view.
A reconstruction algorithm is proposed that enables an adequate estimation of the projection outside the scan field-of-view (SFOV) and makes use of the fact that the total attenuation of each ideal projection in a parallel sampling geometry remains constant over views.
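The invariant this algorithm exploits, constant total attenuation across views in a parallel geometry, can be checked in a few lines of numpy (a toy illustration, not the paper's method): at 0 and 90 degrees, parallel projections reduce to column and row sums of the image, and both sum to the same total.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32))   # arbitrary attenuation map

# Parallel projections at 0 and 90 degrees: column and row sums.
p0 = img.sum(axis=0)
p90 = img.sum(axis=1)

# Total attenuation (sum over all detector channels) is view-independent:
# both equal the sum over the whole image.
assert np.isclose(p0.sum(), p90.sum())
```

Truncation removes outer detector channels from a view, so its total falls short of this invariant; that deficit is what constrains the extrapolation.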
One Network to Solve All ROIs: Deep Learning CT for Any ROI using Differentiated Backprojection
Experimental results show that the new type of neural networks significantly outperform existing iterative methods for all ROI sizes despite significantly lower runtime complexity, and can be used as a general CT reconstruction engine for many practical applications.
Some Investigations on Robustness of Deep Learning in Limited Angle Tomography
This paper investigates whether some perturbations or noise will mislead a neural network to fail to detect an existing lesion, and demonstrates that the trained neural network, specifically the U-Net, is sensitive to Poisson noise.
CT Field of View Extension Using Combined Channels Extension and Deep Learning Methods.
A method is presented to extend the field of view of computed tomography images by linearly extrapolating the outer channels in sinogram space and reducing the artifacts caused by the channel extension with a deep learning network in image space.
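The sinogram-domain half of such a scheme, linear extrapolation of the outer detector channels, can be sketched as follows. This is an illustrative assumption, not the paper's implementation: `extend_channels` and its two-outermost-channels slope rule are made up for the sketch, and the subsequent deep learning artifact reduction is omitted entirely.

```python
import numpy as np

def extend_channels(sino, n_ext):
    """Extend a (views x channels) sinogram by n_ext channels per side,
    extrapolating linearly from the slope of the two outermost
    measured channels of each view."""
    steps = np.arange(1, n_ext + 1)
    left_slope = sino[:, 1] - sino[:, 0]
    right_slope = sino[:, -1] - sino[:, -2]
    # Continue each edge outward along its local slope.
    left = sino[:, :1] - np.outer(left_slope, steps[::-1])
    right = sino[:, -1:] + np.outer(right_slope, steps)
    return np.concatenate([left, sino, right], axis=1)
```

On a sinogram whose values are linear in the channel index, the extension continues that line exactly; real implementations would additionally clamp extrapolated attenuation values at zero and smooth them toward the edge.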
LEARN: Learned Experts’ Assessment-Based Reconstruction Network for Sparse-Data CT
This paper unfolds the state-of-the-art “fields of experts”-based iterative reconstruction scheme up to a fixed number of iterations for data-driven training, constructs a learned experts’ assessment-based reconstruction network (LEARN) for sparse-data CT, and demonstrates the feasibility and merits of the LEARN network.
Towards Clinical Application of a Laplace Operator-Based Region of Interest Reconstruction Algorithm in C-Arm CT
Two variants of the original approximated truncation robust algorithm for computed tomography (ATRACT) are presented: the first expresses the residual filter as an efficient 2-D convolution with an analytically derived kernel, and the second applies ATRACT in 1-D to further reduce computational complexity.
Low-Dose CT Image Denoising Using a Generative Adversarial Network With Wasserstein Distance and Perceptual Loss
This paper introduces a new CT image denoising method based on the generative adversarial network (GAN) with Wasserstein distance and perceptual similarity that is capable of not only reducing the image noise level but also trying to keep the critical information at the same time.