Corpus ID: 226226620

Two-layer clustering-based sparsifying transform learning for low-dose CT reconstruction

@article{Yang2020TwolayerCS,
  title={Two-layer clustering-based sparsifying transform learning for low-dose CT reconstruction},
  author={Xikai Yang and Yong Long and Saiprasad Ravishankar},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.00428}
}
Achieving high-quality reconstructions from low-dose computed tomography (LDCT) measurements is of much importance in clinical settings. Model-based image reconstruction methods have been proven to be effective in removing artifacts in LDCT. In this work, we propose an approach to learn a rich two-layer clustering-based sparsifying transform model (MCST2), where image patches and their subsequent feature maps (filter residuals) are clustered into groups with different learned sparsifying… 
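To make the two-layer clustering idea concrete, here is a minimal numpy sketch of the kind of model the abstract describes: patches are clustered among several candidate sparsifying transforms in a first layer, and the transform-domain residuals are then clustered and sparsified again in a second layer. All names, dimensions, and the hard-threshold sparsification rule are illustrative assumptions, not the paper's actual algorithm or learned transforms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: n patches of dimension d, K clusters per layer,
# hard-threshold level `thresh` (all values assumed for illustration).
n, d, K, thresh = 200, 16, 2, 0.5
patches = rng.standard_normal((n, d))

# Layer 1: one (randomly initialized) square transform per cluster.
W1 = [np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(K)]

def sparsify(x, t):
    """Hard-threshold transform coefficients (the sparsifying step)."""
    return x * (np.abs(x) >= t)

def assign(X, Ws, t):
    """Assign each row of X to the transform with the smallest sparsification residual."""
    errs = np.stack([np.sum((W @ X.T - sparsify(W @ X.T, t)) ** 2, axis=0) for W in Ws])
    return np.argmin(errs, axis=0)

c1 = assign(patches, W1, thresh)

# The layer-1 transform-domain residuals (feature maps) become the layer-2 input.
res = np.vstack([
    (W1[c1[i]] @ patches[i]) - sparsify(W1[c1[i]] @ patches[i], thresh)
    for i in range(n)
])

# Layer 2: cluster the residual maps with a second set of transforms and sparsify again.
W2 = [np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(K)]
c2 = assign(res, W2, thresh)
```

In the actual MCST2 model both the transforms and the cluster assignments are learned jointly and used as a regularizer inside the CT reconstruction problem; the sketch above only shows the forward clustering/sparsification structure.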

Model-based Reconstruction with Learning: From Unsupervised to Supervised and Beyond
TLDR
This review covers many recent methods based on unsupervised and supervised learning, as well as a framework for combining multiple types of learned models to recover high-quality images from limited or corrupted measurements.

References

Learned Multi-layer Residual Sparsifying Transform Model for Low-dose CT Reconstruction
TLDR
Experimental results on Mayo Clinic data show that the MRST model outperforms conventional methods such as FBP and PWLS methods based on an edge-preserving (EP) regularizer and a single-layer transform (ST) model, especially in maintaining subtle details.
PWLS-ULTRA: An Efficient Clustering and Learning-Based Approach for Low-Dose 3D CT Image Reconstruction
TLDR
Simulations with 2-D and 3-D axial CT scans of the extended cardiac-torso phantom and 3-D helical chest and abdomen scans show that the proposed PWLS with regularization based on a union of learned transforms leads to better image reconstructions than using a single learned square transform.
Two-Layer Residual Sparsifying Transform Learning for Image Reconstruction
TLDR
This work pre-learns a two-layer extension of the transform model for image reconstruction, wherein the transform-domain or filtering residuals of the image are further sparsified in the second layer; the proposed block coordinate descent optimization algorithms involve highly efficient updates.
Multi-layer Residual Sparsifying Transform (MARS) Model for Low-dose CT Image Reconstruction
TLDR
Low-dose CT experimental results show that the MARS model outperforms conventional methods such as FBP and PWLS methods based on the edge-preserving (EP) regularizer and the single-layer learned transform (ST) model, especially in terms of reducing noise and maintaining some subtle details.
Learning Multi-Layer Transform Models
  • S. Ravishankar, B. Wohlberg
  • Computer Science
    2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
  • 2018
TLDR
This work considers multi-layer or nested extensions of the transform model and proposes efficient algorithms for learning them, an approach that enjoys a number of advantages for sparsifying transform learning.
Relaxed Linearized Algorithms for Faster X-Ray CT Image Reconstruction
TLDR
Experimental results with both simulated and real CT scan data show that the proposed relaxed algorithm (with ordered-subsets [OS] acceleration) is about twice as fast as the existing unrelaxed fast algorithms, with negligible computation and memory overhead.
Regularization Designs for Uniform Spatial Resolution and Noise Properties in Statistical Image Reconstruction for 3-D X-ray CT
TLDR
This paper extends the regularization method proposed in [1] to 3-D cone-beam CT by introducing a hypothetical scanning geometry that helps address the sampling properties, and compares it with the original method using both phantom simulation and clinical reconstruction in 3-D axial X-ray CT.
$\ell_{0}$ Sparsifying Transform Learning With Efficient Optimal Updates and Convergence Guarantees
TLDR
This paper presents results illustrating the promising performance and significant speed-ups of transform learning over synthesis K-SVD in image denoising, and establishes that the alternating algorithms are globally convergent to the set of local minimizers of the nonconvex transform learning problems.
Learning Sparsifying Transforms
TLDR
This work proposes novel problem formulations for learning sparsifying transforms from data, along with alternating minimization algorithms that give rise to well-conditioned square transforms; the results show the superiority of this approach over analytical sparsifying transforms such as the DCT for signal and image representation.
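The alternating minimization scheme this line of work uses can be sketched in a few lines: alternate between a sparse-coding step (keep the largest-magnitude transform coefficients) and a transform-update step. The sketch below assumes an orthonormal-transform constraint, under which the transform update reduces to an orthogonal Procrustes problem solved by an SVD; the cited paper instead learns well-conditioned (not necessarily orthonormal) transforms with its own closed-form update, so treat the dimensions, sparsity level, and constraint here as illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, s = 8, 500, 3          # patch dim, #patches, nonzeros per patch (assumed values)
X = rng.standard_normal((d, n))

W = np.eye(d)                 # initialize the square transform as the identity
for _ in range(20):
    # Sparse coding: keep the s largest-magnitude transform coefficients per patch.
    Z = W @ X
    smallest = np.argsort(np.abs(Z), axis=0)[:-s, :]   # indices of the d-s smallest
    np.put_along_axis(Z, smallest, 0.0, axis=0)
    # Transform update (orthonormal case): orthogonal Procrustes via SVD of Z X^T,
    # minimizing ||W X - Z||_F over orthogonal W.
    U, _, Vt = np.linalg.svd(Z @ X.T)
    W = U @ Vt
```

Each step decreases the sparsification error ||W X - Z||_F for its own variable, which is the basic structure behind the convergence analyses in these papers.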
Modeling Mixed Poisson-Gaussian Noise in Statistical Image Reconstruction for X-Ray CT
Statistical image reconstruction (SIR) methods for X-ray CT improve the ability to produce high-quality and accurate images, while greatly reducing patient exposure to radiation. The challenge with