3D PET image generation with tumour masks using TGAN

@inproceedings{Bergen20213DPI,
  title={3D PET image generation with tumour masks using TGAN},
  author={Robert V Bergen and Jean-François Rajotte and Fereshteh Yousefirizi and Ivan S. Klyuzhin and Arman Rahmim and Raymond T. Ng},
  booktitle={Medical Imaging},
  year={2021}
}
Training computer-vision-related algorithms on medical images for disease diagnosis or image segmentation is difficult due to a lack of training data and labeled samples, as well as privacy concerns. For this reason, a robust generative method to create synthetic data is highly sought after. However, most three-dimensional image generators require additional image input or are extremely memory intensive. To address these issues we propose adapting video generation techniques for 3-D image generation…
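
To make the adapted video-generation idea concrete, here is a minimal PyTorch sketch, assuming a TGAN-style split into a temporal generator (one latent vector expanded into a sequence of per-slice latents) and a slice generator (each latent rendered as a 2-D slice, with slices stacked into a volume). Layer sizes, latent dimension, and slice count are illustrative assumptions, not the authors' configuration.

import torch
import torch.nn as nn

class TemporalGenerator(nn.Module):
    """Expands one latent vector z0 into a sequence of per-slice latents z1[t]."""
    def __init__(self, z_dim=100):
        super().__init__()
        # 1-D transposed convolutions grow the slice axis from 1 to 64 steps.
        self.net = nn.Sequential(
            nn.ConvTranspose1d(z_dim, 512, 4, stride=1),            # 1 -> 4
            nn.BatchNorm1d(512), nn.ReLU(True),
            nn.ConvTranspose1d(512, 256, 4, stride=2, padding=1),   # 4 -> 8
            nn.BatchNorm1d(256), nn.ReLU(True),
            nn.ConvTranspose1d(256, 128, 4, stride=2, padding=1),   # 8 -> 16
            nn.BatchNorm1d(128), nn.ReLU(True),
            nn.ConvTranspose1d(128, z_dim, 4, stride=4),            # 16 -> 64
            nn.Tanh(),
        )

    def forward(self, z0):                    # z0: (B, z_dim)
        return self.net(z0.unsqueeze(-1))     # (B, z_dim, 64)

class SliceGenerator(nn.Module):
    """Renders one 2-D slice from the global latent z0 and a per-slice latent z1[t]."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.fc = nn.Linear(2 * z_dim, 256 * 4 * 4)
        self.net = nn.Sequential(
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),   # 4 -> 8
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),    # 8 -> 16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),     # 16 -> 32
            nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1),      # 32 -> 64
            nn.Tanh(),
        )

    def forward(self, z0, z1_t):              # each (B, z_dim)
        h = self.fc(torch.cat([z0, z1_t], dim=1)).view(-1, 256, 4, 4)
        return self.net(h)                    # (B, 1, 64, 64)

def generate_volume(temporal_gen, slice_gen, z0):
    """Stacks generated slices along the axial axis into a 3-D volume."""
    z1 = temporal_gen(z0)                                             # (B, z_dim, n_slices)
    slices = [slice_gen(z0, z1[:, :, t]) for t in range(z1.shape[-1])]
    return torch.stack(slices, dim=2)                                 # (B, 1, n_slices, 64, 64)

Because only per-slice latents and 2-D convolutions are involved, memory use grows with slice count rather than with a full 3-D feature pyramid, which is the appeal of treating the axial axis like the time axis of a video.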

Assessing Privacy Leakage in Synthetic 3-D PET Imaging using Transversal GAN

It is shown that the discriminator of the TrGAN is vulnerable to attack, and that an attacker can identify which samples were used in training with almost perfect accuracy, suggesting that TrGAN generators, but not discriminators, may be used for sharing synthetic 3-D PET data with minimal privacy risk while maintaining good utility and fidelity.
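
The attack surface described above can be illustrated with a short, hedged sketch of a LOGAN-style membership-inference attack. It assumes white-box access to a trained discriminator `D` that returns a realism score per volume; the paper's exact attack setup may differ. Training-set members tend to receive higher scores, so ranking or thresholding those scores separates members from non-members.

import torch

@torch.no_grad()
def membership_scores(D, candidates):
    """Discriminator realism score for each candidate 3-D volume."""
    return torch.cat([D(x.unsqueeze(0)).flatten() for x in candidates])

def infer_members(D, candidates, threshold):
    """Flags candidates scoring above a threshold as suspected training members;
    the threshold is something the attacker tunes, e.g. on known non-members."""
    return membership_scores(D, candidates) > threshold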

References

Medical Image Synthesis with Context-Aware Generative Adversarial Networks

A fully convolutional network is trained to generate CT from the MR image, to better model the nonlinear mapping from MRI to CT and produce more realistic images, and an image-gradient-difference-based loss function is proposed to alleviate the blurriness of the generated CT.
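
As an illustration of the loss described above, here is a minimal, hedged sketch of an image-gradient-difference term (the cited work's exact formulation may differ): it compares finite-difference gradients of the synthesized and real CT, penalizing the blurred edges that a plain voxel-wise loss tolerates.

import torch

def gradient_difference_loss(fake, real):
    """fake, real: (B, C, H, W) tensors; returns a scalar loss."""
    # Finite-difference gradients along each spatial axis.
    fake_dy = fake[:, :, 1:, :] - fake[:, :, :-1, :]
    real_dy = real[:, :, 1:, :] - real[:, :, :-1, :]
    fake_dx = fake[:, :, :, 1:] - fake[:, :, :, :-1]
    real_dx = real[:, :, :, 1:] - real[:, :, :, :-1]
    # Penalize mismatches in gradient magnitude between fake and real images.
    return ((fake_dy.abs() - real_dy.abs()).abs().mean()
            + (fake_dx.abs() - real_dx.abs()).abs().mean())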

Squeeze-and-Excitation Normalization for Automated Delineation of Head and Neck Primary Tumors in Combined PET and CT Images

An automated approach is presented for Head and Neck (H&N) primary tumor segmentation in combined positron emission tomography/computed tomography (PET/CT) images, in the context of the MICCAI 2020 Head and Neck Tumor segmentation challenge (HECKTOR).
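
A hedged sketch of a Squeeze-and-Excitation-style normalization layer in the spirit of the cited approach (layer sizes and gating details are illustrative assumptions, not the authors' exact design): instance normalization whose per-channel scale and shift are predicted from globally pooled features.

import torch
import torch.nn as nn

class SENorm3d(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        self.norm = nn.InstanceNorm3d(channels, affine=False)
        self.squeeze = nn.Sequential(             # global context per channel
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(True),
        )
        self.to_scale = nn.Linear(channels // reduction, channels)
        self.to_shift = nn.Linear(channels // reduction, channels)

    def forward(self, x):                         # x: (B, C, D, H, W)
        s = self.squeeze(x)
        scale = torch.sigmoid(self.to_scale(s))[:, :, None, None, None]
        shift = torch.tanh(self.to_shift(s))[:, :, None, None, None]
        return scale * self.norm(x) + shift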

Synthesis of Positron Emission Tomography (PET) Images via Multi-channel Generative Adversarial Networks (GANs)

Results on 50 lung cancer PET-CT studies indicate that the proposed multi-channel generative adversarial network (M-GAN) based PET image synthesis method produced images much closer to the real PET images than the existing methods.
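
The multi-channel conditioning idea can be sketched as follows (a hedged illustration, not the M-GAN architecture itself): the conditioning inputs, assumed here to be a CT slice and a tumour label map, are stacked as input channels so a single generator sees both at once when synthesizing the PET image.

import torch
import torch.nn as nn

class MultiChannelGenerator(nn.Module):
    """Toy generator consuming stacked conditioning channels
    (e.g. CT slice + tumour label map; an illustrative assumption)."""
    def __init__(self, in_channels=2, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(base, 1, 3, padding=1),        # synthetic PET slice
        )

    def forward(self, ct, label):                    # each (B, 1, H, W)
        return self.net(torch.cat([ct, label], dim=1))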

Molecular Imaging, Reconstruction and Analysis of Moving Body Organs, and Stroke Imaging and Treatment

This work proposes a new approach that combines a multi-atlas segmentation of the CT with a Conditional Random Field (CRF) segmentation method in PET; tested on ten patients, the combined approach achieves the best performance.

Medical Image Synthesis via Deep Learning.

This chapter focuses on introducing typical CNN and GAN models for medical image synthesis, and elaborates on recent work on low-dose-to-high-dose PET image synthesis and cross-modality MR image synthesis using these models.

Medical Image Synthesis for Data Augmentation and Anonymization using Generative Adversarial Networks

This work proposes a method to generate synthetic abnormal MRI images with brain tumors by training a generative adversarial network using two publicly available data sets of brain MRI, and demonstrates the value of generative models as an anonymization tool.

Challenges and Promises of PET Radiomics

Multi-Institutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation

This study introduces the first use of federated learning for multi-institutional collaboration, enabling deep learning modeling without sharing patient data, and demonstrates that the performance of federated semantic segmentation models on multimodal brain scans is similar to that of models trained by sharing data.
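
A hedged sketch of the federated pattern described above, along the lines of FedAvg-style weight averaging (the study's actual aggregation protocol may differ): each institution trains a local copy of the model on its own data, and only model weights are pooled and averaged, never patient images.

import copy
import torch

def federated_round(global_model, site_loaders, local_train_fn):
    """One communication round: local training at every site, then weight averaging."""
    states, sizes = [], []
    for loader in site_loaders:                      # one DataLoader per institution
        local_model = copy.deepcopy(global_model)
        local_train_fn(local_model, loader)          # trains in place on local data only
        states.append(local_model.state_dict())
        sizes.append(len(loader.dataset))

    # Weighted average of parameters, proportional to each site's data size.
    total = float(sum(sizes))
    averaged = {
        key: sum((n / total) * s[key].float() for n, s in zip(sizes, states))
        for key in states[0]
    }
    global_model.load_state_dict(averaged)           # patient data never leaves a site
    return global_model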

Deep Learning for Medical Image Analysis

Different novel deep-learning-based methods for brain abnormality detection, recognition, and segmentation in medical images are explored.