CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection

@article{Liu2023CLIPDrivenUM,
  title={CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection},
  author={Jie Liu and Yixiao Zhang and Jieneng Chen and Junfei Xiao and Yongyi Lu and Bennett A. Landman and Yixuan Yuan and Alan Loddon Yuille and Yucheng Tang and Zongwei Zhou},
  journal={ArXiv},
  year={2023},
  volume={abs/2301.00785}
}
An increasing number of public datasets have shown a marked impact on automated organ segmentation and tumor detection. However, due to the small size and partial labeling of each dataset, as well as the limited investigation of diverse tumor types, the resulting models are often limited to segmenting specific organs/tumors, ignore the semantics of anatomical structures, and cannot be extended to novel domains. To address these issues, we propose the CLIP-Driven Universal Model…

COSST: Multi-organ Segmentation with Partially Labeled Datasets Using Comprehensive Supervisions and Self-training

This paper proposes a novel training framework termed COSST, which effectively and efficiently integrates comprehensive supervision signals with self-training and demonstrates consistently superior performance on various segmentation tasks across different training data sizes.
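The self-training ingredient mentioned above is, at its core, pseudo-labeling: a model trained on the labeled portion of the data predicts labels for unlabeled voxels, and only confident predictions are kept as extra supervision. A minimal generic sketch (illustrative only, not COSST's actual pipeline; the function name, threshold, and toy dimensions are mine):

```python
import numpy as np

# Generic self-training sketch: keep only confident pseudo-labels
# produced by a teacher model's softmax output on unlabeled voxels.

def pseudo_label(probs, threshold=0.9):
    """probs: (C, N) softmax maps -> (labels, mask of confident voxels)."""
    conf = probs.max(axis=0)          # per-voxel max class probability
    labels = probs.argmax(axis=0)     # per-voxel predicted class
    return labels, conf >= threshold  # use the mask to gate the loss

# Toy example: 3 classes over 10 flattened voxels.
rng = np.random.default_rng(0)
logits = rng.standard_normal((3, 10)) * 3.0   # peaky logits
probs = np.exp(logits) / np.exp(logits).sum(axis=0)
labels, keep = pseudo_label(probs, threshold=0.9)
```

In practice the confident pseudo-labels are mixed back into the training targets for the next round of training.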

Label-Free Liver Tumor Segmentation

It is demonstrated that AI models can accurately segment liver tumors without the need for manual annotation by using synthetic tumors in CT scans, which implies that the manual effort of annotating tumors voxel by voxel can be significantly reduced in the future.

MultiTalent: A Multi-Dataset Approach to Medical Image Segmentation

It is shown that MultiTalent also represents a powerful foundation model that offers superior pre-training for various segmentation tasks compared to commonly used supervised or unsupervised pre-training baselines.

UniSeg: A Prompt-driven Universal Segmentation Model as well as A Strong Representation Learner

A prompt-driven Universal Segmentation model (UniSeg) for multi-task medical image segmentation using diverse modalities and domains is proposed, which outperforms other universal models and single-task models on 11 upstream tasks and beats other pre-trained models on two downstream datasets.

Annotating 8,000 Abdominal CT Volumes for Multi-Organ Segmentation in Three Weeks

This paper proposes a systematic and efficient method to expedite the annotation process for organ segmentation and creates the largest multi-organ dataset (by far) with the spleen, liver, kidneys, stomach, gallbladder, pancreas, aorta, and IVC annotated in 8,448 CT volumes.

Transductive few-shot adapters for medical image segmentation

The comprehensive experiments on a collection of public CT datasets for organ segmentation reveal the limitations of standard fine-tuning methods in few-shot scenarios, point to the potential of vision adapters and transductive inference, and confirm the suitability of foundation models.

Zero-shot performance of the Segment Anything Model (SAM) in 2D medical imaging: A comprehensive evaluation and practical guidelines

The findings reveal that SAM's zero-shot performance is not only comparable to, but in certain cases, surpasses the current state-of-the-art, and practical guidelines are proposed that require minimal interaction while consistently yielding robust outcomes across all assessed contexts.

MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images

MONAI Label is presented: a free and open-source framework that facilitates the development of applications based on artificial intelligence (AI) models aimed at reducing the time required to annotate radiology datasets.

SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection

It is shown that SQUID can taxonomize ingrained anatomical structures into recurrent patterns; at inference, it can identify anomalies in the image, surpassing 13 state-of-the-art methods in unsupervised anomaly detection.

Incremental Learning for Multi-organ Segmentation with Partially Labeled Datasets

This work makes the first attempt to conjecture that differing data distributions are the key reason for the 'catastrophic forgetting' that commonly exists in incremental learning (IL) methods, and verifies that IL adapts naturally to medical image scenarios.

DoDNet: Learning to Segment Multi-Organ and Tumors from Multiple Partially Labeled Datasets

DoDNet is a general 3D medical image segmentation model that is pre-trained on a large-scale partially labeled dataset and can be extended (after fine-tuning) to downstream volumetric medical data segmentation tasks.
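DoDNet's key mechanism is a dynamic segmentation head: a controller conditions on a one-hot task encoding so that a single shared backbone can serve many partially labeled tasks. A toy sketch of that idea (an illustrative reconstruction, not the authors' code; all names and dimensions here are mine):

```python
import numpy as np

# DoDNet-style dynamic head sketch: a controller maps pooled image
# features plus a one-hot task encoding to the parameters of a tiny
# per-task 1x1x1 conv head, so one backbone serves many tasks.

rng = np.random.default_rng(0)

NUM_TASKS = 7    # e.g. one task per partially labeled dataset
FEAT_DIM = 8     # channels of the shared decoder feature map
NUM_VOXELS = 16  # flattened spatial positions (toy size)

# Shared backbone features for one "volume" (FEAT_DIM x NUM_VOXELS).
features = rng.standard_normal((FEAT_DIM, NUM_VOXELS))

# Controller: one linear layer turning (pooled features, task one-hot)
# into the weights and bias of a 1x1x1 conv head (FEAT_DIM -> 1).
controller_W = rng.standard_normal((FEAT_DIM + NUM_TASKS, FEAT_DIM + 1))

def dynamic_head(features, task_id):
    """Predict a binary mask probability map for the requested task."""
    task_onehot = np.zeros(NUM_TASKS)
    task_onehot[task_id] = 1.0
    pooled = features.mean(axis=1)               # global average pool
    ctx = np.concatenate([pooled, task_onehot])  # condition on the task
    params = ctx @ controller_W                  # generated head params
    w, b = params[:FEAT_DIM], params[FEAT_DIM]
    logits = w @ features + b                    # 1x1x1 conv == matmul
    return 1.0 / (1.0 + np.exp(-logits))         # per-voxel probability

liver_probs = dynamic_head(features, task_id=0)
kidney_probs = dynamic_head(features, task_id=1)
```

The same features yield different predictions per task because the head's parameters are regenerated from the task encoding on every forward pass.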

CT-ORG, a new dataset for multiple organ segmentation in computed tomography

This work developed a diverse dataset of 140 CT scans containing six organ classes: liver, lungs, bladder, kidney, bones and brain, and trained a deep neural network which requires only 4.3 s to simultaneously segment all the organs in a case.

Towards a Single Unified Model for Effective Detection, Segmentation, and Diagnosis of Eight Major Cancers Using a Large Collection of CT Scans

A unified multi-cancer image reading model (UniT) can significantly reduce the number of false positives produced by combined multi-system models, and moves one step closer towards a universal high-performance cancer screening tool.

Multi-organ Segmentation via Co-training Weight-averaged Models from Few-organ Datasets

This paper collaboratively trains two networks and lets the coupled networks teach each other on unannotated organs, co-training weight-averaged models to learn a unified multi-organ segmentation network from few-organ datasets while alleviating noisy teaching supervision.

AMOS: A Large-Scale Abdominal Multi-Organ Benchmark for Versatile Medical Image Segmentation

AMOS is presented: a large-scale, diverse, clinical dataset for abdominal organ segmentation that provides challenging examples and a test-bed for studying robust segmentation algorithms under diverse targets and scenarios, and benchmarks several state-of-the-art medical segmentation models.

Universal Lesion Detection by Learning from Multiple Heterogeneously Labeled Datasets

This work learns a multi-head multi-task lesion detector using all datasets and generates lesion proposals on DeepLesion, and discovers suspicious but unannotated lesions using knowledge transfer from single-type lesion detectors.

Universal Lesion Detector : Deep Learning for Analysing Medical Scans

This work redesigns RetinaNet to be more applicable to medical imaging, using a general approach for optimising anchor configurations and generating additional weak labels from the provided ground truth, and proposes approaches that are not limited to a particular dataset.

Marginal loss and exclusion loss for partially supervised multi-organ segmentation

...
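The marginal-loss idea named in the title above can be sketched as follows: when a dataset annotates only some organs, the softmax probabilities of all unlabeled classes are merged into the background class before computing cross-entropy, so the model is never penalized for predicting an organ the dataset did not label. This is an illustrative reconstruction, not the paper's code; the function name and toy setup are mine.

```python
import numpy as np

# Marginal cross-entropy sketch for partially supervised segmentation:
# fold the probability mass of unlabeled organ classes into background
# before taking the negative log-likelihood.

def marginal_cross_entropy(probs, target, labeled_classes):
    """probs: (C, N) softmax maps; target: (N,) labels from the labeled set."""
    C, N = probs.shape
    unlabeled = [c for c in range(C) if c not in labeled_classes and c != 0]
    merged = probs.copy()
    merged[0] += merged[unlabeled].sum(axis=0)  # fold into background
    merged[unlabeled] = 0.0
    return -np.log(merged[target, np.arange(N)] + 1e-12).mean()

# Toy example: 4 classes (0=background, 1..3=organs) over 5 voxels,
# with only class 1 annotated in this dataset.
rng = np.random.default_rng(1)
logits = rng.standard_normal((4, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=0)
target = np.array([0, 1, 0, 1, 0])
loss = marginal_cross_entropy(probs, target, labeled_classes={1})
```

Because unlabeled-class mass is added to background, the marginal loss on background voxels is never larger than plain cross-entropy would be.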