Generalized Organ Segmentation by Imitating One-shot Reasoning using Anatomical Correlation

Hong-Yu Zhou, Hualuo Liu, Shilei Cao, Dong Wei, Chi-Ken Lu, Yizhou Yu, Kai Ma, Yefeng Zheng
Learning by imitation is one of the most significant human abilities and plays a vital role in the human neural system. In medical image analysis, given several exemplars (anchors), an experienced radiologist can delineate unfamiliar organs by imitating the reasoning process learned from existing organ types. Inspired by this observation, we propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers…
Exemplar Learning for Medical Image Segmentation
An Exemplar Learning-based Synthesis Net (ELSNet) framework for medical image segmentation is proposed, enabling exemplar-based data synthesis, pixel-prototype-based contrastive embedding learning, and pseudo-label-based exploitation of unlabeled data.
Preservational Learning Improves Self-supervised Medical Image Models by Reconstructing Diverse Contexts
This work introduces Preservational Learning, which reconstructs diverse image contexts in order to preserve more information in the learned representations; under the pretraining-finetuning protocol it substantially outperforms both self-supervised and supervised counterparts on five classification/segmentation tasks.
Learning to Segment Anatomical Structures Accurately from One Exemplar
This work proposes the Contour Transformer Network (CTN), a one-shot anatomy segmentor with a naturally built-in human-in-the-loop mechanism; it significantly outperforms non-learning-based methods and performs competitively with state-of-the-art fully supervised deep learning approaches.
Multiorgan segmentation using distance-aware adversarial networks
This work proposes a framework for automatic segmentation of multiple organs at risk (OARs): the esophagus, heart, trachea, and aorta. It is based on an original distance map that encodes not only the localization of each organ but also the spatial relationships between them.
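A minimal sketch of the distance-map idea behind such approaches: a signed distance map derived from a binary organ mask encodes how far each voxel is from the organ boundary, inside and out. The toy mask and all names here are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import ndimage

# Hypothetical toy mask: a 2D binary segmentation of one organ.
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True

# Signed distance map: negative inside the organ, positive outside.
# Distance maps of this kind give a network localization cues and
# spatial context beyond the raw binary labels.
dist_out = ndimage.distance_transform_edt(~mask)  # distance to organ, outside
dist_in = ndimage.distance_transform_edt(mask)    # distance to background, inside
signed_dist = dist_out - dist_in
```

The sign convention (negative inside) is one common choice; the paper's actual formulation may differ.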
LT-Net: Label Transfer by Learning Reversible Voxel-Wise Correspondence for One-Shot Medical Image Segmentation
A one-shot segmentation method that alleviates the burden of manual annotation for medical images by resorting to forward-backward consistency, widely used in correspondence problems, and additionally learning the backward correspondences from the warped atlases back to the original atlas.
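The forward-backward consistency idea can be sketched in one dimension: composing a forward displacement with the backward displacement sampled at the warped positions should return every point to itself, and the residual is the consistency error. The constant shift and all names below are illustrative.

```python
import numpy as np

# Toy 1D grid and a pair of forward/backward displacement fields.
grid = np.arange(10, dtype=float)
fwd = np.full_like(grid, 2.0)    # forward displacement: shift by +2
bwd = np.full_like(grid, -2.0)   # backward displacement: shift by -2

# Warp forward, then sample the backward field at the warped positions.
warped = grid + fwd
round_trip = warped + np.interp(warped, grid, bwd)

# Cycle-consistency error: zero when backward exactly inverts forward.
consistency_error = np.abs(round_trip - grid).mean()
```

In the one-shot setting, a loss of this form supervises the backward correspondence without extra annotations.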
Spatial Warping Network for 3D Segmentation of the Hippocampus in MR Images
A convolutional neural network for structural segmentation based on deformation of an example mask. Applied to the hippocampus, it outperforms other segmentation methods and is disease-state agnostic, remaining consistent across disease states independent of the degree of disease-related atrophy.
Data Augmentation Using Learned Transformations for One-Shot Medical Image Segmentation
This work learns a model of transformations from the images and uses it, along with the labeled example, to synthesize additional labeled examples, capturing complex effects such as variations in anatomy and image acquisition procedures.
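The core mechanism can be sketched as follows: a spatial transform is applied identically to an atlas image and its label map, yielding a new synthetic labeled pair. Here a smoothed random displacement field stands in for the *learned* transform (the paper learns it from unlabeled images); all shapes and parameters are illustrative.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Hypothetical labeled atlas: image intensities plus an integer label map.
atlas_img = rng.random((16, 16))
atlas_lbl = np.zeros((16, 16), dtype=int)
atlas_lbl[4:12, 4:12] = 1

# Smooth random displacement field standing in for a learned transform.
disp = ndimage.gaussian_filter(rng.standard_normal((2, 16, 16)), sigma=3) * 5
yy, xx = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
coords = np.stack([yy + disp[0], xx + disp[1]])

# Warp image (linear) and label (nearest) with the SAME field, producing
# a new synthetic training pair for one-shot segmentation.
synth_img = ndimage.map_coordinates(atlas_img, coords, order=1, mode="nearest")
synth_lbl = ndimage.map_coordinates(atlas_lbl, coords, order=0, mode="nearest")
```

Nearest-neighbor interpolation for the label map keeps label values discrete; the method also learns appearance transforms, which this sketch omits.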
CompareNet: Anatomical Segmentation Network with Deep Non-local Label Fusion
This work proposes CompareNet, a deep framework for label propagation based on non-local label fusion, which incorporates subnets both for extracting discriminative features and for learning the similarity measure, leading to accurate segmentation.
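The label-fusion step can be sketched as similarity-weighted voting: candidate atlas labels are fused using weights derived from feature similarity to the target. The random features, the softmax-over-negative-distance similarity, and all names below are illustrative, not the paper's learned similarity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: one target patch and five candidate atlas patches.
target_feat = rng.random(8)               # feature vector of target patch
atlas_feats = rng.random((5, 8))          # features of 5 atlas patches
atlas_labels = np.array([0, 1, 1, 0, 1])  # their (binary) labels

# Similarity weights: softmax over negative squared feature distances.
d2 = ((atlas_feats - target_feat) ** 2).sum(axis=1)
w = np.exp(-d2)
w /= w.sum()

# Fused label: similarity-weighted vote over candidate labels.
fused = int((w * atlas_labels).sum() > 0.5)
```

CompareNet replaces the fixed distance here with a learned similarity subnet; the fusion rule itself stays this simple.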
Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks
It is concluded that deep-learning-based segmentation offers a registration-free method for multi-organ abdominal CT segmentation whose accuracy can surpass current methods, potentially supporting image-guided navigation in gastrointestinal endoscopy procedures.
Joint Deep Learning of Foreground, Background and Shape for Robust Contextual Segmentation
This work proposes a hybrid generative model of image formation that jointly learns the triad of foreground (F), background (B), and shape (S), carrying over the advantages of FCNs in capturing context.
'Squeeze & Excite' Guided Few-Shot Segmentation of Volumetric Images
3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation
The proposed network extends the previous u-net architecture from Ronneberger et al. by replacing all 2D operations with their 3D counterparts and performs on-the-fly elastic deformations for efficient data augmentation during training.
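The on-the-fly elastic deformation used for augmentation can be sketched as: smooth a random displacement field, scale it, and resample the volume along the displaced grid. The volume size, smoothing sigma, and scale below are illustrative, not the paper's settings.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
volume = rng.random((12, 12, 12))  # toy 3D volume; real inputs are CT/MR patches

# Elastic deformation: one smoothed random displacement field per axis.
shape = volume.shape
disp = [ndimage.gaussian_filter(rng.standard_normal(shape), sigma=4) * 3
        for _ in range(3)]

# Displaced sampling grid, then trilinear resampling of the volume.
grids = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
coords = np.stack([g + d for g, d in zip(grids, disp)])
deformed = ndimage.map_coordinates(volume, coords, order=1, mode="reflect")
```

In training, a fresh field is drawn per batch so the network never sees the same deformation twice; any segmentation labels would be warped with the same field using nearest-neighbor interpolation.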