3D Segmentation Learning From Sparse Annotations and Hierarchical Descriptors

@article{Yin20213DSL,
  title={3D Segmentation Learning From Sparse Annotations and Hierarchical Descriptors},
  author={Peng Yin and Lingyun Xu and Jianmin Ji and Sebastian A. Scherer and Howie Choset},
  journal={IEEE Robotics and Automation Letters},
  year={2021},
  volume={6},
  pages={5953-5960}
}
One of the main obstacles to 3D semantic segmentation is the significant effort required to generate expensive point-wise annotations for fully supervised training. To reduce this manual effort, we propose GIDSeg, a novel approach that learns segmentation from sparse annotations by jointly reasoning about global-regional structures and individual-vicinal properties. GIDSeg captures global- and individual-level relations via a dynamic edge convolution network coupled with a kernelized…
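The abstract (truncated here) mentions a dynamic edge convolution network for relating global and individual structure. The snippet below is only a rough sketch of a standard EdgeConv-style layer (k-NN graph rebuilt from the current features, edge features through a shared MLP, max aggregation); the `knn`/`EdgeConv` names, layer widths, and `k` are illustrative assumptions, not GIDSeg's actual modules.

```python
# Minimal EdgeConv-style (dynamic edge convolution) layer; a sketch only,
# not GIDSeg's global/individual reasoning modules.
import torch
import torch.nn as nn


def knn(x, k):
    # x: (B, N, C) point features -> indices of the k nearest neighbours, (B, N, k)
    dist = torch.cdist(x, x)                                   # pairwise distances
    return dist.topk(k + 1, largest=False).indices[:, :, 1:]   # drop self-match


class EdgeConv(nn.Module):
    def __init__(self, in_dim, out_dim, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x):
        # x: (B, N, C)
        idx = knn(x, self.k)                                    # (B, N, k)
        nbrs = torch.gather(
            x.unsqueeze(1).expand(-1, x.size(1), -1, -1),       # (B, N, N, C)
            2, idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))
        center = x.unsqueeze(2).expand_as(nbrs)
        edge_feat = torch.cat([center, nbrs - center], dim=-1)  # (B, N, k, 2C)
        return self.mlp(edge_feat).max(dim=2).values            # (B, N, out_dim)


pts = torch.randn(2, 1024, 3)            # toy batch of point clouds
print(EdgeConv(3, 64)(pts).shape)        # torch.Size([2, 1024, 64])
```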

References

Showing 1-10 of 31 references

3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation

TLDR
The proposed network extends the previous U-Net architecture of Ronneberger et al. by replacing all 2D operations with their 3D counterparts and performs on-the-fly elastic deformations for efficient data augmentation during training.
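As a minimal sketch of the "2D operations replaced by 3D counterparts" idea, the block below swaps Conv2d/BatchNorm2d/MaxPool2d for their 3D versions; channel widths and volume size are illustrative, and the elastic-deformation augmentation is not shown.

```python
# Sketch of a 3D U-Net-style encoder stage: the 2D layers of the original
# U-Net are replaced by 3D counterparts. Channel widths are illustrative.
import torch
import torch.nn as nn


def double_conv3d(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )


encoder_stage = nn.Sequential(double_conv3d(1, 32), nn.MaxPool3d(2))
vol = torch.randn(1, 1, 64, 64, 64)      # (batch, channel, D, H, W) toy volume
print(encoder_stage(vol).shape)          # torch.Size([1, 32, 32, 32, 32])
```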

SEGCloud: Semantic Segmentation of 3D Point Clouds

TLDR
SEGCloud is presented, an end-to-end framework to obtain 3D point-level segmentation that combines the advantages of neural networks (NNs), trilinear interpolation (TI), and fully connected Conditional Random Fields (FC-CRF).
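A small sketch of the voxel-to-point step: coarse per-voxel class scores are trilinearly interpolated back to the original point locations, here via `grid_sample` on a 5-D tensor. The FC-CRF refinement is omitted, and the grid resolution and class count are made up.

```python
# Sketch of "voxel predictions -> point predictions" via trilinear interpolation.
import torch
import torch.nn.functional as F

B, C, D, H, W = 1, 5, 16, 16, 16              # coarse per-voxel class scores
voxel_logits = torch.randn(B, C, D, H, W)

pts = torch.rand(B, 2048, 3)                  # point coordinates in [0, 1]^3
grid = (pts * 2 - 1).view(B, 1, 1, -1, 3)     # grid_sample expects [-1, 1], (x, y, z) order

point_logits = F.grid_sample(voxel_logits, grid, mode="bilinear",
                             align_corners=True)          # trilinear for 5-D input
point_logits = point_logits.view(B, C, -1).transpose(1, 2)  # (B, N, C)
print(point_logits.shape)                     # torch.Size([1, 2048, 5])
```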

Revisiting Dilated Convolution: A Simple Approach for Weakly- and Semi-Supervised Semantic Segmentation

TLDR
It is found that varying dilation rates can effectively enlarge the receptive fields of convolutional kernels and, more importantly, transfer the surrounding discriminative information to non-discriminative object regions, promoting the emergence of these regions in the object localization maps.
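A quick sketch of the receptive-field effect: a 3x3 kernel with dilation d covers an extent of 3 + 2(d - 1) pixels without adding parameters. The dilation rates below are illustrative.

```python
# Effect of increasing dilation on a 3x3 kernel's receptive field.
import torch
import torch.nn as nn

x = torch.randn(1, 1, 65, 65)
for d in (1, 3, 6, 9):
    conv = nn.Conv2d(1, 1, kernel_size=3, dilation=d, padding=d)  # padding=d keeps size
    eff = 3 + 2 * (d - 1)          # effective extent of the dilated 3x3 kernel
    print(f"dilation={d}: effective kernel {eff}x{eff}, output {tuple(conv(x).shape[-2:])}")
```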

SqueezeSegV2: Improved Model Structure and Unsupervised Domain Adaptation for Road-Object Segmentation from a LiDAR Point Cloud

TLDR
This work introduces SqueezeSegV2, a new model that is more robust to dropout noise in LiDAR point clouds and therefore achieves a significant accuracy improvement, together with a domain-adaptation training pipeline consisting of three major components: learned intensity rendering, geodesic correlation alignment, and progressive domain calibration.
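The correlation-alignment component matches second-order feature statistics across the synthetic and real domains. The sketch below shows a plain CORAL-style (Euclidean) covariance-alignment loss as a stand-in, not the paper's exact geodesic formulation; feature sizes are arbitrary.

```python
# Simplified covariance-alignment loss (Euclidean CORAL-style stand-in).
import torch


def coral_loss(feat_src, feat_tgt):
    # feat_*: (N, D) features from source (synthetic) and target (real) domains
    def cov(f):
        f = f - f.mean(dim=0, keepdim=True)
        return (f.t() @ f) / (f.size(0) - 1)
    d = feat_src.size(1)
    return ((cov(feat_src) - cov(feat_tgt)) ** 2).sum() / (4 * d * d)


print(coral_loss(torch.randn(256, 64), torch.randn(256, 64)))
```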

Efficient Scene Labeling via Sparse Annotations

TLDR
A novel constrained clustering method composed of two steps: over-clustering deep features into raw clusters with high self-consistency, and introducing sparse annotations as semantic constraints to merge raw clusters into scene clusters.
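A toy sketch of the two-step scheme on synthetic data: over-cluster the features into many raw clusters, then merge raw clusters into scene classes by majority vote over the sparsely annotated points. Cluster counts, label counts, and the majority-vote merging rule are illustrative assumptions, not the paper's exact procedure.

```python
# Over-cluster, then merge raw clusters using sparse annotations (toy data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
feats = rng.normal(size=(5000, 16))                    # deep features of 5000 points
sparse_idx = rng.choice(5000, size=50, replace=False)  # only 50 annotated points
sparse_lbl = rng.integers(0, 4, size=50)               # their labels (4 classes)

raw = KMeans(n_clusters=64, n_init=10, random_state=0).fit_predict(feats)

# Merge: each raw cluster inherits the majority label of its annotated members.
cluster_to_class = {}
for c in np.unique(raw):
    members = sparse_lbl[np.isin(sparse_idx, np.where(raw == c)[0])]
    cluster_to_class[c] = np.bincount(members).argmax() if len(members) else -1

pred = np.array([cluster_to_class[c] for c in raw])    # -1 marks unresolved clusters
print((pred >= 0).mean())                              # fraction of points labelled
```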

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

TLDR
This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
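A minimal sketch of the permutation-invariance argument: a shared per-point MLP (1x1 convolutions) followed by a symmetric max-pooling yields the same global feature for any ordering of the input points. Layer widths are illustrative.

```python
# Shared per-point MLP + symmetric max-pooling (PointNet's core idea).
import torch
import torch.nn as nn


class TinyPointNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared_mlp = nn.Sequential(   # applied identically to every point
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )

    def forward(self, pts):               # pts: (B, 3, N)
        return self.shared_mlp(pts).max(dim=2).values   # (B, 128) global feature


pts = torch.randn(1, 3, 1024)
perm = pts[:, :, torch.randperm(1024)]    # same cloud, shuffled point order
net = TinyPointNet()
print(torch.allclose(net(pts), net(perm)))   # True: ordering does not matter
```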

Weakly Supervised Semantic Point Cloud Segmentation: Towards 10× Fewer Labels

  • Xun Xu, Gim Hee Lee
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR
This work proposes a weakly supervised point cloud segmentation approach which requires only a tiny fraction of points to be labelled in the training stage, made possible by learning gradient approximation and exploitation of additional spatial and color smoothness constraints.
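The basic ingredient of training with a tiny labelled fraction is a loss evaluated only at the labelled points; the sketch below uses `ignore_index` in cross-entropy for this, while the paper's gradient approximation and smoothness constraints are not reproduced.

```python
# Cross-entropy on the labelled subset only; unlabelled points are ignored.
import torch
import torch.nn.functional as F

N, C = 4096, 13
logits = torch.randn(N, C, requires_grad=True)    # per-point class scores
labels = torch.full((N,), -1, dtype=torch.long)   # -1 = unlabelled
labelled = torch.randperm(N)[: N // 10]           # roughly 10x fewer labels
labels[labelled] = torch.randint(0, C, (len(labelled),))

loss = F.cross_entropy(logits, labels, ignore_index=-1)  # only labelled points contribute
loss.backward()
print(loss.item())
```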

Weakly- and Semi-Supervised Learning of a Deep Convolutional Network for Semantic Image Segmentation

TLDR
Expectation-Maximization (EM) methods for training semantic image segmentation models under weakly supervised and semi-supervised settings are developed, and extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark while requiring significantly less annotation effort.
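A toy sketch of the EM-style loop for image-level supervision: the E-step derives pixel pseudo-labels from the current model restricted to classes known to be present in the image, and the M-step takes a gradient step on those pseudo-labels. The model, sizes, and data here are synthetic stand-ins, not the paper's actual network or training schedule.

```python
# EM-style pseudo-label loop for image-level (weak) supervision, toy version.
import torch
import torch.nn as nn
import torch.nn.functional as F

C, H, W = 5, 32, 32
model = nn.Conv2d(3, C, kernel_size=3, padding=1)   # stand-in segmentation net
opt = torch.optim.SGD(model.parameters(), lr=0.1)

image = torch.randn(1, 3, H, W)
present = torch.tensor([0, 2])                      # image-level labels only

for _ in range(5):
    logits = model(image)                           # (1, C, H, W)
    # E-step: mask out absent classes, take argmax as pixel pseudo-labels
    masked = torch.full_like(logits, float("-inf"))
    masked[:, present] = logits[:, present]
    pseudo = masked.argmax(dim=1)                   # (1, H, W)
    # M-step: update the model on the pseudo-labels
    loss = F.cross_entropy(logits, pseudo)
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```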

Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR Segmentation

  • Xinge Zhu, Hui Zhou, Dahua Lin
  • Computer Science
    2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
TLDR
A new framework for outdoor LiDAR segmentation is proposed, in which cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern while maintaining the inherent properties of outdoor point clouds.
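A small sketch of the cylindrical partition: points are binned in (radius, azimuth, height) rather than on a Cartesian grid, so cells grow with distance where the LiDAR sweep becomes sparse. Bin counts and ranges are illustrative, and the asymmetrical 3D convolutions are not shown.

```python
# Cylindrical voxelization of a LiDAR sweep (toy data, illustrative bins).
import numpy as np

pts = np.random.uniform(-50, 50, size=(10000, 3))   # toy LiDAR sweep (x, y, z)

rho = np.linalg.norm(pts[:, :2], axis=1)            # radial distance
phi = np.arctan2(pts[:, 1], pts[:, 0])              # azimuth in [-pi, pi]
z = pts[:, 2]

bins = (480, 360, 32)
rho_i = np.clip((rho / 70.0 * bins[0]).astype(int), 0, bins[0] - 1)
phi_i = ((phi + np.pi) / (2 * np.pi) * bins[1]).astype(int) % bins[1]
z_i = np.clip(((z + 50.0) / 100.0 * bins[2]).astype(int), 0, bins[2] - 1)

voxel_id = np.stack([rho_i, phi_i, z_i], axis=1)    # per-point cylindrical cell index
print(voxel_id.shape, voxel_id.max(axis=0))
```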

SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud

TLDR
An end-to-end pipeline called SqueezeSeg, based on convolutional neural networks (CNNs), takes a transformed LiDAR point cloud as input and directly outputs a point-wise label map, which is then refined by a conditional random field (CRF) implemented as a recurrent layer.
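A sketch of the spherical projection that turns a LiDAR sweep into a dense 2-D range image a CNN can consume; the SqueezeSeg network and recurrent CRF layer are not shown, and the image size and field of view are illustrative.

```python
# Spherical projection of a point cloud onto an H x W range image (toy data).
import numpy as np

pts = np.random.randn(20000, 3) * 10                    # toy point cloud (x, y, z)
H, W = 64, 512                                          # vertical beams x azimuth bins
fov_up, fov_down = np.radians(3.0), np.radians(-25.0)   # assumed vertical FOV

r = np.linalg.norm(pts, axis=1) + 1e-8
yaw = np.arctan2(pts[:, 1], pts[:, 0])                  # azimuth
pitch = np.arcsin(pts[:, 2] / r)                        # elevation

u = ((1 - (yaw + np.pi) / (2 * np.pi)) * W).astype(int) % W
v = ((fov_up - pitch) / (fov_up - fov_down) * H).astype(int)
valid = (v >= 0) & (v < H)                              # keep points inside the FOV

range_image = np.zeros((H, W), dtype=np.float32)
range_image[v[valid], u[valid]] = r[valid]              # later points overwrite earlier ones
print(range_image.shape, (range_image > 0).mean())
```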