Corpus ID: 91183970

A Dataset for Semantic Segmentation of Point Cloud Sequences

@article{Behley2019ADF,
  title={A Dataset for Semantic Segmentation of Point Cloud Sequences},
  author={Jens Behley and Martin Garbade and Andres Milioto and Jan Quenzel and Sven Behnke and Cyrill Stachniss and Juergen Gall},
  journal={ArXiv},
  year={2019},
  volume={abs/1904.01416}
}
Semantic scene understanding is important for various applications. [...] We annotated all sequences of the KITTI Vision Odometry Benchmark and provide dense point-wise annotations for the complete $360^{\circ}$ field-of-view of the employed automotive LiDAR.
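For context, dense point-wise LiDAR labels of this kind are typically stored alongside the raw scans, with one label per point. The sketch below shows how such data could be read; the directory layout, file extensions, and the 16-bit semantic/instance split are assumptions made for illustration and are not specified in the abstract above.

```python
import numpy as np

def load_scan(scan_path):
    """Load a LiDAR scan stored as float32 (x, y, z, remission) tuples."""
    scan = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)
    return scan[:, :3], scan[:, 3]          # points, remission

def load_labels(label_path):
    """Load per-point labels stored as one 32-bit integer per point.

    Assumption: the lower 16 bits hold the semantic class and the upper
    16 bits an instance id, as is common for point-wise LiDAR labels.
    """
    raw = np.fromfile(label_path, dtype=np.uint32)
    semantic = raw & 0xFFFF                 # semantic class per point
    instance = raw >> 16                    # instance id per point
    return semantic, instance

# Hypothetical paths, for illustration only.
points, remission = load_scan("sequences/00/velodyne/000000.bin")
semantic, instance = load_labels("sequences/00/labels/000000.label")
assert len(points) == len(semantic)         # one label per LiDAR point
```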

Citations

MNEW: Multi-domain Neighborhood Embedding and Weighting for Sparse Point Clouds Segmentation
A new method called MNEW is proposed, combining multi-domain neighborhood embedding with attention weighting based on geometric distance, feature similarity, and neighborhood sparsity; it achieves top performance on sparse point clouds, which is important for LiDAR-based automated driving perception.
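The entry above weights a point's neighbors by how close and how similar they are. The following is only a generic sketch of that idea (a softmax over a combined geometric/feature affinity), not MNEW's actual formulation; the function name, the Gaussian kernel, and the cosine similarity are all assumptions made for illustration.

```python
import numpy as np

def neighbor_weights(center_xyz, neighbor_xyz, center_feat, neighbor_feat, sigma=1.0):
    """Generic sketch: weight neighbors by geometric closeness and feature similarity.

    This is NOT the MNEW formulation, only an illustration of combining a
    distance term and a feature-similarity term into normalized attention weights.
    """
    # Geometric affinity: Gaussian kernel on squared Euclidean distance.
    d2 = np.sum((neighbor_xyz - center_xyz) ** 2, axis=1)
    geo = np.exp(-d2 / (2 * sigma ** 2))

    # Feature affinity: cosine similarity between center and neighbor features.
    num = neighbor_feat @ center_feat
    den = np.linalg.norm(neighbor_feat, axis=1) * np.linalg.norm(center_feat) + 1e-8
    feat = num / den

    # Combine and normalize with a softmax to obtain per-neighbor weights.
    logits = geo * feat
    w = np.exp(logits - logits.max())
    return w / w.sum()
```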
Few-Shot Point Cloud Region Annotation with Human in the Loop
It is shown that the proposed framework significantly reduces the amount of human interaction needed to annotate point clouds without sacrificing annotation quality.
GIDSeg: Learning 3D Segmentation from Sparse Annotations via Hierarchical Descriptors
  • Ellen Yi-Ge, Yilong Zhu
  • 2020 2nd International Conference on Information Technology and Computer Application (ITCA)
  • 2020
One of the main obstacles to 3D semantic segmentation is the significant amount of effort required to generate expensive point-wise annotations for fully supervised training. To alleviate manual [...]
Shooting Labels by Virtual Reality
As the availability of a few large annotated datasets, like ImageNet, Pascal VOC and COCO, spawned the deep learning revolution that has so dramatically disrupted computer vision research, we [...]
Driving Datasets Literature Review
This report surveys the autonomous driving datasets published to date and describes the diverse driving tasks they explore.
3D Segmentation Learning From Sparse Annotations and Hierarchical Descriptors
GIDSeg, a novel approach that simultaneously learns segmentation from sparse annotations by reasoning about global-regional structures and individual-vicinal properties, achieves superior performance over the state of the art for dense 3D segmentation with only sparse annotations.
Shooting Labels: 3D Semantic Labeling by Virtual Reality
This work proposes Shooting Labels, the first 3D labeling tool for dense 3D semantic segmentation, which exploits Virtual Reality to make the labeling task as easy and fun as playing a video game.

References

Showing 1-10 of 65 references.
Sensor fusion for semantic segmentation of urban scenes
A semantic segmentation algorithm is proposed that effectively fuses information from images and 3D point clouds, incorporating information from multiple scales in an intuitive and effective manner; it is evaluated on the publicly available KITTI dataset.
The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes
The Mapillary Vistas Dataset is a novel, large-scale street-level image dataset containing 25000 high-resolution images annotated into 66 object categories with additional, instance-specific labels for 37 classes, aiming to significantly further the development of state-of-the-art methods for visual road-scene understanding.
SEGCloud: Semantic Segmentation of 3D Point Clouds
SEGCloud is presented, an end-to-end framework for 3D point-level segmentation that combines the advantages of neural networks (NNs), trilinear interpolation (TI), and fully connected Conditional Random Fields (FC-CRF).
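The SEGCloud summary above mentions transferring coarse voxel predictions back to individual points via trilinear interpolation. Below is a minimal NumPy sketch of trilinear interpolation of per-voxel class scores at continuous point locations; the grid layout, voxel size, and function name are assumptions for illustration, not SEGCloud's implementation.

```python
import numpy as np

def trilinear_interpolate(voxel_scores, points, voxel_size=0.05, origin=(0.0, 0.0, 0.0)):
    """Interpolate per-voxel class scores at continuous 3D point locations.

    voxel_scores: (X, Y, Z, C) class scores on a regular grid (assumed layout).
    points:       (N, 3) point coordinates.
    Returns:      (N, C) interpolated scores.
    """
    # Continuous grid coordinates of each point.
    g = (points - np.asarray(origin)) / voxel_size
    g0 = np.floor(g).astype(int)            # lower-corner voxel index
    t = g - g0                              # fractional offset in [0, 1)

    X, Y, Z, C = voxel_scores.shape
    out = np.zeros((len(points), C))
    # Accumulate contributions from the 8 surrounding voxels.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                idx = np.clip(g0 + np.array([dx, dy, dz]), 0, [X - 1, Y - 1, Z - 1])
                w = ((dx * t[:, 0] + (1 - dx) * (1 - t[:, 0]))
                     * (dy * t[:, 1] + (1 - dy) * (1 - t[:, 1]))
                     * (dz * t[:, 2] + (1 - dz) * (1 - t[:, 2])))
                out += w[:, None] * voxel_scores[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out
```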
Large-Scale Point Cloud Semantic Segmentation with Superpoint Graphs
It is argued that the organization of 3D point clouds can be efficiently captured by a structure called the superpoint graph (SPG), derived from a partition of the scanned scene into geometrically homogeneous elements.
The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes
This paper generates a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations, and conducts experiments with DCNNs that show how including SYNTHIA in the training stage significantly improves performance on the semantic segmentation task.
Paris-Lille-3D: A large and high-quality ground-truth urban point cloud dataset for automatic segmentation and classification
This paper introduces a new urban point cloud dataset for automatic segmentation and classification acquired by mobile laser scanning (MLS). We describe how the dataset is obtained from acquisition [...]
The Cityscapes Dataset for Semantic Urban Scene Understanding
This work introduces Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling, and exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity.
Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer
This paper annotates static 3D scene elements with rough bounding primitives and develops a model that transfers this information into the image domain, revealing that 3D information enables more efficient annotation while yielding improved accuracy and time-coherent labels.
Semantic3D.net: A new Large-scale Point Cloud Classification Benchmark
It is hoped that Semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations; first submissions after only a few months indicate that this might indeed be the case.
SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud
SqueezeSeg is an end-to-end pipeline based on convolutional neural networks (CNNs) that takes a transformed LiDAR point cloud as input and directly outputs a point-wise label map, which is then refined by a conditional random field (CRF) implemented as a recurrent layer.
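The "transformed LiDAR point cloud" in the SqueezeSeg entry above refers to a 2D spherical (range-image) view of the scan. Below is a minimal sketch of such a spherical projection; the image size and vertical field-of-view limits are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def spherical_projection(points, H=64, W=512, fov_up=3.0, fov_down=-25.0):
    """Project a LiDAR point cloud (N, 3) onto an H x W range image.

    Field-of-view limits (degrees) and image size are illustrative assumptions.
    Returns the range image and the (row, col) pixel of every point.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8

    yaw = np.arctan2(y, x)                  # horizontal angle in [-pi, pi]
    pitch = np.arcsin(z / r)                # vertical angle
    fov = np.radians(fov_up - fov_down)     # total vertical field of view

    # Normalize angles to [0, 1] and scale to pixel coordinates.
    col = 0.5 * (1.0 - yaw / np.pi) * W
    row = (1.0 - (pitch - np.radians(fov_down)) / fov) * H

    col = np.clip(np.floor(col), 0, W - 1).astype(int)
    row = np.clip(np.floor(row), 0, H - 1).astype(int)

    range_image = np.full((H, W), -1.0, dtype=np.float32)
    range_image[row, col] = r               # later points overwrite earlier ones
    return range_image, row, col
```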