• Corpus ID: 91183970

A Dataset for Semantic Segmentation of Point Cloud Sequences

@article{Behley2019ADF,
  title={A Dataset for Semantic Segmentation of Point Cloud Sequences},
  author={Jens Behley and Martin Garbade and Andres Milioto and Jan Quenzel and Sven Behnke and Cyrill Stachniss and Juergen Gall},
  journal={ArXiv},
  year={2019},
  volume={abs/1904.01416}
}
Semantic scene understanding is important for various applications. […] Key Method: We annotated all sequences of the KITTI Vision Odometry Benchmark and provide dense point-wise annotations for the complete $360^\circ$ field-of-view of the employed automotive LiDAR.
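As a quick, hedged illustration of what these point-wise annotations look like on disk, the sketch below reads one scan and its labels in Python. It assumes the file layout documented with the SemanticKITTI release: float32 (x, y, z, remission) scans and one uint32 per point whose lower 16 bits hold the semantic class and whose upper 16 bits hold the instance id; the paths are placeholders.

import numpy as np

def load_scan(bin_path, label_path):
    # scans: float32 records of (x, y, z, remission)
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    # labels: one uint32 per point
    raw = np.fromfile(label_path, dtype=np.uint32)
    semantic = raw & 0xFFFF   # semantic class id (lower 16 bits)
    instance = raw >> 16      # instance id (upper 16 bits)
    assert len(points) == len(raw), "scan/label files must align point-for-point"
    return points, semantic, instance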
Citations

MNEW: Multi-domain Neighborhood Embedding and Weighting for Sparse Point Clouds Segmentation
TLDR
A new method called MNEW is proposed, combining multi-domain neighborhood embedding with attention weighting based on geometric distance, feature similarity, and neighborhood sparsity; it achieves the top performance on sparse point clouds, which is important for LiDAR-based automated-driving perception.
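The sketch below illustrates the kind of neighborhood weighting this summary describes; it is not the authors' code, and the softmax fusion of the three cues, the scale parameters, and the assumption that a per-point sparsity estimate is precomputed are all illustrative choices.

import numpy as np

def weighted_neighborhood(points, feats, sparsity, idx, sigma_d=1.0, sigma_f=1.0):
    # points: (N, 3); feats: (N, C); sparsity: (N,) e.g. mean distance to the
    # k nearest points; idx: (N, K) precomputed neighbor indices
    nbr_p = points[idx]                                        # (N, K, 3)
    nbr_f = feats[idx]                                         # (N, K, C)
    d_geo = np.linalg.norm(nbr_p - points[:, None], axis=-1)   # geometric distance
    d_feat = np.linalg.norm(nbr_f - feats[:, None], axis=-1)   # feature dissimilarity
    # near, feature-similar neighbors from sparse regions get larger weights
    logits = -(d_geo / sigma_d) - (d_feat / sigma_f) + np.log(sparsity[idx] + 1e-8)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                          # softmax over K neighbors
    return (w[..., None] * nbr_f).sum(axis=1)                  # aggregated features (N, C)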
Shooting Labels: 3D Semantic Labeling by Virtual Reality
TLDR
This work proposes Shooting Labels, the first 3D labeling tool for dense 3D semantic segmentation, which exploits Virtual Reality to make the labeling task as easy and fun as playing a video game.
CarlaScenes: A synthetic dataset for odometry in autonomous driving
TLDR
The dataset is oriented toward the challenging odometry scenarios that cause current state-of-the-art odometry methods to deviate from normal operation, and it includes instance-level semantic annotations for both image and LiDAR data.
Few-Shot Point Cloud Region Annotation with Human in the Loop
TLDR
It is shown that the proposed framework significantly reduces the amount of human interaction needed to annotate point clouds, without sacrificing annotation quality.
3D Segmentation Learning From Sparse Annotations and Hierarchical Descriptors
TLDR
GIDSeg, a novel approach that simultaneously learns segmentation from sparse annotations by reasoning over global-regional structures and individual-vicinal properties, achieves superior performance over the state of the art for inferring dense 3D segmentation from only sparse annotations.
MotionSC: Data Set and Network for Real-Time Semantic Mapping in Dynamic Environments
This work addresses a gap in semantic scene completion (SSC) data by creating a novel outdoor data set with accurate and complete dynamic scenes. Our data set is formed from randomly sampled views…
GIDSeg: Learning 3D Segmentation from Sparse Annotations via Hierarchical Descriptors
  • Ellen Yi-Ge, Yilong Zhu
  • Computer Science
    2020 2nd International Conference on Information Technology and Computer Application (ITCA)
  • 2020
Shooting Labels by Virtual Reality
TLDR
This work proposes a new tool based on Virtual Reality (VR) which makes semantic annotation of 3D data as easy and fun as a video game and allows the 3D annotations to be projected into 2D images, thereby speeding up the notoriously slow and expensive task of pixel-wise semantic labeling.
Driving Datasets Literature Review
TLDR
This report surveys the autonomous driving datasets published to date and describes the diverse driving tasks they explore.

References

Showing 1-10 of 65 references
Sensor fusion for semantic segmentation of urban scenes
TLDR
A semantic segmentation algorithm is proposed that effectively fuses information from images and 3D point clouds, incorporating information from multiple scales in an intuitive and effective manner; it is evaluated on the publicly available KITTI dataset.
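The core step such image/point-cloud fusion needs, projecting LiDAR points into the image so per-pixel features can be attached to points, can be sketched as below; the KITTI-style 3x4 projection matrix and the nearest-pixel feature gather are assumptions, not the paper's implementation.

import numpy as np

def project_points(points_xyz, P, img_h, img_w):
    # points_xyz: (N, 3) in the camera frame; P: (3, 4) projection matrix
    hom = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    uvw = hom @ P.T
    z = uvw[:, 2]
    front = z > 1e-6                              # keep points in front of the camera
    u = np.zeros(len(z), dtype=int)
    v = np.zeros(len(z), dtype=int)
    u[front] = (uvw[front, 0] / z[front]).astype(int)
    v[front] = (uvw[front, 1] / z[front]).astype(int)
    valid = front & (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    return u, v, valid

def fuse_image_features(points_xyz, image_feats, P):
    # image_feats: (H, W, C) feature map; returns (N, C) per-point image features
    h, w, c = image_feats.shape
    u, v, valid = project_points(points_xyz, P, h, w)
    fused = np.zeros((len(points_xyz), c), dtype=image_feats.dtype)
    fused[valid] = image_feats[v[valid], u[valid]]
    return fused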
The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes
TLDR
The Mapillary Vistas Dataset is a novel, large-scale street-level image dataset containing 25000 high-resolution images annotated into 66 object categories with additional, instance-specific labels for 37 classes, aiming to significantly further the development of state-of-the-art methods for visual road-scene understanding.
SEGCloud: Semantic Segmentation of 3D Point Clouds
TLDR
SEGCloud is presented, an end-to-end framework to obtain 3D point-level segmentation that combines the advantages of NNs, trilinear interpolation (TI), and fully connected Conditional Random Fields (FC-CRF).
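The trilinear-interpolation step named in this summary transfers coarse per-voxel class scores back to the original points; a sketch under assumed conventions (axis-aligned grid, placeholder origin and voxel size) follows.

import numpy as np

def trilinear_point_scores(voxel_scores, points, origin, voxel_size):
    # voxel_scores: (X, Y, Z, C) class scores; points: (N, 3); returns (N, C)
    g = (points - origin) / voxel_size - 0.5      # continuous grid coordinates
    g0 = np.floor(g).astype(int)
    frac = g - g0
    out = np.zeros((len(points), voxel_scores.shape[-1]))
    for dx in (0, 1):                             # blend the 8 surrounding voxels
        for dy in (0, 1):
            for dz in (0, 1):
                idx = np.clip(g0 + np.array([dx, dy, dz]), 0,
                              np.array(voxel_scores.shape[:3]) - 1)
                w = (np.where(dx, frac[:, 0], 1 - frac[:, 0]) *
                     np.where(dy, frac[:, 1], 1 - frac[:, 1]) *
                     np.where(dz, frac[:, 2], 1 - frac[:, 2]))
                out += w[:, None] * voxel_scores[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out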
Large-Scale Point Cloud Semantic Segmentation with Superpoint Graphs
TLDR
It is argued that the organization of 3D point clouds can be efficiently captured by a structure called superpoint graph (SPG), derived from a partition of the scanned scene into geometrically homogeneous elements.
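A toy version of that structure is sketched below: cluster points into "superpoints", then connect clusters whose points lie within a radius of each other. The paper derives its partition from a geometric energy; the plain k-means here is only a stand-in.

import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

def superpoint_graph(points, n_super=50, radius=0.5):
    # points: (N, 3); returns per-point superpoint labels and graph edges
    labels = KMeans(n_clusters=n_super, n_init=10).fit_predict(points)
    tree = cKDTree(points)
    edges = set()
    for i, j in tree.query_pairs(radius):         # point pairs closer than radius
        a, b = labels[i], labels[j]
        if a != b:
            edges.add((int(min(a, b)), int(max(a, b))))
    return labels, sorted(edges)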
The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes
TLDR
This paper generates a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations, and conducts experiments with DCNNs that show how the inclusion of SYNTHIA in the training stage significantly improves performance on the semantic segmentation task.
The Cityscapes Dataset for Semantic Urban Scene Understanding
TLDR
This work introduces Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling, and exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity.
Paris-Lille-3D: A large and high-quality ground-truth urban point cloud dataset for automatic segmentation and classification
This paper introduces a new urban point cloud dataset for automatic segmentation and classification acquired by mobile laser scanning (MLS). We describe how the dataset is obtained from acquisition…
Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer
TLDR
This paper annotates static 3D scene elements with rough bounding primitives and develops a model which transfers this information into the image domain and reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels.
Semantic3D.net: A new Large-scale Point Cloud Classification Benchmark
TLDR
It is hoped Semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.
SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud
TLDR
An end-to-end pipeline called SqueezeSeg, based on convolutional neural networks (CNNs), takes a transformed LiDAR point cloud as input and directly outputs a point-wise label map, which is then refined by a conditional random field (CRF) implemented as a recurrent layer.
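The "transformed LiDAR point cloud" in this summary is a spherical projection of the scan into a dense range image; a minimal sketch follows, where the grid size and the vertical field-of-view bounds are assumed values, not numbers taken from the paper.

import numpy as np

def range_image(points, H=64, W=512, fov_up_deg=3.0, fov_down_deg=-25.0):
    # points: (N, 3+) with x, y, z in the first three columns; returns (H, W) ranges
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-8
    yaw = np.arctan2(y, x)                        # azimuth angle
    pitch = np.arcsin(z / r)                      # elevation angle
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = ((yaw + np.pi) / (2 * np.pi) * W).astype(int) % W
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * H).astype(int), 0, H - 1)
    img = np.zeros((H, W), dtype=np.float32)
    img[v, u] = r                                 # last point wins per cell
    return img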
...