Fast 3D Semantic Mapping in Road Scenes

@article{Li2019Fast3S,
  title={Fast 3D Semantic Mapping in Road Scenes},
  author={Xuanpeng Li and Dong Wang and Huanxuan Ao and Rachid Belaroussi and Dominique Gruyer},
  journal={Applied Sciences},
  year={2019}
}
Fast 3D reconstruction with semantic information in road scenes is in great demand for autonomous navigation. It involves issues of both geometry and appearance in the field of computer vision. In this work, we propose a fast 3D semantic mapping system based on monocular vision that fuses localization, mapping, and scene parsing. From visual sequences, it can estimate the camera pose, calculate the depth, predict the semantic segmentation, and finally realize the 3D semantic mapping. Our…
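
The abstract sketches a per-frame pipeline (pose estimation, depth, segmentation, fusion). Below is a minimal structural sketch of such a loop in Python; the component names estimate_pose, estimate_depth, segment, and SemanticMap are hypothetical placeholders illustrating the data flow only, not the authors' implementation, which builds on monocular SLAM and a CNN scene parser.

# Minimal sketch of a monocular 3D semantic mapping loop as described above.
# All component names are placeholders supplied by the caller, not the paper's API.
import numpy as np

class SemanticMap:
    """Accumulates labelled 3D points in a world frame."""
    def __init__(self):
        self.points = []   # (x, y, z) in world coordinates
        self.labels = []   # semantic class id per point

    def fuse(self, points_world, labels):
        self.points.extend(points_world)
        self.labels.extend(labels)

def backproject(depth, K):
    """Lift a depth map to camera-frame 3D points with intrinsics K (3x3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def mapping_loop(frames, K, estimate_pose, estimate_depth, segment):
    semantic_map = SemanticMap()
    for frame in frames:
        T_wc = estimate_pose(frame)            # 4x4 camera-to-world pose (SLAM)
        depth = estimate_depth(frame)          # per-pixel depth, HxW
        labels = segment(frame).reshape(-1)    # per-pixel class ids
        pts_cam = backproject(depth, K)
        pts_world = (T_wc[:3, :3] @ pts_cam.T).T + T_wc[:3, 3]
        semantic_map.fuse(pts_world, labels)
    return semantic_map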
Semantic 3D Reconstruction with Learning MVS and 2D Segmentation of Aerial Images
TLDR: A graph-based semantic fusion procedure and refinement based on local and global information can suppress and reduce the re-projection error.
Occupancy grid mapping for rover navigation based on semantic segmentation
TLDR: The measurements of a stereo camera are combined with a pixel labeling technique based on Convolutional Neural Networks to identify the presence of rocky obstacles in a planetary environment, and the estimation of the relative pose between successive frames is carried out using the ORB-SLAM algorithm.
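
The entry above turns semantically labelled stereo points into an occupancy grid for navigation. A minimal sketch of that conversion follows; the grid bounds, cell size, and obstacle class ids are illustrative assumptions, not the paper's parameters.

# Sketch: build a 2D occupancy grid from semantically labelled 3D points.
import numpy as np

def semantic_occupancy_grid(points_xyz, labels, obstacle_classes=(1,),  # e.g. class 1 = "rock" (illustrative)
                            cell_size=0.1, x_range=(-5.0, 5.0), y_range=(0.0, 10.0)):
    nx = int((x_range[1] - x_range[0]) / cell_size)
    ny = int((y_range[1] - y_range[0]) / cell_size)
    grid = np.zeros((ny, nx), dtype=np.uint8)   # 0 = free/unknown, 1 = occupied
    for (x, y, _z), label in zip(points_xyz, labels):
        if label not in obstacle_classes:
            continue                             # only obstacle classes mark cells
        ix = int((x - x_range[0]) / cell_size)
        iy = int((y - y_range[0]) / cell_size)
        if 0 <= ix < nx and 0 <= iy < ny:
            grid[iy, ix] = 1
    return grid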
Semantic Information for Robot Navigation: A Survey
TLDR: This work presents a survey on the concepts, methodologies and techniques that allow including semantic information in robot navigation systems, paying attention to the two main groups: human-assisted and autonomous techniques.
Bidirectional Sliding Window for Boundary Recognition of Pavement Construction Area Using GPS-RTK
TLDR: Experiments show that when the proposed BSW algorithm is used and the single-point positioning accuracy is at the centimeter level, PCA boundary recognition for straight polygons reaches single-point positioning accuracy, and that for curved polygons reaches decimeter-level accuracy.
Development of a Low Cost and Path-free Autonomous Patrol System Based on Stereo Vision System and Checking Flags
TLDR: This research proposes a simplified autonomous patrolling robot, fabricated by upgrading a wheeled household robot with a stereo vision system (SVS), a radio frequency identification (RFID) module, and a laptop, which has four functions: independent patrolling without path planning, checking, intruder detection, and wireless backup.
Special Features on Intelligent Imaging and Analysis
Intelligent imaging and analysis have been studied in various research fields, including medical imaging, biomedical applications, computer vision, visual inspection and robot systems [...]

References

SHOWING 1-10 OF 44 REFERENCES
Fast semi-dense 3D semantic mapping with monocular visual SLAM
TLDR: This work addresses the challenge of fast 3D reconstruction with semantic information in road scenarios by fusing direct Simultaneous Localisation and Mapping from a monocular camera, in a semi-dense way, with state-of-the-art deep neural network approaches.
Dense 3D semantic mapping of indoor scenes from RGB-D images
TLDR: A novel 2D-3D label transfer based on Bayesian updates and dense pairwise 3D Conditional Random Fields is proposed, and it is shown that a semantic segmentation is not needed for every frame in a sequence in order to create accurate semantic 3D reconstructions.
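
The Bayesian 2D-3D label transfer mentioned above boils down to a recursive per-point update: multiply the stored class distribution by each new per-pixel likelihood and renormalise. A minimal illustration of that update (the dense pairwise 3D CRF refinement is omitted), not the paper's exact implementation:

import numpy as np

def bayesian_label_update(prior, likelihood):
    """Fuse a new per-class observation into a 3D point's label distribution.

    prior, likelihood: 1D arrays of per-class probabilities for one 3D point.
    Returns the normalised posterior; repeating this per frame gives the
    incremental label transfer sketched above (CRF smoothing not shown).
    """
    posterior = prior * likelihood
    s = posterior.sum()
    return posterior / s if s > 0 else prior

# Example: a point observed as "road" twice becomes increasingly confident.
p = np.array([0.5, 0.5])                       # classes [road, sidewalk], flat prior
for obs in (np.array([0.7, 0.3]), np.array([0.8, 0.2])):
    p = bayesian_label_update(p, obs)
# p is now roughly [0.90, 0.10]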
Urban 3D semantic modelling using stereo vision
In this paper we propose a robust algorithm that generates an efficient and accurate dense 3D reconstruction with associated semantic labellings. Intelligent autonomous systems require accurate 3D…
Joint Semantic Segmentation and 3D Reconstruction from Monocular Video
TLDR: Improved 3D structure and temporally consistent semantic segmentation for difficult, large-scale, forward-moving monocular image sequences is demonstrated.
Scene flow propagation for semantic mapping and object discovery in dynamic street scenes
TLDR: A method is proposed that incrementally fuses stereo frame observations into temporally consistent semantic 3D maps, allowing advanced reasoning on objects despite noisy single-frame observations and occlusions.
SemanticFusion: Dense 3D semantic mapping with convolutional neural networks
TLDR: This work combines Convolutional Neural Networks (CNNs) and a state-of-the-art dense Simultaneous Localization and Mapping (SLAM) system, ElasticFusion, which provides long-term dense correspondences between frames of indoor RGB-D video even during loopy scanning trajectories, and produces a useful semantic 3D map.
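
SemanticFusion-style fusion needs, for every map surfel, the pixel it projects to in the current frame, so the CNN's class probabilities can be looked up and multiplied into the surfel's stored distribution. A rough sketch of that projection-and-update step, with ElasticFusion's surfel bookkeeping abstracted away and all names illustrative:

import numpy as np

def project_to_pixel(p_world, T_cw, K):
    """Project a world-frame 3D point into the current image (pinhole model).

    T_cw: 4x4 world-to-camera pose from the SLAM tracker; K: 3x3 intrinsics.
    Returns integer pixel coordinates (u, v), or None if behind the camera.
    """
    p_cam = T_cw[:3, :3] @ p_world + T_cw[:3, 3]
    if p_cam[2] <= 0:
        return None
    u = K[0, 0] * p_cam[0] / p_cam[2] + K[0, 2]
    v = K[1, 1] * p_cam[1] / p_cam[2] + K[1, 2]
    return int(round(u)), int(round(v))

def fuse_frame(surfel_positions, surfel_probs, cnn_probs, T_cw, K):
    """Multiply each visible surfel's class distribution by the CNN output at its
    pixel and renormalise; surfel_probs (N x num_classes) is updated in place."""
    h, w, _ = cnn_probs.shape
    for i, p in enumerate(surfel_positions):
        uv = project_to_pixel(p, T_cw, K)
        if uv is None:
            continue
        u, v = uv
        if 0 <= u < w and 0 <= v < h:
            post = surfel_probs[i] * cnn_probs[v, u]
            s = post.sum()
            if s > 0:
                surfel_probs[i] = post / s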
Incremental dense semantic stereo fusion for large-scale semantic scene reconstruction
TLDR: This paper presents what is, to the authors' knowledge, the first system that can perform dense, large-scale, outdoor semantic reconstruction of a scene in (near) real time, and presents a 'semantic fusion' approach that handles dynamic objects more effectively than previous approaches.
Mesh Based Semantic Modelling for Indoor and Outdoor Scenes
TLDR: This work proposes a principled way to generate object labellings in 3D by building a triangulated mesh representation of the scene from multiple depth estimates and defining a CRF over this mesh, which is able to capture the consistency of geometric properties of the objects present in the scene.
3D all the way: Semantic segmentation of urban scenes from start to end in 3D
TLDR: It is shown that a properly trained pure-3D approach produces high-quality labelings with significant speed benefits, allowing entire streets to be analyzed in a matter of minutes, and a novel facade separation based on semantic nuances between facades is proposed.
SLAM++: Simultaneous Localisation and Mapping at the Level of Objects
TLDR: The object graph enables predictions for accurate ICP-based camera-to-model tracking at each live frame, and efficient active search for new objects in currently undescribed image regions, as well as the generation of an object-level scene description with the potential to enable interaction.
...