Corpus ID: 235829810

HDMapNet: An Online HD Map Construction and Evaluation Framework

  • Qi Li, Yue Wang, Yilun Wang, Hang Zhao
  • Published 2021
  • Computer Science
  • arXiv
High-definition map (HD map) construction is a crucial problem for autonomous driving. This problem typically involves collecting high-quality point clouds, fusing multiple point clouds of the same scene, annotating map elements, and updating maps constantly. This pipeline, however, requires a vast amount of human effort and resources, which limits its scalability. Additionally, traditional HD maps are coupled with centimeter-level accurate localization, which is unreliable in many scenarios [1].

References

A Survey on 3D LiDAR Localization for Autonomous Vehicles
The latest findings in 3D LiDAR localization for autonomous driving cars are reviewed, and the results obtained by each method are analyzed in an effort to guide the research community towards the path that seems most promising.
Cross-View Semantic Segmentation for Sensing Surroundings
A novel visual task called Cross-view Semantic Segmentation is proposed, along with a framework named View Parsing Network (VPN) to address it; experimental results show that the model can effectively use information from different views and multiple modalities to understand spatial information.
Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D
In pursuit of the goal of learning dense representations for motion planning, it is shown that the representations inferred by the model enable interpretable end-to-end motion planning by "shooting" template trajectories into a bird's-eye-view cost map output by the network.
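The "splat" step in this line of work pools image features, lifted into 3D, onto a bird's-eye-view grid. A toy NumPy sketch of that pooling (grid size and cell resolution are assumed values, not the paper's):

```python
import numpy as np

def splat_to_bev(points_xy, feats, grid=32, cell=1.0):
    """Sum per-point features into the bird's-eye-view cell each
    point falls into (a toy version of the 'splat' pooling step)."""
    bev = np.zeros((grid, grid, feats.shape[1]))
    ix = np.floor(points_xy[:, 0] / cell).astype(int)
    iy = np.floor(points_xy[:, 1] / cell).astype(int)
    keep = (ix >= 0) & (ix < grid) & (iy >= 0) & (iy < grid)
    np.add.at(bev, (ix[keep], iy[keep]), feats[keep])  # unbuffered scatter-add
    return bev

# Two points landing in the same cell have their features summed.
pts = np.array([[0.2, 0.3], [0.7, 0.1]])
f = np.ones((2, 4))
print(splat_to_bev(pts, f)[0, 0])  # → [2. 2. 2. 2.]
```

`np.add.at` is used instead of plain fancy-indexed `+=` so that repeated indices accumulate rather than overwrite.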
Predicting Semantic Map Representations From Images Using Pyramid Occupancy Networks
  • Thomas Roddick, R. Cipolla
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
This work presents a simple, unified approach for estimating bird's-eye-view maps of the environment directly from monocular images, using a single end-to-end deep learning architecture.
Restricted Deformable Convolution-Based Road Scene Semantic Segmentation Using Surround View Cameras
This paper addresses 360-degree road scene semantic segmentation using surround view cameras, which are widely equipped in existing production cars, and proposes Restricted Deformable Convolution (RDC), which effectively models geometric transformations by learning the shapes of convolutional filters conditioned on the input feature map.
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
  • 2020
VectorNet: Encoding HD Maps and Agent Dynamics From Vectorized Representation
  • Jiyang Gao, Chen Sun, +4 authors C. Schmid
  • Computer Science, Mathematics
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
VectorNet is introduced: a hierarchical graph neural network that first exploits the spatial locality of individual road components, represented by vectors, and then models the high-order interactions among all components, obtaining state-of-the-art performance on the Argoverse dataset.
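The vectorized representation mentioned above can be illustrated with a small sketch: each map polyline becomes a set of (start, end) displacement vectors, which a graph network can then consume. A minimal NumPy illustration (per-vector attribute features are omitted):

```python
import numpy as np

def polyline_to_vectors(polyline):
    """Turn an (N, 2) polyline into (N-1, 4) vectors of
    [start_x, start_y, end_x, end_y] — a vectorized map element."""
    return np.hstack([polyline[:-1], polyline[1:]])

lane = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0]])
print(polyline_to_vectors(lane))
# → [[0. 0. 1. 0.]
#    [1. 0. 2. 1.]]
```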
3D-LaneNet: End-to-End 3D Multiple Lane Detection
This work marks a first attempt to address this task with on-board sensing, without assuming a known constant lane width or relying on pre-mapped environments, and applies two new concepts: intra-network inverse-perspective mapping (IPM) and anchor-based lane representation.
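Inverse-perspective mapping warps image-plane points onto the ground plane with a planar homography. A minimal sketch of that mapping — the 3×3 homography `H` is assumed to be given (e.g. derived from camera calibration), not computed here:

```python
import numpy as np

def ipm_points(pixels, H):
    """Map (N, 2) image pixels to ground-plane (bird's-eye-view)
    coordinates via a 3x3 homography H, in homogeneous coordinates."""
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])  # (N, 3) homogeneous
    ground = pts @ H.T                                    # apply homography
    return ground[:, :2] / ground[:, 2:3]                 # dehomogenize

# Sanity check: the identity homography leaves points unchanged.
px = np.array([[100.0, 200.0], [320.0, 240.0]])
print(ipm_points(px, np.eye(3)))
```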
PointPillars: Fast Encoders for Object Detection From Point Clouds
Benchmark results suggest that PointPillars is an appropriate encoding for object detection in point clouds; a lean downstream network is also proposed.
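The pillar encoding groups LiDAR points into vertical columns on the x-y plane before a small PointNet encodes each column. A toy sketch of just the grouping step (the 0.16 m pillar size is an assumed value):

```python
import numpy as np
from collections import defaultdict

def pillarize(points, pillar=0.16):
    """Group (N, 3) LiDAR points into vertical pillars keyed by their
    discretized (x, y) cell — the first step of a PointPillars-style
    encoding; each pillar would then be fed to a small PointNet."""
    groups = defaultdict(list)
    for p in points:
        key = (int(np.floor(p[0] / pillar)), int(np.floor(p[1] / pillar)))
        groups[key].append(p)
    return {k: np.stack(v) for k, v in groups.items()}

pts = np.array([[0.01, 0.02, 1.0], [0.05, 0.10, 0.5], [1.0, 1.0, 0.2]])
print(len(pillarize(pts)))  # → 2 (first two points share a pillar)
```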
A robust pose graph approach for city scale LiDAR mapping
A refined factor-graph structure that accounts for systematic initialization bias is introduced, in which scan-matching factors are twice validated by a novel classifier, along with a robust optimization strategy for reconstructing globally consistent 3D high-definition maps at city scale.