Corpus ID: 229297958

FG-Net: Fast Large-Scale LiDAR Point Clouds Understanding Network Leveraging Correlated Feature Mining and Geometric-Aware Modelling

Kangcheng Liu, Zhi Gao, Feng Lin, Ben M. Chen
This work presents FG-Net, a general deep learning framework for large-scale point cloud understanding without voxelization, which achieves accurate and real-time performance on a single NVIDIA GTX 1080 GPU. First, a novel noise and outlier filtering method is designed to facilitate subsequent high-level tasks. For effective understanding, we propose a deep convolutional neural network leveraging correlated feature mining and deformable convolution based geometric-aware modelling…
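
The abstract does not detail FG-Net's filtering step; a generic statistical outlier filter (an assumption for illustration, not the paper's actual method) removes points whose mean distance to their nearest neighbours is anomalously large:

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean k-nearest-neighbour distance exceeds
    mean + std_ratio * std over the whole cloud.
    Brute-force O(N^2); fine for a small illustrative cloud."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)            # pairwise N x N distances
    knn = np.sort(dist, axis=1)[:, 1:k + 1]         # skip self at index 0
    mean_knn = knn.mean(axis=1)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

# A dense cluster plus one far-away outlier.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.1, size=(100, 3)),
                   [[10.0, 10.0, 10.0]]])
filtered = remove_statistical_outliers(cloud, k=8)
print(len(cloud), "->", len(filtered))
```

Real pipelines would use a k-d tree instead of the quadratic distance matrix, but the thresholding logic is the same.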
APP-Net: Auxiliary-point-based Push and Pull Operations for Efficient Point Cloud Classification
The constructed network is more efficient than all previous baselines by a clear margin while occupying little memory, and experiments on both synthetic and real datasets verify that APP-Net reaches accuracies comparable with other networks.
AGNet: An Attention-Based Graph Network for Point Cloud Classification and Segmentation
A graph-based neural network with an attention pooling strategy (AGNet) that can aggregate more information to better represent different point cloud features and demonstrate a consistent advantage for the tasks of point set classification and segmentation.
PointCrack3D: Crack Detection in Unstructured Environments using a 3D-Point-Cloud-Based Deep Neural Network
PointCrack3D is presented, a new 3D-point-cloud-based crack detection algorithm for unstructured surfaces that demonstrates a crack detection rate of 97% overall and 100% for cracks with a maximum width of more than 3 cm, significantly outperforming the state of the art.
A new weakly supervised approach for ALS point cloud semantic segmentation
Efficient Urban-scale Point Clouds Segmentation with BEV Projection
This work proposes to transfer the 3D point clouds to dense bird’s-eye-view projection, and designs an attention-based fusion network that can conduct multi-modal learning on the projected images to generate 3D semantic segmentation results.
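
The BEV idea above can be sketched minimally: scatter 3D points into a 2D grid, keeping one statistic per cell (maximum height here; the ranges and cell size are illustrative choices, not the paper's settings):

```python
import numpy as np

def bev_project(points, x_range=(0.0, 40.0), y_range=(0.0, 40.0), cell=0.5):
    """Scatter 3D points into a bird's-eye-view grid, keeping the
    maximum height per cell (a common BEV encoding)."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    bev = np.full((nx, ny), -np.inf)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    for x, y, z in zip(ix[keep], iy[keep], points[keep, 2]):
        bev[x, y] = max(bev[x, y], z)   # keep the tallest point per cell
    return bev

pts = np.array([[1.0, 1.0, 0.2], [1.1, 1.2, 1.5], [30.0, 5.0, 0.7]])
grid = bev_project(pts)
print(grid.shape)  # (80, 80)
```

The resulting 2D grid (often stacked with density and intensity channels) is what a 2D segmentation network consumes before results are lifted back to 3D.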
Multi Point-Voxel Convolution (MPVConv) for Deep Learning on Point Clouds
This work proposes a new convolutional neural network, called Multi Point-Voxel Convolution (MPVConv), for deep learning on point clouds, which achieves better accuracy than the newest point-voxel-based model PVCNN (a model more efficient than PointNet) with lower latency.


Point Cloud Oversegmentation With Graph-Structured Deep Metric Learning
We propose a new supervised learning framework for oversegmenting 3D point clouds into superpoints. We cast this problem as learning deep embeddings of the local geometry and radiometry of 3D points…
Paris-Lille-3D: A large and high-quality ground-truth urban point cloud dataset for automatic segmentation and classification
This paper introduces a new urban point cloud dataset for automatic segmentation and classification acquired by mobile laser scanning (MLS). We describe how the dataset is obtained from acquisition…
RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds
This paper introduces RandLA-Net, an efficient and lightweight neural architecture to directly infer per-point semantics for large-scale point clouds, and introduces a novel local feature aggregation module to progressively increase the receptive field for each 3D point, thereby effectively preserving geometric details.
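
A key efficiency choice in RandLA-Net is to downsample layers by uniform random sampling, which is O(N), instead of farthest point sampling, which is roughly O(N^2); the lost geometric coverage is compensated by the local feature aggregation module. The sampling itself is trivial:

```python
import numpy as np

def random_downsample(points, ratio=4, seed=0):
    """Keep N/ratio points chosen uniformly at random -- O(N),
    unlike farthest point sampling which is ~O(N^2) per layer."""
    rng = np.random.default_rng(seed)
    n_keep = len(points) // ratio
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

cloud = np.random.default_rng(1).random((1_000_000, 3))
sub = random_downsample(cloud, ratio=4)
print(sub.shape)  # (250000, 3)
```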
4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks
This work creates an open-source auto-differentiation library for sparse tensors that provides extensive functions for high-dimensional convolutional neural networks and proposes the hybrid kernel, a special case of the generalized sparse convolution, and trilateral-stationary conditional random fields that enforce spatio-temporal consistency in the 7D space-time-chroma space.
3D Semantic Segmentation with Submanifold Sparse Convolutional Networks
This work introduces new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and uses them to develop Spatially-Sparse Convolutional networks, which outperform all prior state-of-the-art models on two tasks involving semantic segmentation of 3D point clouds.
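
The sparse-tensor layout these operations run on can be sketched as a hash map over occupied voxels only, so empty space costs nothing (a toy illustration of the data structure, not the library's implementation):

```python
import numpy as np

def voxelize_sparse(points, voxel=0.05):
    """Hash only occupied voxels -- the sparse layout that
    submanifold sparse convolutions operate on."""
    grid = {}
    for p in points:
        key = tuple((p // voxel).astype(int))   # integer voxel coordinate
        grid.setdefault(key, []).append(p)
    return grid

pts = np.array([[0.01, 0.01, 0.01], [0.02, 0.01, 0.0], [1.0, 1.0, 1.0]])
grid = voxelize_sparse(pts)
print(len(grid))  # 2 occupied voxels
```

A submanifold convolution then evaluates its kernel only at these active sites, which is what keeps deep 3D networks tractable on large scenes.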
PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
A hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set and proposes novel set learning layers to adaptively combine features from multiple scales to learn deep point set features efficiently and robustly.
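
The nested partitioning in PointNet++ is built around farthest point sampling, which greedily picks well-spread centroids; a compact version of the greedy loop:

```python
import numpy as np

def farthest_point_sampling(points, n_samples, seed=0):
    """Greedy FPS: start from a random point, then repeatedly add
    the point farthest from everything chosen so far."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(n_samples - 1):
        nxt = int(dist.argmax())              # farthest from the chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

square = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [0.5, 0.5]])
print(farthest_point_sampling(square, 4))
```

Each centroid then anchors a ball query whose neighbourhood is fed to a small PointNet, giving the hierarchy of local-to-global features the abstract describes.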
3D Semantic Parsing of Large-Scale Indoor Spaces
This paper argues that identification of structural elements in indoor spaces is essentially a detection problem, rather than segmentation which is commonly used, and proposes a method for semantic parsing the 3D point cloud of an entire building using a hierarchical approach.
FKAConv: Feature-Kernel Alignment for Point Cloud Convolution
This paper provides a formulation to relate and analyze a number of point convolution methods, and proposes its own convolution variant, that separates the estimation of geometry-less kernel weights and their alignment to the spatial support of features.
Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution
This work proposes Sparse Point-Voxel Convolution (SPVConv), a lightweight 3D module that equips the vanilla Sparse Convolution with the high-resolution point-based branch, and presents 3D Neural Architecture Search (3D-NAS) to search the optimal network architecture over this diverse design space efficiently and effectively.