Corpus ID: 246063356

CP-Net: Contour-Perturbed Reconstruction Network for Self-Supervised Point Cloud Learning

Mingye Xu, Zhipeng Zhou, Hongbin Xu, Yali Wang, Yu Qiao
Self-supervised learning has not been fully explored for point cloud analysis. Current frameworks are mainly based on point cloud reconstruction. Given only 3D coordinates, such approaches tend to learn local geometric structures and contours while failing to understand high-level semantic content. Consequently, they achieve unsatisfactory performance in downstream tasks such as classification and segmentation. To fill this gap, we propose a generic Contour-Perturbed Reconstruction Network…
Masked Surfel Prediction for Self-Supervised Point Cloud Learning
This work makes the first attempt, to the best of our knowledge, to explicitly incorporate local geometry information into masked auto-encoding, and proposes a novel Masked Surfel Prediction (MaskSurf) method, which outperforms its closest competitor, Point-MAE, by 1.2% on the real-world ScanObjectNN dataset under the OBJ-BG setting.


FoldingNet: Point Cloud Auto-Encoder via Deep Grid Deformation
A novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds, and is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid.
Self-Contrastive Learning with Hard Negative Sampling for Self-supervised Point Cloud Learning
A novel self-contrastive learning framework is proposed for self-supervised point cloud representation learning, aiming to capture both local geometric patterns and nonlocal semantic primitives based on the nonlocal self-similarity of point clouds.
Multi-Angle Point Cloud-VAE: Unsupervised Feature Learning for 3D Point Clouds From Multiple Angles by Joint Self-Reconstruction and Half-to-Half Prediction
The outperforming results in four shape analysis tasks show that MAP-VAE can learn more discriminative global or local features than the state-of-the-art methods.
Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud
GDANet introduces a Geometry-Disentangle Module to dynamically disentangle point clouds into the contour and flat parts of 3D objects, respectively denoted by sharp and gentle variation components, and exploits a Sharp-Gentle Complementary Attention Module that regards the features from the sharp and gentle variation components as two holistic representations.
Dynamic Graph CNN for Learning on Point Clouds
This work proposes EdgeConv, a new neural network module suitable for CNN-based high-level tasks on point clouds, including classification and segmentation, which acts on graphs dynamically computed in each layer of the network.
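The core EdgeConv idea can be illustrated with a minimal NumPy sketch: for each point, gather its k nearest neighbors, build edge features by concatenating the center point with the neighbor offsets, pass them through a shared transform (a single random linear layer plus ReLU here, standing in for the paper's learned MLP), and max-pool over neighbors. Names, dimensions, and the toy transform are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def edge_conv(points, k=4, out_dim=8, rng=None):
    """Minimal EdgeConv sketch: edge features [x_i, x_j - x_i] over a
    k-NN graph, a shared linear+ReLU transform, and max-pooling over
    neighbors. The transform is a random stand-in for a learned MLP."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = points.shape
    # Pairwise squared distances; exclude self before taking k-NN.
    diff = points[:, None, :] - points[None, :, :]
    dist = (diff ** 2).sum(-1)
    np.fill_diagonal(dist, np.inf)
    knn = np.argsort(dist, axis=1)[:, :k]               # (n, k) neighbor indices
    # Edge features: concatenate center point with neighbor offsets.
    center = np.repeat(points[:, None, :], k, axis=1)   # (n, k, d)
    neighbor = points[knn]                              # (n, k, d)
    edges = np.concatenate([center, neighbor - center], axis=-1)  # (n, k, 2d)
    W = rng.standard_normal((2 * d, out_dim))
    feats = np.maximum(edges @ W, 0.0)                  # shared "MLP" + ReLU
    return feats.max(axis=1)                            # max over neighbors

pts = np.random.default_rng(1).standard_normal((16, 3))
out = edge_conv(pts)
print(out.shape)  # (16, 8)
```

In the actual network the graph is recomputed from the learned features at every layer, which is what lets the receptive field grow beyond the input's metric neighborhoods.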
Geometry Sharing Network for 3D Point Cloud Classification and Segmentation
This work proposes the Geometry Sharing Network (GS-Net), which effectively learns point descriptors with holistic context to enhance robustness to geometric transformations, and shows that the nearest neighbors of each point in eigenvalue space are invariant to rotation and translation.
PointWise: An Unsupervised Point-wise Feature Learning Network
A deep learning framework that learns point-wise descriptions from a set of shapes without supervision, leveraging self-supervision to define a relevant loss function for learning rich per-point features, and demonstrating its ability to capture meaningful point-wise features through three applications.
L2G Auto-encoder: Understanding Point Clouds by Local-to-Global Reconstruction with Hierarchical Self-Attention
The Local-to-Global auto-encoder (L2G-AE) is proposed to simultaneously learn the local and global structure of point clouds through local-to-global reconstruction, understanding point clouds better than state-of-the-art methods.
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding
This work aims at facilitating research on 3D representation learning by selecting a suite of diverse datasets and tasks to measure the effect of unsupervised pre-training on a large source set of 3D scenes, achieving improvements over recent best results in segmentation and detection across 6 different benchmarks.