PPF-FoldNet: Unsupervised Learning of Rotation Invariant 3D Local Descriptors

@article{Deng2018PPFFoldNetUL,
  title={PPF-FoldNet: Unsupervised Learning of Rotation Invariant 3D Local Descriptors},
  author={Haowen Deng and Tolga Birdal and Slobodan Ilic},
  journal={ArXiv},
  year={2018},
  volume={abs/1808.10322}
}
We present PPF-FoldNet for unsupervised learning of 3D local descriptors on pure point cloud geometry. Based on the folding-based auto-encoding of well known point pair features, PPF-FoldNet offers many desirable properties: it necessitates neither supervision, nor a sensitive local reference frame, benefits from point-set sparsity, is end-to-end, fast, and can extract powerful rotation invariant descriptors. Thanks to a novel feature visualization, its evolution can be monitored to provide… 
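The point pair features that PPF-FoldNet auto-encodes follow the classical four-dimensional construction of Drost et al.: for two oriented points, the feature collects the distance between them and three angles relating their normals to the difference vector. Below is a minimal NumPy sketch of that construction; the function names and the patch-encoding loop are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def angle(v1, v2):
    """Unsigned angle between two vectors, robust to numerical noise."""
    v1 = v1 / (np.linalg.norm(v1) + 1e-12)
    v2 = v2 / (np.linalg.norm(v2) + 1e-12)
    return np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))

def point_pair_feature(p1, n1, p2, n2):
    """Classical 4D point pair feature:
    (||d||, angle(n1, d), angle(n2, d), angle(n1, n2))."""
    d = p2 - p1
    return np.array([np.linalg.norm(d), angle(n1, d), angle(n2, d), angle(n1, n2)])

# Illustrative patch encoding: pair a reference point with each neighbour.
# All inputs below are hypothetical placeholders.
ref_p, ref_n = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
neighbours_p = np.random.rand(16, 3)
neighbours_n = np.random.rand(16, 3)
ppfs = np.stack([point_pair_feature(ref_p, ref_n, q, m)
                 for q, m in zip(neighbours_p, neighbours_n)])
print(ppfs.shape)  # (16, 4): one rotation-invariant 4D feature per pair
```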

DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization

TLDR
A Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points and integrates FlexConv and Squeeze-and-Excitation to ensure that the learned local descriptor captures multi-level geometric information and channel-wise relations.
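Squeeze-and-Excitation, mentioned in the summary above, recalibrates per-channel feature responses: features are globally pooled ("squeeze"), passed through a small bottleneck MLP, and used as channel-wise gates ("excitation"). A minimal PyTorch sketch for per-point features follows; the module name and reduction ratio are assumptions, and this is not DH3D's exact layer.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel-wise gating over per-point features of shape (B, C, N)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):             # x: (B, C, N)
        squeeze = x.mean(dim=2)       # global average pool over points -> (B, C)
        gate = self.fc(squeeze)       # channel-wise weights in (0, 1)
        return x * gate.unsqueeze(2)  # re-weight each channel

feats = torch.randn(2, 64, 1024)      # hypothetical per-point features
print(SEBlock(64)(feats).shape)       # torch.Size([2, 64, 1024])
```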

WSDesc: Weakly Supervised 3D Local Descriptor Learning for Point Cloud Registration

TLDR
This work proposes a novel registration loss based on the deviation from rigidity of 3D transformations, and the loss is weakly supervised by the prior knowledge that the input point clouds have partial overlap, without requiring ground-truth alignment information.
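One common way to quantify the "deviation from rigidity" of an estimated transformation, in the spirit of the summary above, is to measure how far its linear part is from a proper rotation. The NumPy sketch below shows that general idea only; it is not WSDesc's actual loss, and the function name is hypothetical.

```python
import numpy as np

def rigidity_deviation(A):
    """How far a 3x3 linear map A is from a proper rotation:
    orthogonality residual ||A^T A - I||_F plus determinant residual |det(A) - 1|."""
    ortho_err = np.linalg.norm(A.T @ A - np.eye(3), ord="fro")
    det_err = abs(np.linalg.det(A) - 1.0)
    return ortho_err + det_err

R = np.eye(3)                    # a perfect rotation -> deviation ~ 0
S = np.diag([1.2, 1.0, 0.8])     # a non-rigid (scaling) map -> positive deviation
print(rigidity_deviation(R), rigidity_deviation(S))
```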

SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration

TLDR
This paper introduces a new, yet conceptually simple, neural architecture, termed SpinNet, to extract local features which are rotationally invariant whilst sufficiently informative to enable accurate registration.

UPDesc: Unsupervised Point Descriptor Learning for Robust Registration

TLDR
This work builds upon a recent supervised 3D CNN-based descriptor extraction framework, namely, 3DSmoothNet, which leverages a voxel-based representation to parameterize the surrounding geometry of interest points, and proposes UPDesc, an unsupervised method to learn point descriptors for robust point cloud registration.

Distinctive 3D local deep descriptors

  • F. Poiesi, D. Boscaini
  • Computer Science
    2020 25th International Conference on Pattern Recognition (ICPR)
  • 2021
TLDR
3D local deep descriptors are extracted, canonicalised with respect to their estimated local reference frame and encoded into rotation-invariant compact descriptors by a PointNet-based deep neural network to be used to register point clouds without requiring an initial alignment.
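The summary above hinges on canonicalising a local patch with respect to an estimated local reference frame (LRF). A standard way to estimate such a frame is an eigendecomposition of the neighbourhood covariance, as in the NumPy sketch below; sign disambiguation and weighting schemes vary between methods, so this is only an illustration, not this descriptor's exact LRF.

```python
import numpy as np

def local_reference_frame(patch, center):
    """Estimate a 3x3 LRF from a (K, 3) neighbourhood via covariance eigenvectors.
    The sign disambiguation here is a simple heuristic; real LRFs are more careful."""
    diffs = patch - center
    cov = diffs.T @ diffs / len(patch)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    z = eigvecs[:, 0]                           # normal ~ smallest eigenvalue
    x = eigvecs[:, 2]                           # dominant tangent direction
    # Orient axes consistently w.r.t. the mean offset (illustrative heuristic).
    if np.dot(z, diffs.mean(axis=0)) > 0: z = -z
    if np.dot(x, diffs.mean(axis=0)) < 0: x = -x
    y = np.cross(z, x)
    return np.stack([x, y, z], axis=1)          # columns are the frame axes

patch = np.random.rand(64, 3)                   # hypothetical neighbourhood
R = local_reference_frame(patch, patch.mean(axis=0))
canonical = (patch - patch.mean(axis=0)) @ R    # patch expressed in the LRF
```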

Learning an Effective Equivariant 3D Descriptor Without Supervision

TLDR
The benefits of taking a step back in the direction of end-to-end learning of 3D descriptors are explored by disentangling the creation of a robust and distinctive rotation equivariant representation, which can be learned from unoriented input data, and the definition of a good canonical orientation.

A Rotation-Invariant Framework for Deep Point Cloud Analysis

TLDR
A new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs is introduced, and a network architecture is presented to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
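Replacing Cartesian coordinates with purely rotation-invariant quantities, as described above, typically means feeding the network distances and angles that are unchanged under rigid rotation. The NumPy sketch below builds one such toy representation for a point and its neighbours; the specific quantities chosen here are illustrative, not the paper's exact input encoding.

```python
import numpy as np

def rotation_invariant_inputs(p, neighbours, centroid):
    """Toy rotation-invariant encoding of a point w.r.t. its neighbours:
    per-neighbour distances plus the cosine of the angle between the
    point->centroid and point->neighbour directions."""
    to_c = centroid - p
    to_n = neighbours - p                               # (K, 3)
    d_n = np.linalg.norm(to_n, axis=1)                  # distances to neighbours
    d_c = np.linalg.norm(to_c)                          # distance to centroid
    cos_a = to_n @ to_c / (d_n * d_c + 1e-12)           # angles via cosines
    return np.stack([d_n, np.full_like(d_n, d_c), np.clip(cos_a, -1, 1)], axis=1)

pts = np.random.rand(32, 3)
feats = rotation_invariant_inputs(pts[0], pts[1:], pts.mean(axis=0))
print(feats.shape)   # (31, 3): invariant under any rotation applied to pts
```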

Rotation-Invariant Local-to-Global Representation Learning for 3D Point Cloud

We propose a local-to-global representation learning algorithm for 3D point cloud data, which is appropriate to handle various geometric transformations, especially rotation, without explicit data…

Learning general and distinctive 3D local deep descriptors for point cloud registration.

  • F. Poiesi, D. Boscaini
  • Computer Science
    IEEE transactions on pattern analysis and machine intelligence
  • 2022
TLDR
These descriptors outperform most recent descriptors by a large margin in terms of generalisation, and also become the state of the art in benchmarks where training and testing are performed in the same domain.

You Only Hypothesize Once: Point Cloud Registration with Rotation-equivariant Descriptors

TLDR
This paper proposes a novel local descriptor-based framework, called YOHO, for the registration of two unaligned point clouds, which achieves rotation invariance through recent group-equivariant feature learning techniques, bringing more robustness to variations in point density and noise.
...

References

SHOWING 1-10 OF 56 REFERENCES

3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions

TLDR
3DMatch is presented, a data-driven model that learns a local volumetric patch descriptor for establishing correspondences between partial 3D data that consistently outperforms other state-of-the-art approaches by a significant margin.

FoldingNet: Point Cloud Auto-Encoder via Deep Grid Deformation

TLDR
A novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds, and is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid.
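The "deep grid deformation" in the summary above concatenates a learned codeword with the coordinates of a fixed 2D grid and folds the grid twice through small MLPs to reconstruct the point cloud. Below is a compact PyTorch sketch of that decoder; layer widths and the grid size are assumptions, not the original architecture's exact values.

```python
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    """Fold a fixed 2D grid into a 3D point cloud, conditioned on a codeword."""
    def __init__(self, code_dim=512, grid_res=45):
        super().__init__()
        lin = torch.linspace(-0.5, 0.5, grid_res)
        u, v = torch.meshgrid(lin, lin, indexing="ij")
        self.register_buffer("grid", torch.stack([u.reshape(-1), v.reshape(-1)], dim=1))
        def mlp(in_dim):
            return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 128), nn.ReLU(),
                                 nn.Linear(128, 3))
        self.fold1 = mlp(code_dim + 2)   # codeword + 2D grid point  -> 3D point
        self.fold2 = mlp(code_dim + 3)   # codeword + folded 3D point -> refined 3D point

    def forward(self, code):                                      # code: (B, code_dim)
        B, M = code.shape[0], self.grid.shape[0]
        grid = self.grid.unsqueeze(0).expand(B, -1, -1)
        code_rep = code.unsqueeze(1).expand(-1, M, -1)
        pts = self.fold1(torch.cat([code_rep, grid], dim=2))      # first folding
        pts = self.fold2(torch.cat([code_rep, pts], dim=2))       # second folding
        return pts                                                # (B, M, 3)

recon = FoldingDecoder()(torch.randn(2, 512))
print(recon.shape)   # torch.Size([2, 2025, 3])
```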

Learning Local Shape Descriptors from Part Correspondences with Multiview Convolutional Networks

TLDR
A new local descriptor for 3D shapes is presented, directly applicable to a wide range of shape analysis problems such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching by a convolutional network trained to embed geometrically and semantically similar points close to one another in descriptor space.

PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space

TLDR
A hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set and proposes novel set learning layers to adaptively combine features from multiple scales to learn deep point set features efficiently and robustly.
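The nested partitioning mentioned above is typically seeded by farthest point sampling, which picks well-spread centroids before neighbours are grouped around each one and passed through a small PointNet. The NumPy sketch below shows only that sampling step, as an illustration of how such a partition is seeded.

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedy FPS: repeatedly pick the point farthest from all chosen centroids.
    points: (N, 3); returns indices of k well-spread centroids."""
    chosen = [0]                                       # arbitrary seed point
    dists = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))                    # current farthest point
        chosen.append(idx)
        dists = np.minimum(dists, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)

cloud = np.random.rand(2048, 3)
centroids = farthest_point_sampling(cloud, 128)        # seeds of the partition
print(centroids.shape)                                 # (128,)
```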

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

TLDR
This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
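The permutation invariance noted above comes from applying the same per-point MLP to every point and collapsing the result with a symmetric function (max pooling), so the output is unchanged when the input order is shuffled. A minimal PyTorch sketch, not the full PointNet architecture:

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Shared per-point MLP followed by a symmetric max-pool: order-invariant."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim), nn.ReLU())

    def forward(self, pts):                  # pts: (B, N, 3)
        per_point = self.mlp(pts)            # same weights applied to every point
        return per_point.max(dim=1).values   # symmetric aggregation over points

net = TinyPointNet()
x = torch.rand(1, 1024, 3)
perm = torch.randperm(1024)
print(torch.allclose(net(x), net(x[:, perm])))  # True: point order does not matter
```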

Neighbors Do Help: Deeply Exploiting Local Structures of Point Clouds

TLDR
Two new operations to improve PointNet with more efficient exploitation of local structures are presented, one focuses on local 3D geometric structures and the other exploits local feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions.

Fast Point Feature Histograms (FPFH) for 3D registration

TLDR
This paper modifies their mathematical expressions and performs a rigorous analysis of their robustness and complexity for the problem of 3D registration of overlapping point cloud views, and proposes an algorithm for the online computation of FPFH features for real-time applications.
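FPFH descriptors are available in off-the-shelf libraries; the sketch below shows one way to compute them with Open3D, assuming Open3D is installed. The point cloud and the search radii are illustrative values, not prescribed by the paper.

```python
import numpy as np
import open3d as o3d

# Hypothetical point cloud; in practice this would be loaded from a scan.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.random.rand(2048, 3))

# FPFH needs normals; estimate them from a small neighbourhood first.
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# 33-dimensional FPFH descriptor per point, from a larger support radius.
fpfh = o3d.pipelines.registration.compute_fpfh_feature(
    pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=0.25, max_nn=100))
print(np.asarray(fpfh.data).shape)   # (33, 2048)
```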

3D Point Cloud Registration for Localization Using a Deep Neural Network Auto-Encoder

We present an algorithm for registration between a large-scale point cloud and a close-proximity scanned point cloud, providing a localization solution that is fully independent of prior information…

Learning Representations and Generative Models for 3D Point Clouds

TLDR
A deep AutoEncoder network with state-of-the-art reconstruction quality and generalization ability is introduced with results that outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations.

Frustum PointNets for 3D Object Detection from RGB-D Data

TLDR
This work directly operates on raw point clouds by popping up RGBD scans and leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects.
...