• Corpus ID: 235358236

FINet: Dual Branches Feature Interaction for Partial-to-Partial Point Cloud Registration

Hao Xu, Nianjin Ye, Shuaicheng Liu, Guanghui Liu, Bing Zeng
Data association is important in point cloud registration. In this work, we propose to solve partial-to-partial registration from a new perspective, by introducing multi-level feature interactions between the source and the reference clouds at the feature extraction stage, such that registration can be realized without the attention mechanisms or explicit mask estimation for overlap detection adopted previously. Specifically, we present FINet, a feature interaction-based…
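As background for the registration task these papers address: once correspondences between the two clouds are established, the rigid transform itself has a closed-form least-squares solution via SVD (the Kabsch step that many learned pipelines, including several listed below, end with). A minimal numpy sketch, illustrative only and not FINet's pipeline:

```python
import numpy as np

def kabsch(src, ref):
    """Least-squares rigid transform (R, t) mapping src onto ref,
    assuming known 1:1 correspondences between the (N, 3) arrays."""
    src_mean = src.mean(axis=0)
    ref_mean = ref.mean(axis=0)
    src_c = src - src_mean          # center both clouds
    ref_c = ref - ref_mean
    H = src_c.T @ ref_c             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])      # guard against reflections
    R = Vt.T @ D @ U.T
    t = ref_mean - R @ src_mean
    return R, t
```

With clean correspondences this recovers the ground-truth motion exactly; the hard part, which the papers below attack with learned features, is producing those correspondences for partial, noisy clouds.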

Related papers


PRNet: Self-Supervised Learning for Partial-to-Partial Registration
This work uses deep networks to tackle the non-convexity of the alignment problem and the partial-correspondence problem in partial-to-partial point cloud registration, and shows that PRNet predicts keypoints and correspondences consistently across views and objects.
PointNetLK: Robust & Efficient Point Cloud Registration Using PointNet
It is argued that PointNet itself can be thought of as a learnable "imaging" function, and classical vision algorithms for image alignment can be brought to bear on the problem -- namely the Lucas & Kanade (LK) algorithm.
PCRNet: Point Cloud Registration Network using PointNet Encoding
A novel framework that uses the PointNet representation to align point clouds and perform registration for applications such as tracking, 3D reconstruction and pose estimation is presented.
Deep Closest Point: Learning Representations for Point Cloud Registration
  • Yue Wang, J. Solomon
  • Computer Science
    2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2019
This work proposes a learning-based method, titled Deep Closest Point (DCP), inspired by recent techniques in computer vision and natural language processing, that provides a state-of-the-art registration technique and evaluates the suitability of the learned features transferred to unseen objects.
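DCP's central trick, converting feature similarity into a differentiable soft matching matrix that can feed the closed-form SVD solver, can be sketched as follows. The function name and temperature parameter are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def soft_correspondences(feat_src, feat_ref, temperature=0.1):
    """Row-stochastic matching matrix from feature similarity
    (DCP-style soft pointer; names/temperature are illustrative).
    feat_src: (N, D), feat_ref: (M, D) per-point embeddings."""
    sim = (feat_src @ feat_ref.T) / temperature   # (N, M) scaled similarity
    sim = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    w = np.exp(sim)
    w = w / w.sum(axis=1, keepdims=True)          # softmax over ref points
    return w  # w @ ref gives a "virtual" matched point per source point
```

Because the softmax is differentiable, gradients flow from the alignment loss back into the feature extractor, which is what makes end-to-end training of such pipelines possible.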
Feature-Metric Registration: A Fast Semi-Supervised Approach for Robust Point Cloud Registration Without Correspondences
A fast feature-metric point cloud registration framework that drives the optimisation by minimising a feature-metric projection error without correspondences, obtaining higher accuracy and robustness than state-of-the-art methods.
Fast Point Feature Histograms (FPFH) for 3D registration
This paper modifies the features' mathematical expressions, performs a rigorous analysis of their robustness and complexity for 3D registration of overlapping point cloud views, and proposes an algorithm for the online computation of FPFH features for real-time applications.
D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features
This paper proposes a keypoint selection strategy that overcomes the inherent density variations of 3D point clouds, and a self-supervised detector loss guided by on-the-fly feature matching results during training.
DeepGMR: Learning Latent Gaussian Mixture Models for Registration
Deep Gaussian Mixture Registration (DeepGMR) is introduced, the first learning-based registration method that explicitly leverages a probabilistic registration paradigm by formulating registration as the minimization of KL-divergence between two probability distributions modeled as mixtures of Gaussians.
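DeepGMR's objective is the KL-divergence between Gaussian mixtures, which has no closed form; for intuition, the single-component (univariate) case does, and can be written out directly. A hedged illustration of that closed form, not DeepGMR's actual mixture-level loss:

```python
import numpy as np

def kl_gaussians(mu1, s1, mu2, s2):
    """Closed-form KL( N(mu1, s1^2) || N(mu2, s2^2) ) for univariate
    Gaussians. Mixtures of Gaussians, as used in DeepGMR, require
    approximations; this single-component case is only illustrative."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5
```

The divergence is zero exactly when the two Gaussians coincide, which is why minimising it between the two clouds' latent mixtures drives them into alignment.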
3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration
3DFeat-Net is proposed, which learns both a 3D feature detector and a descriptor for point cloud matching using weak supervision, and obtains state-of-the-art performance on gravity-aligned datasets.
DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion
DenseFusion is a generic framework for estimating the 6D pose of a set of known objects from RGB-D images; it processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embeddings, from which the pose is estimated.