Corpus ID: 1745976

PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space

C. Qi, L. Yi, Hao Su, Leonidas J. Guibas
Few prior works study deep learning on point sets. […] With the further observation that point sets are usually sampled with varying densities, which greatly degrades the performance of networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network, called PointNet++, is able to learn deep point set features efficiently and robustly. In particular, results significantly better than the state-of-the…
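The hierarchical scheme the abstract describes can be sketched very roughly as one "set abstraction" level: sample well-spread centroids, group each centroid's neighbours in local coordinates, and max-pool a shared per-point transform over each group. This is an illustrative numpy sketch, not the authors' code; the function names and the random stand-in weights are assumptions.

```python
# Illustrative sketch of one PointNet++-style "set abstraction" level
# (hypothetical code, not the paper's release). Weights are random
# stand-ins for a learned pointwise MLP.
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Greedily pick n_samples points that are mutually far apart."""
    n = points.shape[0]
    chosen = [0]                      # start from an arbitrary point
    dist = np.full(n, np.inf)
    for _ in range(n_samples - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return np.array(chosen)

def set_abstraction(points, n_centroids, k, mlp_weights):
    """One hierarchical level: (N, 3) points -> (n_centroids, C) features."""
    centroids = points[farthest_point_sampling(points, n_centroids)]
    feats = []
    for c in centroids:
        # group the k nearest neighbours, expressed in local coordinates
        idx = np.argsort(np.linalg.norm(points - c, axis=1))[:k]
        local = points[idx] - c                     # (k, 3)
        h = np.maximum(local @ mlp_weights, 0.0)    # shared "MLP" + ReLU
        feats.append(h.max(axis=0))                 # symmetric max pool
    return centroids, np.stack(feats)

rng = np.random.default_rng(0)
cloud = rng.normal(size=(256, 3))
W = rng.normal(size=(3, 16))                        # stand-in learned weights
cents, f = set_abstraction(cloud, n_centroids=32, k=16, mlp_weights=W)
print(cents.shape, f.shape)                         # (32, 3) (32, 16)
```

Stacking such levels, with neighbourhoods of several radii per centroid, is what lets the paper's multi-scale layers adapt to varying sampling density.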


Local Spectral Graph Convolution for Point Set Feature Learning

This article replaces the standard max pooling step with a recursive clustering and pooling strategy, devised to aggregate information from within clusters of nodes that are close to one another in their spectral coordinates, leading to richer overall feature descriptors.

PointWise: An Unsupervised Point-wise Feature Learning Network

A deep learning framework that learns a point-wise description from a set of shapes without supervision, leveraging self-supervision to define a loss function that yields rich per-point features; the method's ability to capture meaningful point-wise features is demonstrated through three applications.

Deep Cascade Generation on Point Sets

This paper proposes a deep cascade network to generate 3D geometry of an object on a point cloud, consisting of a set of permutation-insensitive points, and develops a dynamically-weighted loss function for jointly penalizing the generation output of cascade levels at different training stages in a coarse-to-fine manner.


PointCNN: Convolution On X-Transformed Points

This work proposes to learn an X-transformation from the input points, which simultaneously weights the input features associated with the points and permutes them into a latent, potentially canonical order; the resulting architecture is called PointCNN.

PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding

This work aims to facilitate research on 3D representation learning by selecting a suite of diverse datasets and tasks to measure the effect of unsupervised pre-training on a large source set of 3D scenes, achieving improvements over recent best results in segmentation and detection across 6 different benchmarks.

MetaSets: Meta-Learning on Point Sets for Generalizable Representations

MetaSets is proposed, which meta-learns point cloud representations from a group of classification tasks on carefully-designed transformed point sets containing specific geometry priors, which are more generalizable to various unseen domains of different geometries.

Deep Neural Network for Point Sets Based on Local Feature Integration

The proposed model bridges the gap between classical networks and point cloud processing; it is comparable to or better than most existing methods on classification and segmentation tasks and shows good local feature integration ability.

On Learning the Right Attention Point for Feature Enhancement

A novel attention-based mechanism that learns enhanced point features for point cloud processing tasks, e.g., classification and segmentation, together with a new and simple convolution that combines convolutional features from an input point and its corresponding learned attention point (LAP for short).

Neighbors Do Help: Deeply Exploiting Local Structures of Point Clouds

Two new operations that improve PointNet with more efficient exploitation of local structures are presented: one focuses on local 3D geometric structures, while the other exploits local feature structures by recursive feature aggregation on a nearest-neighbor graph computed from 3D positions.

LSANet: Feature Learning on Point Sets by Local Spatial Attention

This work designs a novel Local Spatial Attention (LSA) module that adaptively generates attention maps according to the spatial distribution of local regions, and proposes the Spatial Feature Extractor (SFE), a branch architecture that better aggregates the spatial information with the associated features in each layer of the network.



PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
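The permutation invariance this summary refers to comes from applying the same function to every point and aggregating with a symmetric operation. A minimal sketch of that core idea (illustrative only, with random stand-in weights, not the PointNet implementation):

```python
# Minimal sketch of PointNet's core idea (hypothetical code): a shared
# per-point transform followed by symmetric max pooling, so the global
# feature is invariant to the order of the input points.
import numpy as np

def pointnet_global_feature(points, W):
    h = np.maximum(points @ W, 0.0)   # same transform applied to every point
    return h.max(axis=0)              # order-independent max pooling

rng = np.random.default_rng(1)
pts = rng.normal(size=(100, 3))
W = rng.normal(size=(3, 8))           # stand-in for learned MLP weights
g1 = pointnet_global_feature(pts, W)
g2 = pointnet_global_feature(pts[::-1], W)  # same points, reversed order
print(np.allclose(g1, g2))            # True: feature ignores permutation
```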

Dynamic Graph CNN for Learning on Point Clouds

This work proposes a new neural network module suitable for CNN-based high-level tasks on point clouds, including classification and segmentation called EdgeConv, which acts on graphs dynamically computed in each layer of the network.
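A rough sketch of the EdgeConv idea described above, assuming a concatenated edge feature of the form (x_i, x_j − x_i) over k nearest neighbours; this is hypothetical code, not the DGCNN release, and the weights are random stand-ins:

```python
# Illustrative EdgeConv-style layer (not the authors' code): build edge
# features over each point's k nearest neighbours, transform them with
# shared weights, and max-aggregate. Recomputing the kNN graph from the
# current features at every layer is what makes the graph "dynamic".
import numpy as np

def edge_conv(x, W, k=8):
    """x: (N, F) features -> (N, C); W: (2F, C) stand-in learned weights."""
    n = x.shape[0]
    out = []
    for i in range(n):
        d = np.linalg.norm(x - x[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]                       # skip the point itself
        edges = np.concatenate(
            [np.repeat(x[i][None], k, axis=0), x[nbrs] - x[i]], axis=1)
        out.append(np.maximum(edges @ W, 0.0).max(axis=0))  # shared MLP + max
    return np.stack(out)

rng = np.random.default_rng(2)
pts = rng.normal(size=(64, 3))
W1 = rng.normal(size=(6, 16))
y = edge_conv(pts, W1)        # first layer: graph from 3D positions
print(y.shape)                # (64, 16)
```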

A-CNN: Annularly Convolutional Neural Networks on Point Clouds

A new method to define and compute convolution directly on 3D point clouds via the proposed annular convolution, which better captures the local neighborhood geometry of each point by specifying (regular and dilated) ring-shaped structures and directions in the computation.

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
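The residual framework summarized here amounts to having each block learn a residual F(x) and output F(x) + x, so identity mappings are trivial to represent. A tiny illustrative numpy sketch (not the paper's architecture):

```python
# Hedged sketch of a residual block (hypothetical code): the block's
# output is its input plus a learned residual, so setting the residual
# branch to zero recovers the identity mapping exactly.
import numpy as np

def residual_block(x, W1, W2):
    return x + np.maximum(x @ W1, 0.0) @ W2   # F(x) + shortcut x

rng = np.random.default_rng(3)
x = rng.normal(size=(4, 8))
W1, W2 = rng.normal(size=(8, 8)), np.zeros((8, 8))
y = residual_block(x, W1, W2)   # with W2 = 0 the block is exactly identity
print(np.allclose(y, x))        # True
```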

3D ShapeNets: A deep representation for volumetric shapes

This work proposes to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network, and shows that this 3D deep representation enables significant performance improvements over the state of the art in a variety of tasks.

Deep learning with geodesic moments for 3D shape classification

Spectral Networks and Locally Connected Networks on Graphs

This paper considers possible generalizations of CNNs to signals defined on more general domains without the action of a translation group, and proposes two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian.
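The spectral construction mentioned above defines filters as functions of the graph Laplacian's eigenvalues, applied in the Laplacian's eigenbasis. A small illustrative sketch under that assumption (not the paper's exact construction):

```python
# Hedged sketch of spectral filtering on a graph (illustrative): form the
# combinatorial Laplacian L = D - A, move a per-node signal into L's
# eigenbasis, scale each frequency component, and transform back.
import numpy as np

def spectral_filter(A, signal, filter_fn):
    """Filter a per-node signal by scaling its Laplacian eigencomponents."""
    L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian
    eigvals, U = np.linalg.eigh(L)          # U: orthonormal eigenbasis of L
    gains = filter_fn(eigvals)              # per-frequency filter gains
    return U @ (gains * (U.T @ signal))     # analyze, scale, synthesize

# 4-node path graph; a low-pass filter attenuates the oscillating signal
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, -1.0, 1.0, -1.0])
y = spectral_filter(A, x, lambda lam: 1.0 / (1.0 + lam))
print(y)
```

Parametrizing `filter_fn` (e.g. by its values on the spectrum) is what turns this fixed filter into a learnable spectral convolution.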

Network In Network

With enhanced local modeling via the micro network, the proposed deep network structure NIN is able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers.

SyncSpecCNN: Synchronized Spectral CNN for 3D Shape Segmentation

This paper introduces a spectral parametrization of dilated convolutional kernels and a spectral transformer network, enabling weight sharing by parametrizing kernels in the spectral domain spanned by graph Laplacian eigenbases; it strives to overcome two key challenges: how to share coefficients across shapes, and how to conduct multi-scale analysis in different parts of the graph for a single shape.