PointGrow: Autoregressively Learned Point Cloud Generation with Self-Attention

  • Yongbin Sun, Yue Wang, Ziwei Liu, Joshua E. Siegel, Sanjay E. Sarma
  • Published 12 October 2018
  • Computer Science
  • 2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
Generating 3D point clouds is challenging yet highly desired. This work presents a novel autoregressive model, PointGrow, which can generate diverse and realistic point cloud samples from scratch or conditioned on semantic contexts. This model operates recurrently, with each point sampled according to a conditional distribution given its previously generated points, allowing inter-point correlations to be well exploited and 3D shape generative processes to be better interpreted. Since point…
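The autoregressive factoring described above, p(S) = ∏ᵢ p(sᵢ | s₁, …, sᵢ₋₁), can be illustrated with a minimal sketch. The `toy_conditional` below is a hypothetical stand-in for PointGrow's learned conditional distribution, not the paper's actual network; only the grow-one-point-at-a-time loop reflects the method:

```python
import random

def sample_point_cloud(n_points, conditional_sampler, rng):
    """Grow a point cloud one point at a time: each point is drawn from a
    conditional distribution given all previously generated points."""
    points = []
    for _ in range(n_points):
        points.append(conditional_sampler(points, rng))
    return points

def toy_conditional(context, rng):
    """Hypothetical stand-in for the learned conditional: sample the next
    point near the centroid of the points generated so far."""
    if context:
        cx = sum(p[0] for p in context) / len(context)
        cy = sum(p[1] for p in context) / len(context)
        cz = sum(p[2] for p in context) / len(context)
    else:
        cx = cy = cz = 0.0
    return (cx + rng.gauss(0, 0.1),
            cy + rng.gauss(0, 0.1),
            cz + rng.gauss(0, 0.1))

rng = random.Random(0)
cloud = sample_point_cloud(16, toy_conditional, rng)
print(len(cloud))  # 16
```

Because each point conditions on everything generated before it, inter-point correlations enter the model directly through the sampling order.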


PointFlow: 3D Point Cloud Generation With Continuous Normalizing Flows

A principled probabilistic framework that generates 3D point clouds by modeling them as a distribution of distributions; the invertibility of normalizing flows enables likelihood computation during training and allows the model to be trained in the variational inference framework.
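The key property the summary above relies on, invertibility enabling exact likelihoods, can be shown with a much simpler flow than PointFlow's continuous normalizing flows. This sketch uses a 1-D affine flow and the change-of-variables formula; the scale and shift values are arbitrary illustrations:

```python
import math

def affine_flow_logpdf(x, scale, shift):
    """Exact log-likelihood under an invertible affine flow x = scale*z + shift
    with a standard-normal base density, via change of variables:
    log p_x(x) = log p_z(z) - log|scale|, where z = (x - shift) / scale."""
    z = (x - shift) / scale
    log_base = -0.5 * z * z - 0.5 * math.log(2 * math.pi)
    return log_base - math.log(abs(scale))

# Pushing a standard normal through scale=2, shift=1 yields N(1, 4);
# the flow's log-density at the mode matches the closed form.
lp = affine_flow_logpdf(1.0, scale=2.0, shift=1.0)
print(round(lp, 6))  # -1.612086
```

The same principle scales up: a continuous normalizing flow replaces the affine map with an ODE-defined transformation, but the likelihood is still computed by inverting the flow and accounting for the volume change.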

Self-Supervised Learning of Point Clouds via Orientation Estimation

This paper leverages 3D self-supervision for learning downstream tasks on point clouds with fewer labels and demonstrates that its approach outperforms the state of the art.

PointOT: Interpretable Geometry-Inspired Point Cloud Generative Model via Optimal Transport

A geometry-inspired point cloud generative framework called PointOT is designed, which decouples the generative model into two separate sub-tasks: manifold learning of the point cloud, and distribution transformation via the semi-continuous optimal transportation (SCOT) mapping.

Data Augmentation-free Unsupervised Learning for 3D Point Cloud Understanding

An augmentation-free unsupervised approach for point clouds to learn transferable point-level features via soft clustering, named SoftClu, which exploits the affiliation of points to their clusters as a proxy to enable self-training through a pseudo-label prediction task.

Point Cloud Generation with Continuous Conditioning

A novel generative adversarial network (GAN) setup that generates 3D point cloud shapes conditioned on a continuous parameter by using the concept of auxiliary classifier GANs in a multi-task setting.

Go with the Flows: Mixtures of Normalizing Flows for Point Cloud Generation and Reconstruction

This work generalizes prior work by introducing an additional discrete latent variable in a mixture model, and demonstrates that, with data augmentation, individual mixture components can learn to specialize in a semantically meaningful manner.

Geometric Back-Projection Network for Point Cloud Classification

This work uses the idea of an error-correcting feedback structure to comprehensively capture the local features of point clouds, and applies CNN-based structures in high-level feature spaces to learn local geometric context implicitly.

General Hypernetwork Framework for Creating 3D Point Clouds

This work proposes a novel method for generating 3D point clouds that leverages the properties of hypernetworks and extends it by incorporating flow-based models, which results in a novel HyperFlow approach.

Geometric Feedback Network for Point Cloud Classification

A network designed as a feedback mechanism, a procedure that modifies the output in response to the output itself, is proposed to comprehensively capture the local features of 3D point clouds.

Learning Gradient Fields for Shape Generation

This work generates point clouds by performing stochastic gradient ascent on an unnormalized probability density, thereby moving sampled points toward the high-likelihood regions and allowing for extraction of a high-quality implicit surface.
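The idea summarized above, moving sampled points uphill on an unnormalized log-density, can be sketched with deterministic gradient ascent toward a toy mode. The `grad_log_density` below is a hypothetical stand-in for the learned gradient field (the paper uses a learned, stochastic variant with noise annealing):

```python
def grad_log_density(p, center=(0.0, 0.0, 0.0)):
    """Hypothetical learned score: gradient of an unnormalized Gaussian
    log-density, pointing from p toward the high-likelihood region."""
    return tuple(c - x for x, c in zip(p, center))

def move_points(points, steps=100, lr=0.1):
    """Gradient ascent on the log-density: each sampled point drifts
    toward high-likelihood regions (the implicit shape surface)."""
    for _ in range(steps):
        points = [tuple(x + lr * g for x, g in zip(p, grad_log_density(p)))
                  for p in points]
    return points

pts = move_points([(3.0, -2.0, 1.0), (-1.0, 4.0, 0.5)])
# After 100 steps both points sit very close to the mode at the origin.
```

In the actual method the "center" is replaced by a learned shape-conditional density, so the ascent pulls points onto the object surface rather than a single mode.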



Multiresolution Tree Networks for 3D Point Cloud Processing

This model represents a 3D shape as a set of locality-preserving 1D ordered lists of points at multiple resolutions, which allows efficient feed-forward processing through 1D convolutions and coarse-to-fine analysis through a multi-grid architecture, and leads to faster convergence and a small memory footprint during training.

Attentional ShapeContextNet for Point Cloud Recognition

The resulting model, called ShapeContextNet, consists of a hierarchy with modules that do not rely on a fixed grid while still enjoying properties similar to those of convolutional neural networks: being able to capture and propagate object part information.

PU-Net: Point Cloud Upsampling Network

A data-driven point cloud upsampling technique to learn multi-level features per point and expand the point set via a multi-branch convolution unit implicitly in feature space, which shows that its upsampled points have better uniformity and are located closer to the underlying surfaces.

Representation Learning and Adversarial Generation of 3D Point Clouds

This paper introduces a deep autoencoder network for point clouds, which outperforms the state of the art in 3D recognition tasks, and designs GAN architectures to generate novel point clouds.

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
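The permutation invariance that the PointNet summary highlights comes from applying the same per-point function to every point and then pooling with a symmetric operator. A minimal sketch, with a fixed nonlinear function standing in for PointNet's shared MLP:

```python
def shared_point_feature(p):
    """Hypothetical per-point feature map (PointNet uses a shared MLP);
    the same function is applied identically to every point."""
    x, y, z = p
    return (x * x, y * y, z * z, x + y + z)

def global_feature(points):
    """Symmetric max-pooling over per-point features makes the result
    invariant to the ordering of the input point set."""
    feats = [shared_point_feature(p) for p in points]
    return tuple(max(col) for col in zip(*feats))

cloud = [(1.0, 0.0, 2.0), (0.5, 3.0, -1.0), (-2.0, 1.0, 0.0)]
shuffled = [cloud[2], cloud[0], cloud[1]]
print(global_feature(cloud) == global_feature(shuffled))  # True
```

Any symmetric pooling (max, sum, mean) would give the same invariance; max-pooling is PointNet's choice.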

Escape from Cells: Deep Kd-Networks for the Recognition of 3D Point Cloud Models

A new deep learning architecture that is designed for 3D model recognition tasks and works with unstructured point clouds, Kd-networks, which demonstrates competitive performance in a number of shape recognition tasks such as shape classification, shape retrieval and shape part segmentation.

FoldingNet: Point Cloud Auto-Encoder via Deep Grid Deformation

A novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds, and is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid.

RGCNN: Regularized Graph CNN for Point Cloud Segmentation

A regularized graph convolutional neural network (RGCNN) that directly consumes point clouds is proposed, significantly reducing computational complexity while achieving performance competitive with the state of the art.

Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction

This paper uses 2D convolutional operations to predict the 3D structure from multiple viewpoints and jointly apply geometric reasoning with 2D projection optimization, and introduces the pseudo-renderer, a differentiable module to approximate the true rendering operation, to synthesize novel depth maps for optimization.
