PU-Net: Point Cloud Upsampling Network

@inproceedings{Yu2018PUNetPC,
  title={PU-Net: Point Cloud Upsampling Network},
  author={Lequan Yu and Xianzhi Li and Chi-Wing Fu and Daniel Cohen-Or and Pheng-Ann Heng},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2018},
  pages={2790-2799}
}
Learning and analyzing 3D point clouds with deep networks is challenging due to the sparseness and irregularity of the data. […] Our network is applied at the patch level, with a joint loss function that encourages the upsampled points to remain on the underlying surface with a uniform distribution. We conduct various experiments using synthetic and scanned data to evaluate our method and demonstrate its superiority over several baseline methods and an optimization-based method. Results show that our…
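The joint loss mentioned above combines a surface-proximity term with a term that pushes the upsampled points toward a uniform spread. As a rough illustration only (PU-Net's actual reconstruction term is the Earth Mover's Distance and its uniformity term is a specific repulsion loss), a minimal NumPy sketch of such a combination, using Chamfer distance as a simpler stand-in for the surface term and a k-nearest-neighbour repulsion penalty for uniformity:

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between point sets pred (N, 3) and gt (M, 3).

    Stand-in for the surface-proximity term; PU-Net itself uses the
    Earth Mover's Distance here."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def repulsion_loss(pred, k=5, h=0.03):
    """Penalise upsampled points that crowd together, encouraging a uniform spread."""
    d = np.linalg.norm(pred[:, None, :] - pred[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # ignore self-distances
    knn = np.sort(d, axis=1)[:, :k]                  # k nearest neighbours per point
    return np.maximum(h - knn, 0.0).mean()           # only neighbours closer than h are penalised

def joint_loss(pred, gt, alpha=1.0, beta=0.1):
    """Joint loss: stay on the underlying surface + spread out uniformly."""
    return alpha * chamfer_distance(pred, gt) + beta * repulsion_loss(pred)
```

Training would then minimise joint_loss between each upsampled patch and its dense ground-truth patch; the weights alpha and beta are illustrative, not the paper's values.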

Citations

PU-Flow: A Point Cloud Upsampling Network with Normalizing Flows
TLDR
A novel deep learning-based model, called PU-Flow, which incorporates normalizing flows and weight prediction techniques to produce dense points uniformly distributed on the underlying surface, and outperforms state-of-the-art methods in terms of reconstruction quality, proximity-to-surface accuracy, and computation efficiency.
PU-Transformer: Point Cloud Upsampling Transformer
TLDR
To activate the transformer’s strong capability in representing features, a new variant of a multi-head self-attention structure is developed to enhance both point-wise and channel-wise relations of the feature map.
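The summary above does not spell out PU-Transformer's attention variant, so the following sketch shows only the vanilla baseline it builds on: single-head scaled dot-product self-attention over an (N, C) per-point feature map, with hypothetical projection matrices Wq, Wk, Wv.

```python
import numpy as np

def point_self_attention(feats, Wq, Wk, Wv):
    """Plain single-head self-attention over a (N, C) point feature map.

    Vanilla baseline only; PU-Transformer's multi-head variant additionally
    strengthens point-wise and channel-wise relations of the feature map."""
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv          # (N, d) projections
    scores = q @ k.T / np.sqrt(k.shape[-1])               # (N, N) pairwise affinities
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)              # softmax over the other points
    return attn @ v                                        # (N, d) refined point features
```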
PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling
This paper addresses the problem of generating uniform dense point clouds to describe the underlying geometric structures from given sparse point clouds. Due to the irregular and unordered nature of point clouds, …
Meta-PU: An Arbitrary-Scale Upsampling Network for Point Cloud
TLDR
This work proposes a novel method called "Meta-PU" that is the first to support point cloud upsampling with arbitrary scale factors using a single model, and it even outperforms existing methods trained for a specific scale factor only.
SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization
TLDR
This work proposes a self-supervised point cloud upsampling network, named SPU-Net, built on a coarse-to-fine reconstruction framework with two main components, point feature extraction and point feature expansion; it achieves performance comparable to state-of-the-art supervised methods.
PU-GAN: A Point Cloud Upsampling Adversarial Network
TLDR
A new point cloud upsampling network called PU-GAN, which is formulated based on a generative adversarial network (GAN), to learn a rich variety of point distributions from the latent space and upsample points over patches on object surfaces.
Deep Magnification-Arbitrary Upsampling over 3D Point Clouds
TLDR
This paper addresses the problem of generating dense point clouds from given sparse point clouds to model the underlying geometric structures of objects/scenes and proposes a novel end-to-end learning based framework, namely MAPU-Net, a single neural network with one-time training that can handle an arbitrary upsampling factor.
SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable Rendering
TLDR
A self-supervised point cloud upsampling network (SSPU-Net) that generates dense point clouds without ground truth by exploiting the consistency between the input sparse point cloud and the generated dense point cloud, in both shape and rendered images.

References

Showing 1–10 of 43 references
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
TLDR
This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
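The permutation invariance noted in this summary comes from applying the same per-point function to every point and aggregating with a symmetric operator (max pooling in PointNet). A minimal sketch, with a hypothetical single shared layer standing in for the full MLP:

```python
import numpy as np

def pointnet_global_feature(points, W, b):
    """Shared per-point transform followed by max pooling.

    Because the max is taken over the point dimension, permuting the
    input rows leaves the global feature unchanged."""
    per_point = np.maximum(points @ W + b, 0.0)   # (N, C): same weights for every point
    return per_point.max(axis=0)                  # (C,): symmetric aggregation

# Illustrative permutation check: shuffling the points gives the same feature.
rng = np.random.default_rng(0)
pts = rng.normal(size=(128, 3))
W, b = rng.normal(size=(3, 64)), np.zeros(64)
assert np.allclose(pointnet_global_feature(pts, W, b),
                   pointnet_global_feature(pts[rng.permutation(128)], W, b))
```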
PointCNN
TLDR
This work proposes to learn a X-transformation from the input points, which is used for simultaneously weighting the input features associated with the points and permuting them into latent potentially canonical order, and calls it PointCNN.
PCPNet Learning Local Shape Properties from Raw Point Clouds
TLDR
The utility of the PCPNET approach in the context of shape reconstruction is demonstrated, by showing how it can be used to extract normal orientation information from point clouds.
Representation Learning and Adversarial Generation of 3D Point Clouds
TLDR
This paper introduces a deep autoencoder network for point clouds, which outperforms the state of the art in 3D recognition tasks, and designs GAN architectures to generate novel point-clouds.
Pointwise Convolutional Neural Networks
TLDR
This paper presents a convolutional neural network for semantic segmentation and object recognition with 3D point clouds, and at the core of this network is point-wise convolution, a new convolution operator that can be applied at each point of a point cloud.
PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
TLDR
A hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set and proposes novel set learning layers to adaptively combine features from multiple scales to learn deep point set features efficiently and robustly.
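The nested partitioning in PointNet++ is driven by sampling well-spread centroids (farthest point sampling) and grouping local neighbourhoods around them before applying PointNet within each group. A sketch of just the sampling step, with grouping and the recursive feature learning omitted:

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Pick m well-spread centroid indices from an (N, 3) point cloud.

    Greedily adds the point farthest from everything selected so far;
    PointNet++ groups neighbourhoods around such centroids and applies
    PointNet inside each group."""
    chosen = [0]                                            # start from an arbitrary point
    dist = np.linalg.norm(points - points[0], axis=1)       # distance to the selected set
    for _ in range(m - 1):
        nxt = int(dist.argmax())                            # farthest from the current set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)
```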
Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction
TLDR
This paper uses 2D convolutional operations to predict the 3D structure from multiple viewpoints and jointly apply geometric reasoning with 2D projection optimization, and introduces the pseudo-renderer, a differentiable module to approximate the true rendering operation, to synthesize novel depth maps for optimization.
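The pseudo-renderer summarised above synthesises depth maps by projecting predicted points into a view and resolving collisions per pixel. A crude, non-differentiable sketch of that idea, assuming an orthographic camera and points normalised to [-1, 1]^3 (the actual module is designed to be differentiable and to handle multiple points landing in the same pixel):

```python
import numpy as np

def pseudo_render_depth(points, res=64):
    """Project an (N, 3) point set to a res x res depth map (orthographic, unit cube).

    Per pixel we keep the minimum z (closest surface), i.e. a hard z-buffer."""
    depth = np.full((res, res), np.inf)
    xy = np.clip(((points[:, :2] + 1.0) * 0.5 * (res - 1)).astype(int), 0, res - 1)
    for (u, v), z in zip(xy, points[:, 2]):
        depth[v, u] = min(depth[v, u], z)          # nearest point wins the pixel
    return depth
```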
A Point Set Generation Network for 3D Object Reconstruction from a Single Image
TLDR
This paper addresses the problem of 3D reconstruction from a single image, generating a straight-forward yet unorthodox form of output, and designs an architecture, loss function and learning paradigm that are novel and effective, capable of predicting multiple plausible 3D point clouds from an input image.
PointCNN: Convolution On $\mathcal{X}$-Transformed Points
TLDR
The proposed method is a generalization of typical CNNs to feature learning from point clouds, thus it is called PointCNN, and experiments show that it achieves on par or better performance than state-of-the-art methods on multiple challenging benchmark datasets and tasks.
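The X-transformation described here is a learned K x K matrix that simultaneously weights and (softly) permutes the features of a point's K neighbours before an ordinary convolution is applied. A minimal sketch for a single representative point, with the learned predictor `predict_X` left as a hypothetical stand-in (PointCNN realises it with an MLP over the neighbour coordinates):

```python
import numpy as np

def x_conv(neighbor_xyz, neighbor_feats, predict_X, conv_W):
    """One X-Conv step for a single representative point.

    neighbor_xyz:   (K, 3) local coordinates of the K nearest neighbours
    neighbor_feats: (K, C) their input features
    predict_X:      callable (K, 3) -> (K, K), the learned X-transformation
    conv_W:         (K*C, C_out) weights of the subsequent convolution
    """
    X = predict_X(neighbor_xyz)                    # learned weighting/permutation matrix
    transformed = X @ neighbor_feats               # (K, C): reweight and reorder neighbour features
    return transformed.reshape(-1) @ conv_W        # (C_out,): convolve the canonicalised features

# Illustrative random stand-in for the learned predictor.
rng = np.random.default_rng(0)
predict_X = lambda xyz: rng.normal(size=(xyz.shape[0], xyz.shape[0]))
out = x_conv(rng.normal(size=(8, 3)), rng.normal(size=(8, 16)),
             predict_X, rng.normal(size=(8 * 16, 32)))
```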