Dense 3D Point Cloud Reconstruction Using a Deep Pyramid Network

Priyanka Mandikal and R. Venkatesh Babu. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV).
Reconstructing a high-resolution 3D model of an object is a challenging task in computer vision. Designing scalable and lightweight architectures is crucial when addressing this problem. Existing point-cloud-based reconstruction approaches directly predict the entire point cloud in a single stage. Although this technique can handle low-resolution point clouds, it is not a viable solution for generating dense, high-resolution outputs. In this work, we introduce DensePCR, a deep pyramidal… 
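The abstract is truncated above; for context, reconstruction networks in this line of work are commonly trained with the Chamfer distance between predicted and ground-truth point sets. A minimal NumPy sketch of that loss (an illustration, not the paper's implementation):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    For each point in one set, take the squared distance to its nearest
    neighbour in the other set, and average both directions.
    """
    # Pairwise squared distances, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Identical sets have zero Chamfer distance.
pts = np.random.rand(128, 3)
print(chamfer_distance(pts, pts))  # → 0.0
```

In practice the predicted set is compared against a ground-truth set of a different size, which the asymmetric two-way minimum handles naturally.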


Attention-Based Dense Point Cloud Reconstruction From a Single Image

Evaluation on both synthetic and real-world datasets demonstrates that the proposed two-stage dense point cloud generation network outperforms state-of-the-art works in dense point cloud generation.

Visual Enhanced 3D Point Cloud Reconstruction from A Single Image

Experimental results demonstrate that the proposed method outperforms existing techniques significantly, both qualitatively and quantitatively, and has fewer training parameters.

Pix2Point: Learning Outdoor 3D Using Sparse Point Clouds and Optimal Transport

Pix2Point, a deep learning-based approach for monocular 3D point cloud prediction, is proposed; it handles complete and challenging outdoor scenes, and it is shown that, when trained on sparse point clouds, this simple yet promising approach achieves better coverage of 3D outdoor scenes than efficient monocular depth methods.

Point Cloud Upsampling and Normal Estimation using Deep Learning for Robust Surface Reconstruction

A compound loss function is proposed that encourages the network to estimate points lying on a surface, together with normals that accurately predict the surface orientation; the results show the benefit of estimating normals jointly with point positions.

High-Resolution Point Cloud Reconstruction from a Single Image by Redescription

This paper first combines reconstruction and upsampling networks to generate high-resolution point clouds, achieving joint optimization through phased training, and presents an image redescription mechanism that establishes bidirectional correlation and enhances semantic consistency between images and point clouds.

CAPNet: Continuous Approximation Projection For 3D Point Cloud Reconstruction Using 2D Supervision

A novel differentiable projection module, called ‘CAPNet’, is introduced to obtain 2D masks from a predicted 3D point cloud reconstruction; it significantly outperforms existing projection-based approaches on a large-scale synthetic dataset.

GRNet: Gridding Residual Network for Dense Point Cloud Completion

This work devises two novel differentiable layers, named Gridding and Gridding Reverse, to convert between point clouds and 3D grids without losing structural information, and presents a differentiable Cubic Feature Sampling layer to extract features of neighboring points while preserving context information.
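Gridding-style layers convert an unordered point cloud into a regular 3D grid so that ordinary 3D convolutions can be applied. A much-simplified occupancy-count version of the idea (GRNet's actual Gridding layer uses differentiable interpolation weights rather than hard counts):

```python
import numpy as np

def grid_occupancy(points, res=32):
    """Scatter points in [0, 1)^3 into a res^3 occupancy count grid."""
    idx = np.clip((points * res).astype(int), 0, res - 1)  # voxel indices
    grid = np.zeros((res, res, res))
    # Accumulate one count per point into its voxel (handles duplicates).
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return grid

pts = np.random.rand(1000, 3)
g = grid_occupancy(pts, res=16)
print(int(g.sum()))  # → 1000
```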

N-DPC: Dense 3D Point Cloud Completion Based on Improved Multi-Stage Network

N-DPC, a novel dense point cloud completion network, combines a self-attention unit with the fusion of local and global feature information, and shows good robustness across different missing ratios of point clouds.

Latent-Space Laplacian Pyramids for Adversarial Representation Learning with 3D Point Clouds

This work combines the recently proposed latent-space GAN and Laplacian GAN architectures to form a multi-scale model capable of generating 3D point clouds at increasing levels of detail, and demonstrates that this model outperforms existing generative models for 3D point clouds.



Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction

This paper uses 2D convolutional operations to predict 3D structure from multiple viewpoints and jointly applies geometric reasoning with 2D projection optimization; it introduces the pseudo-renderer, a differentiable module that approximates the true rendering operation, to synthesize novel depth maps for optimization.
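The core of a depth-map projection step is scattering predicted 3D points into an image, keeping the nearest point per pixel. A hard, orthographic sketch of that z-buffer idea (the pseudo-renderer itself is a differentiable approximation of this):

```python
import numpy as np

def render_depth(points, res=32):
    """Orthographic depth map: for each (x, y) pixel, keep the minimum z.

    points: (N, 3) with coordinates in [0, 1).
    """
    depth = np.full((res, res), np.inf)
    u = np.clip((points[:, 0] * res).astype(int), 0, res - 1)
    v = np.clip((points[:, 1] * res).astype(int), 0, res - 1)
    np.minimum.at(depth, (v, u), points[:, 2])  # z-buffer style minimum
    return depth

pts = np.array([[0.5, 0.5, 0.9], [0.5, 0.5, 0.2]])
d = render_depth(pts, res=4)
print(d[2, 2])  # → 0.2
```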

A Point Set Generation Network for 3D Object Reconstruction from a Single Image

This paper addresses the problem of 3D reconstruction from a single image, generating an unorthodox form of output, the point cloud, and designs an architecture, loss function, and learning paradigm that are novel and effective, capable of predicting multiple plausible 3D point clouds from an input image.

PU-Net: Point Cloud Upsampling Network

A data-driven point cloud upsampling technique to learn multi-level features per point and expand the point set via a multi-branch convolution unit implicitly in feature space, which shows that its upsampled points have better uniformity and are located closer to the underlying surfaces.
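PU-Net's expansion step turns N point features into rN by pushing each feature through r independent branches. A toy coordinate-space analogue of that multi-branch expansion, with near-identity random linear maps standing in for the learned convolutions (an illustration of the idea, not the paper's network):

```python
import numpy as np

def expand(points, r=4, seed=0):
    """Toy multi-branch expansion: (N, 3) -> (r*N, 3).

    Each branch applies a slightly different linear map, so the r copies
    of every input point land at distinct nearby positions.
    """
    rng = np.random.default_rng(seed)
    branches = [np.eye(3) + 0.01 * rng.standard_normal((3, 3)) for _ in range(r)]
    return np.concatenate([points @ b for b in branches], axis=0)

pts = np.random.rand(100, 3)
up = expand(pts, r=4)
print(up.shape)  # → (400, 3)
```

In the real network the expansion happens in a learned feature space and a regression head maps the expanded features back to coordinates.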

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
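The permutation invariance PointNet relies on comes from aggregating per-point features with a symmetric function (max pooling). A toy sketch, with a fixed random weight matrix standing in for the learned shared MLP:

```python
import numpy as np

def global_feature(points, w):
    """PointNet-style aggregation: shared per-point transform, then max-pool.

    points: (N, 3) point cloud; w: (3, F) shared weights.
    The max over the point axis makes the result order-independent.
    """
    per_point = np.maximum(points @ w, 0.0)  # shared "MLP" layer with ReLU
    return per_point.max(axis=0)             # symmetric aggregation

rng = np.random.default_rng(0)
pts = rng.random((100, 3))
w = rng.standard_normal((3, 16))
f1 = global_feature(pts, w)
f2 = global_feature(pts[rng.permutation(100)], w)
print(np.allclose(f1, f2))  # → True
```

Shuffling the input points leaves the global feature unchanged, which is exactly the property the summary above describes.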

3D-PSRNet: Part Segmented 3D Point Cloud Reconstruction From a Single Image

It is demonstrated that jointly training for both reconstruction and segmentation leads to improved performance on both tasks, compared to training for each task individually.

3D-LMNet: Latent Embedding Matching for Accurate and Diverse 3D Point Cloud Reconstruction from a Single Image

3D-LMNet, a latent embedding matching approach for 3D reconstruction, is proposed; it outperforms state-of-the-art approaches on the task of single-view 3D reconstruction on both real and synthetic datasets while generating multiple plausible reconstructions, demonstrating the generalizability and utility of the approach.

SPLATNet: Sparse Lattice Networks for Point Cloud Processing

  • Hang Su, V. Jampani, J. Kautz
  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018
A network architecture for processing point clouds that operates directly on a collection of points represented as a sparse set of samples in a high-dimensional lattice; it outperforms existing state-of-the-art techniques on 3D segmentation tasks.

3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction

The 3D-R2N2 reconstruction framework outperforms the state-of-the-art methods for single view reconstruction, and enables the 3D reconstruction of objects in situations when traditional SFM/SLAM methods fail (because of lack of texture and/or wide baseline).

OctNetFusion: Learning Depth Fusion from Data

This paper presents a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps and significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression.