Dynamic Plane Convolutional Occupancy Networks

@article{Lionar2021DynamicPC,
  title={Dynamic Plane Convolutional Occupancy Networks},
  author={S. Lionar and Daniil Emtsev and Dusan Svilarkovic and Songyou Peng},
  journal={2021 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2021},
  pages={1828-1837}
}
Learning-based 3D reconstruction using implicit neural representations has shown promising progress not only at the object level but also in more complicated scenes. In this paper, we propose Dynamic Plane Convolutional Occupancy Networks, a novel implicit representation pushing further the quality of 3D surface reconstruction. The input noisy point clouds are encoded into per-point features that are projected onto multiple 2D dynamic planes. A fully-connected network learns to predict plane… 
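As a rough illustration of the projection step described in the abstract, the sketch below (PyTorch) predicts plane normals with a fully-connected network and scatters per-point features onto the resulting 2D feature planes. All module names, layer sizes, and the plane parameterization (planes through the origin, defined by a unit normal) are assumptions made for illustration, not the authors' implementation.

# Hypothetical sketch of "dynamic plane" feature projection (not the authors' code).
# Assumptions: planes pass through the origin and are parameterized by a unit normal;
# per-point features come from a shared point-wise MLP (PointNet-style).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicPlaneEncoder(nn.Module):
    def __init__(self, feat_dim=32, num_planes=3, reso=64):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        # Fully-connected network that predicts one unit normal per dynamic plane
        # from a global (max-pooled) descriptor of the point cloud.
        self.plane_mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, num_planes * 3))
        self.num_planes, self.reso, self.feat_dim = num_planes, reso, feat_dim

    def forward(self, pts):                      # pts: (B, N, 3), assumed roughly in [-0.5, 0.5]^3
        feat = self.point_mlp(pts)               # (B, N, C) per-point features
        normals = self.plane_mlp(feat.max(dim=1).values)               # (B, P*3)
        normals = F.normalize(normals.view(-1, self.num_planes, 3), dim=-1)

        planes = []
        for p in range(self.num_planes):
            n = normals[:, p]                                          # (B, 3)
            # Build an orthonormal basis (u, v) spanning each predicted plane.
            a = torch.where(n[:, :1].abs() < 0.9,
                            torch.tensor([1., 0., 0.], device=n.device).expand_as(n),
                            torch.tensor([0., 1., 0.], device=n.device).expand_as(n))
            u = F.normalize(torch.cross(n, a, dim=-1), dim=-1)
            v = torch.cross(n, u, dim=-1)
            # Orthographic projection of every point onto the plane -> 2D plane coordinates.
            uv = torch.stack([(pts * u[:, None]).sum(-1), (pts * v[:, None]).sum(-1)], dim=-1)
            idx = ((uv + 0.5).clamp(0, 1 - 1e-6) * self.reso).long()   # (B, N, 2) cell indices
            flat = idx[..., 1] * self.reso + idx[..., 0]               # (B, N) flattened cell index
            grid = feat.new_zeros(pts.shape[0], self.reso * self.reso, self.feat_dim)
            # Sum-pool per-point features into plane cells (mean pooling would also need counts).
            grid.scatter_add_(1, flat.unsqueeze(-1).expand_as(feat), feat)
            planes.append(grid.view(-1, self.reso, self.reso, self.feat_dim).permute(0, 3, 1, 2))
        return planes   # list of (B, C, reso, reso) feature planes, one per dynamic plane

In the full pipeline the plane features would typically be processed further by a 2D convolutional network before occupancy values are decoded at query points; the sketch stops at the projection step.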

Citations

POCO: Point Convolution for Surface Reconstruction
TLDR
This work proposes to use point cloud convolutions and compute latent vectors at each input point, then performs a learning-based interpolation over nearest neighbors using inferred weights, significantly outperforming other methods on most classical metrics.
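The general idea of interpolating per-point latents over nearest neighbors with inferred weights can be sketched as follows; this is a hypothetical, simplified module, not POCO's actual architecture (the weight network, dimensions, and neighbor count are assumptions).

# Hypothetical sketch of learned interpolation over nearest neighbors (illustrative only).
import torch
import torch.nn as nn

class LearnedKNNInterpolation(nn.Module):
    def __init__(self, latent_dim=32, k=8):
        super().__init__()
        self.k = k
        # Small MLP that infers a scalar weight from a neighbor latent and its offset to the query.
        self.weight_mlp = nn.Sequential(nn.Linear(latent_dim + 3, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, queries, points, latents):
        # queries: (B, Q, 3), points: (B, N, 3), latents: (B, N, C)
        dist = torch.cdist(queries, points)                          # (B, Q, N) pairwise distances
        knn = dist.topk(self.k, dim=-1, largest=False).indices       # (B, Q, k) nearest input points
        idx = knn.unsqueeze(-1)
        nn_latents = torch.gather(latents.unsqueeze(1).expand(-1, queries.shape[1], -1, -1),
                                  2, idx.expand(-1, -1, -1, latents.shape[-1]))   # (B, Q, k, C)
        nn_points = torch.gather(points.unsqueeze(1).expand(-1, queries.shape[1], -1, -1),
                                 2, idx.expand(-1, -1, -1, 3))                     # (B, Q, k, 3)
        offsets = nn_points - queries.unsqueeze(2)                   # relative neighbor positions
        w = self.weight_mlp(torch.cat([nn_latents, offsets], dim=-1)).softmax(dim=2)  # (B, Q, k, 1)
        return (w * nn_latents).sum(dim=2)                           # (B, Q, C) interpolated latent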
NeuralBlox: Real-Time Neural Representation Fusion for Robust Volumetric Mapping
TLDR
This work proposes a fusion strategy and training pipeline to incrementally build and update neural implicit representations, enabling the reconstruction of large scenes from sequential partial observations and yielding significantly better scene completeness given noisy inputs.
SE(3)-Equivariant Attention Networks for Shape Reconstruction in Function Space
TLDR
Trained only on single objects and without any pre-segmentation, the first SE(3)-equivariant coordinate-based network for learning occupancy fields from point clouds is shown to reconstruct novel scenes with single-object performance.
Projection-Based Point Convolution for Efficient Point Cloud Segmentation
TLDR
Projection-based Point Convolution (PPConv), a point convolutional module that uses 2D convolutions and multi-layer perceptrons (MLPs) as its components, achieves superior efficiency compared to state-of-the-art methods, even with a simple architecture based on PointNet++.
Shape As Points: A Differentiable Poisson Solver
TLDR
A differentiable point-to-mesh layer is introduced using a differentiable formulation of Poisson Surface Reconstruction (PSR) that allows for a fast, GPU-accelerated solution of the indicator function given an oriented point cloud.

References

SHOWING 1-10 OF 42 REFERENCES
Convolutional Occupancy Networks
TLDR
Convolutional Occupancy Networks is proposed, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes that enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
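A minimal sketch of the query side of this representation, assuming per-plane feature maps have already been computed: features for arbitrary 3D points are gathered from canonical planes by bilinear interpolation. The function name and axis convention below are illustrative assumptions, not the reference implementation.

# Hypothetical sketch of querying canonical-plane features at 3D locations.
import torch
import torch.nn.functional as F

def query_plane_features(planes, queries):
    # planes: dict of (B, C, H, W) feature maps for 'xy', 'xz', 'yz'
    # queries: (B, Q, 3) coordinates assumed normalized to [-1, 1]
    axes = {'xy': (0, 1), 'xz': (0, 2), 'yz': (1, 2)}
    feat = 0
    for name, (a, b) in axes.items():
        uv = queries[..., [a, b]].unsqueeze(2)                       # (B, Q, 1, 2) sampling grid
        sampled = F.grid_sample(planes[name], uv, mode='bilinear', align_corners=True)
        feat = feat + sampled.squeeze(-1).permute(0, 2, 1)           # (B, Q, C), summed across planes
    return feat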
Tangent Convolutions for Dense Prediction in 3D
TLDR
Using tangent convolutions, this work designs a deep fully-convolutional network for semantic segmentation of 3D point clouds, and applies it to challenging real-world datasets of indoor and outdoor 3D environments.
Occupancy Networks: Learning 3D Reconstruction in Function Space
TLDR
This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction methods that encodes a description of the 3D output at infinite resolution without excessive memory footprint, and validates that the representation can efficiently encode 3D structure and can be inferred from various kinds of input.
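A minimal sketch of such an occupancy decoder, assuming a single global latent code per shape: an MLP maps a 3D coordinate plus the code to an occupancy probability, so the surface can be queried at arbitrary resolution. The layer sizes here are illustrative; the published decoder additionally uses ResNet blocks with conditional batch normalization.

# Minimal occupancy decoder sketch (illustrative only).
import torch
import torch.nn as nn

class OccupancyDecoder(nn.Module):
    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords, code):
        # coords: (B, Q, 3) query points, code: (B, code_dim) shape encoding
        code = code.unsqueeze(1).expand(-1, coords.shape[1], -1)
        logits = self.net(torch.cat([coords, code], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)          # occupancy probability in [0, 1] per query point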
VoxNet: A 3D Convolutional Neural Network for real-time object recognition
Daniel Maturana, S. Scherer · 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) · 2015
TLDR
VoxNet is proposed, an architecture to tackle the problem of robust object recognition by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN).
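A rough sketch of that pipeline: rasterize a point cloud into a binary occupancy grid and classify it with a small 3D CNN. The grid resolution and layer sizes below are illustrative assumptions, not necessarily the published VoxNet configuration.

# Hypothetical VoxNet-style pipeline: occupancy grid + 3D CNN classifier.
import torch
import torch.nn as nn

def voxelize(points, reso=32):
    # points: (N, 3), assumed normalized to [0, 1); returns a (1, reso, reso, reso) occupancy grid
    idx = (points.clamp(0, 1 - 1e-6) * reso).long()
    grid = torch.zeros(reso, reso, reso)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid.unsqueeze(0)

classifier = nn.Sequential(
    nn.Conv3d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv3d(32, 32, kernel_size=3), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Flatten(),
    nn.Linear(32 * 6 * 6 * 6, 128), nn.ReLU(),
    nn.Linear(128, 10),               # e.g. 10 object classes
)

logits = classifier(voxelize(torch.rand(2048, 3)).unsqueeze(0))   # add batch dimension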
Hierarchical Surface Prediction for 3D Object Reconstruction
TLDR
This work proposes a general framework, called hierarchical surface prediction (HSP), which facilitates prediction of high resolution voxel grids, and shows that high resolution predictions are more accurate than low resolution predictions.
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
TLDR
This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
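The permutation invariance comes from applying a shared MLP to every point and pooling with a symmetric function (max). A minimal sketch, with illustrative layer sizes and omitting PointNet's input and feature transform networks (T-Nets):

# Minimal PointNet-style encoder: shared point-wise MLP + symmetric max pooling.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, out_dim=1024):
        super().__init__()
        self.shared_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, points):                    # points: (B, N, 3)
        feats = self.shared_mlp(points)           # same MLP applied to every point
        return feats.max(dim=1).values            # max over points -> permutation-invariant (B, out_dim)

# The global feature is unchanged under any reordering of the N input points:
net = TinyPointNet()
x = torch.rand(1, 1024, 3)
assert torch.allclose(net(x), net(x[:, torch.randperm(1024)]))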
Differentiable Volumetric Rendering: Learning Implicit 3D Representations Without 3D Supervision
TLDR
This work proposes a differentiable rendering formulation for implicit shape and texture representations, showing that depth gradients can be derived analytically using the concept of implicit differentiation, and finds that this method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes
TLDR
This work introduces ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations, and shows that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks.
FPConv: Learning Local Flattening for Point Convolution
TLDR
FPConv is introduced, a novel surface-style convolution operator designed for 3D point cloud analysis; it is complementary to volumetric convolutions, and jointly training them further boosts overall performance to state-of-the-art results.
PointConv: Deep Convolutional Networks on 3D Point Clouds
TLDR
The dynamic filter is extended to a new convolution operation, named PointConv, which can be applied on point clouds to build deep convolutional networks and is able to achieve state-of-the-art on challenging semantic segmentation benchmarks on 3D point clouds.
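The core of a dynamically generated point filter can be sketched as a small MLP that maps relative neighbor coordinates to per-neighbor weights; this is an illustrative simplification (the published PointConv additionally reweights by inverse point density and uses a memory-efficient reformulation).

# Hypothetical sketch of a point convolution with dynamically generated filters.
import torch
import torch.nn as nn

class DynamicFilterPointConv(nn.Module):
    def __init__(self, in_dim=32, out_dim=64):
        super().__init__()
        # WeightNet: maps a relative 3D offset to a per-neighbor filter.
        self.weight_net = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, in_dim))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, center, neighbors, neighbor_feats):
        # center: (B, 3), neighbors: (B, K, 3), neighbor_feats: (B, K, C_in)
        offsets = neighbors - center.unsqueeze(1)          # relative coordinates
        w = self.weight_net(offsets)                       # (B, K, C_in) generated weights
        aggregated = (w * neighbor_feats).mean(dim=1)      # weighted aggregation over the neighborhood
        return self.linear(aggregated)                     # (B, C_out)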