Sign-Agnostic Implicit Learning of Surface Self-Similarities for Shape Modeling and Reconstruction from Raw Point Clouds

@article{Zhao2021SignAgnosticIL,
  title={Sign-Agnostic Implicit Learning of Surface Self-Similarities for Shape Modeling and Reconstruction from Raw Point Clouds},
  author={Wenbin Zhao and Jiabao Lei and Yuxin Wen and Jianguo Zhang and Kui Jia},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={10251-10260}
}
  • Published 14 December 2020
  • Computer Science
Shape modeling and reconstruction from raw point clouds of objects stand as a fundamental challenge in vision and graphics research. Classical methods consider analytic shape priors; however, their performance is degraded when the scanned points deviate from the ideal conditions of cleanness and completeness. Important progress has been recently made by data-driven approaches, which learn global and/or local models of implicit surface representations from auxiliary sets of training shapes… 
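The sign-agnostic idea the title refers to (following SAL) can be sketched in a few lines. The snippet below is an illustrative NumPy version, not the paper's implementation: it regresses the absolute value of a candidate implicit function `f` toward the unsigned distance `h(x)` to the raw scan, so no ground-truth inside/outside labels or oriented normals are required.

```python
import numpy as np

def unsigned_distance(queries, points):
    """h(x): distance from each query point to its nearest raw-scan point."""
    d = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=-1)
    return d.min(axis=1)

def sign_agnostic_loss(f_vals, queries, points):
    """SAL-style regression loss | |f(x)| - h(x) |, agnostic to the sign of f."""
    h = unsigned_distance(queries, points)
    return float(np.mean(np.abs(np.abs(f_vals) - h)))
```

Because only |f| enters the loss, f and -f score identically; a consistent global sign then has to emerge from the network initialization rather than from labeled data.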

Citations

Learning Parallel Dense Correspondence from Spatio-Temporal Descriptors for Efficient and Robust 4D Reconstruction

TLDR
This work presents a novel pipeline that learns the temporal evolution of 3D human shape through spatially continuous transformation functions among cross-frame occupancy fields, by explicitly learning continuous displacement vector fields from robust spatio-temporal shape representations.

Neural Wavelet-domain Diffusion for 3D Shape Generation

TLDR
A compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets is proposed, enabling direct generative modeling on a continuous implicit representation in wavelet domain.

Latent Partition Implicit with Surface Codes for 3D Representation

TLDR
The insight here is that both the part learning and the part blending can be conducted much more easily in the latent space than in the spatial space; LPI outperforms the latest methods on widely used benchmarks in terms of reconstruction accuracy and modeling interpretability.

POCO: Point Convolution for Surface Reconstruction — Supplementary material —

TLDR
FKAConv [4] is used as the convolutional backbone with default parameters (number of layers, number of layer channels); only the latent vector size n, i.e., the output dimension of the backbone, is changed to 32.

Surface Reconstruction from Point Clouds: A Survey and a Benchmark

TLDR
The present paper contributes a large-scale benchmarking dataset consisting of both synthetic and real-scanned data, and conducts thorough empirical studies comparing existing methods on the constructed benchmark, with special attention to the robustness of existing methods against various scanning imperfections.

Surface Reconstruction from Point Clouds by Learning Predictive Context Priors

TLDR
Predictive Context Priors are introduced by learning predictive queries for each specific point cloud at inference time; the query prediction lets the learned local context prior cover the entire prior space rather than being restricted to the query locations, which improves generalizability.

Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors

TLDR
The key idea is to infer signed distances by pushing the query projections onto the surface while minimizing the projection distance; the method achieves state-of-the-art reconstruction accuracy, especially for sparse point clouds.

GIFS: Neural Implicit Function for General Shape Representation

TLDR
Instead of dividing 3D space into predefined inside-outside regions, GIFS encodes whether two points are separated by any surface; it outperforms previous state-of-the-art methods in terms of reconstruction quality, rendering efficiency, and visual fidelity.

Deep Surface Reconstruction from Point Clouds with Visibility Information

Most current neural networks for reconstructing surfaces from point clouds ignore sensor poses and only operate on raw point locations. Sensor visibility, however, holds meaningful information…

POCO: Point Convolution for Surface Reconstruction

TLDR
This work proposes to use point cloud convolutions and compute latent vectors at each input point, and performs a learning-based interpolation on nearest neighbors using inferred weights, which significantly outperforms other methods on most classical metrics.

References

SHOWING 1-10 OF 46 REFERENCES

Points2Surf: Learning Implicit Surfaces from Point Cloud Patches

TLDR
This work presents Points2Surf, a novel patch-based learning framework that produces accurate surfaces directly from raw scans without normals at the cost of longer computation times and a slight increase in small-scale topological noise in some cases.

SAL: Sign Agnostic Learning of Shapes From Raw Data

  • Matan Atzmon, Y. Lipman
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR
This paper introduces Sign Agnostic Learning (SAL), a deep learning approach for learning implicit shape representations directly from raw, unsigned geometric data, such as point clouds and triangle soups; the authors believe it opens the door to many geometric deep learning applications with real-world data.

PCPNet: Learning Local Shape Properties from Raw Point Clouds

TLDR
The utility of the PCPNet approach in the context of shape reconstruction is demonstrated by showing how it can be used to extract normal orientation information from point clouds.

Implicit Geometric Regularization for Learning Shapes

TLDR
It is observed that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level set surfaces, avoiding bad zero-loss solutions.
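That loss can be sketched concretely. The snippet below is a hypothetical NumPy illustration, assuming an implicit function `f` mapping an (N, 3) array of points to N values: the data term pulls `f` toward zero on the input cloud, and an eikonal term, here estimated with central finite differences (autodiff in practice), pushes the gradient toward unit norm.

```python
import numpy as np

def eikonal_penalty(f, xs, eps=1e-4):
    """Mean of (||grad f|| - 1)^2, gradients via central finite differences."""
    grads = np.stack(
        [(f(xs + eps * e) - f(xs - eps * e)) / (2 * eps) for e in np.eye(3)],
        axis=-1,
    )
    return float(np.mean((np.linalg.norm(grads, axis=-1) - 1.0) ** 2))

def igr_loss(f, surface_pts, sample_pts, lam=0.1):
    """Encourage f to vanish on the point cloud + unit-gradient regularizer."""
    return float(np.mean(np.abs(f(surface_pts)))) + lam * eikonal_penalty(f, sample_pts)
```

As a sanity check, an exact signed distance function such as `f(x) = ||x|| - 1` (a unit sphere) incurs nearly zero loss when `surface_pts` lie on the sphere, since its gradient has unit norm everywhere away from the origin.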

SAL++: Sign Agnostic Learning with Derivatives

TLDR
SAL++ is introduced: a method for learning implicit neural representations of shapes directly from raw data through a novel sign-agnostic regression loss, incorporating both pointwise values and gradients of the unsigned distance function.

Local Implicit Grid Representations for 3D Scenes

TLDR
This paper introduces Local Implicit Grid Representations, a new 3D shape representation designed for scalability and generality, and demonstrates its value for 3D surface reconstruction from sparse point observations, showing significantly better results than alternative approaches.

SkeletonNet: A Topology-Preserving Solution for Learning Mesh Reconstruction of Object Surfaces From RGB Images

TLDR
Two models are proposed, SkeGCNN and SkeDISN, which respectively build upon and improve over the existing frameworks of explicit mesh deformation and implicit field learning for the downstream surface reconstruction task.

A Point Set Generation Network for 3D Object Reconstruction from a Single Image

TLDR
This paper addresses the problem of 3D reconstruction from a single image, generating an unorthodox form of output: a point set. It designs an architecture, loss function, and learning paradigm that are novel and effective, capable of predicting multiple plausible 3D point clouds from an input image.

Learning Implicit Fields for Generative Shape Modeling

  • Zhiqin Chen, Hao Zhang
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR
By replacing conventional decoders by the implicit decoder for representation learning and shape generation, this work demonstrates superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.

AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation

TLDR
A method is presented for learning to generate the surface of 3D shapes as a collection of parametric surface elements; in contrast to methods generating voxel grids or point clouds, it naturally infers a surface representation of the shape.