Corpus ID: 227209266

Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces

@inproceedings{Ma2021NeuralPullLS,
  title={Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces},
  author={Baorui Ma and Zhizhong Han and Yu-Shen Liu and Matthias Zwicker},
  booktitle={ICML},
  year={2021}
}
Reconstructing continuous surfaces from 3D point clouds is a fundamental operation in 3D geometry processing. Several recent state-of-the-art methods address this problem using neural networks to learn signed distance functions (SDFs). In this paper, we introduce Neural-Pull, a new approach that is simple and leads to high quality SDFs. Specifically, we train a neural network to pull query 3D locations to their closest neighbors on the surface using the predicted signed distance values and the… 
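The pulling operation described in the abstract can be sketched in a few lines: a query point is moved along the normalized gradient of the SDF by the predicted signed distance, landing on the zero level set. In the sketch below, an analytic sphere SDF and a finite-difference gradient stand in for the learned network and its autograd gradient (both are assumptions for illustration, not the paper's implementation); during training, the pulled point would be compared against the query's nearest neighbor on the input point cloud.

```python
import math

# Toy analytic SDF of a unit sphere centered at the origin.
# In Neural-Pull this would be a learned neural network; the closed-form
# stand-in here is purely illustrative.
def sdf(q):
    return math.sqrt(sum(c * c for c in q)) - 1.0

def grad_sdf(q, eps=1e-6):
    # Central finite-difference gradient; the paper differentiates the
    # network with automatic differentiation instead.
    g = []
    for i in range(3):
        qp, qm = list(q), list(q)
        qp[i] += eps
        qm[i] -= eps
        g.append((sdf(qp) - sdf(qm)) / (2 * eps))
    return g

def pull(q):
    # Core pulling operation: move q along the normalized gradient
    # direction by the predicted signed distance.
    d = sdf(q)
    g = grad_sdf(q)
    n = math.sqrt(sum(c * c for c in g))
    return [qi - d * gi / n for qi, gi in zip(q, g)]

p = pull([0.0, 0.0, 2.0])  # query outside the sphere
# p lands approximately on the surface point [0, 0, 1]
```

Training would minimize the distance between `pull(q)` and the surface point nearest to `q`, which supervises both the signed distance value and its gradient at once.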
Neural-IMLS: Learning Implicit Moving Least-Squares for Surface Reconstruction from Unoriented Point Clouds
This paper introduces Neural-IMLS, a novel approach that learns a noise-resistant signed distance function (SDF) directly from unoriented raw point clouds, and proves that when the two SDFs coincide, the network predicts a signed implicit function whose zero level set is a good approximation of the underlying surface.
Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors
The key idea is to infer signed distances by pushing query projections onto the surface while minimizing the projection distance; the method achieves state-of-the-art reconstruction accuracy, especially for sparse point clouds.
Semi-signed neural fitting for surface reconstruction from unoriented point clouds
Reconstructing 3D geometry from unoriented point clouds can benefit many downstream tasks. Recent methods mostly adopt a neural shape representation, using a neural network to represent a signed distance function.
Facial Geometric Detail Recovery via Implicit Representation
This work presents a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image and registers the implicit shape details to a 3D Morphable Model template, which can be used in traditional modeling and rendering pipelines.
Surface Reconstruction from Point Clouds by Learning Predictive Context Priors
Predictive Context Priors are introduced by learning predictive queries for each specific point cloud at inference time; the query prediction enables the learned local context prior to cover the entire prior space rather than being restricted to the query locations, which improves generalizability.
Minimal Neural Atlas: Parameterizing Complex Surfaces with Minimal Charts and Distortion
This work presents Minimal Neural Atlas, a novel atlas-based explicit neural surface representation given by an implicit probabilistic occupancy field defined on an open square of the parametric space; it can learn a minimal atlas of three charts with distortion-minimal parameterization for surfaces of arbitrary topology.
A Repulsive Force Unit for Garment Collision Handling in Neural Networks
This work proposes a novel collision handling neural network layer called Repulsive Force Unit (ReFU), which predicts the per-vertex offsets that push any interpenetrating vertex to a collision-free configuration while preserving the fine geometric details.
Few 'Zero Level Set'-Shot Learning of Shape Signed Distance Functions in Feature Space
This work combines two types of implicit neural network conditioning mechanisms simultaneously for the first time, namely feature encoding and meta-learning, and shows that for implicit reconstruction from a sparse point cloud this strategy outperforms the existing alternatives, namely standard supervised learning in feature space and meta-learning in Euclidean space, while still providing fast inference.
Leveraging Monocular Disparity Estimation for Single-View Reconstruction
A method is introduced that creates a 3D point cloud from disparity and combines it with existing information to form a more faithful and detailed 3D geometry.
POCO: Point Convolution for Surface Reconstruction — Supplementary material —
FKAConv [4] is used as the convolutional backbone with its default parameters (number of layers and layer channels); only the latent vector size n, i.e., the output dimension of the backbone, was changed, to 32.
...
...

References

Showing 1–10 of 88 references
Points2Surf: Learning Implicit Surfaces from Point Clouds
Points2Surf is presented, a novel patch-based learning framework that produces accurate surfaces directly from raw scans without normals at the cost of longer computation times and a slight increase in small-scale topological noise in some cases.
Occupancy Networks: Learning 3D Reconstruction in Function Space
This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction methods that encodes a description of the 3D output at infinite resolution without excessive memory footprint, and validate that the representation can efficiently encode 3D structure and can be inferred from various kinds of input.
A Papier-Mâché Approach to Learning 3D Surface Generation
This work introduces a method for learning to generate the surface of 3D shapes as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape.
Neural Unsigned Distance Fields for Implicit Function Learning
This work proposes Neural Distance Fields (NDF), a neural network based model which predicts the unsigned distance field for arbitrary 3D shapes given sparse point clouds, and finds NDF can be used for multi-target regression with techniques that have been exclusively used for rendering in graphics.
Local Implicit Grid Representations for 3D Scenes
This paper introduces Local Implicit Grid Representations, a new 3D shape representation designed for scalability and generality and demonstrates the value of this proposed approach for 3D surface reconstruction from sparse point observations, showing significantly better results than alternative approaches.
Deep Level Sets: Implicit Surface Representations for 3D Shape Inference
An end-to-end trainable model is proposed that directly predicts implicit surface representations of arbitrary topology by optimising a novel geometric loss function, incorporated into a deep end-to-end learning framework through a variational shape inference formulation.
DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
Deep Geometric Prior for Surface Reconstruction
This work proposes a deep neural network as a geometric prior for surface reconstruction, overfitting a network that represents a local chart parameterization to part of an input point cloud, using the Wasserstein distance as a measure of approximation.
Deep Marching Cubes: Learning Explicit Surface Representations
This paper demonstrates that the marching cubes algorithm is not differentiable and proposes an alternative differentiable formulation which is inserted as a final layer into a 3D convolutional neural network, and proposes a set of loss functions which allow for training the model with sparse point supervision.
ShapeNet: An Information-Rich 3D Model Repository
ShapeNet is a collection of datasets containing 3D models from a multitude of semantic categories, organized under the WordNet taxonomy, and providing many semantic annotations for each 3D model, such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, and keywords, as well as other planned annotations.
...
...