SAL: Sign Agnostic Learning of Shapes From Raw Data

  • Matan Atzmon and Yaron Lipman
  • Published 23 November 2019
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Recently, neural networks have been used as implicit representations for surface reconstruction, modelling, learning, and generation. So far, training neural networks to be implicit representations of surfaces required training data sampled from ground-truth signed implicit functions, such as signed distance or occupancy functions, which are notoriously hard to compute. In this paper we introduce Sign Agnostic Learning (SAL), a deep learning approach for learning implicit shape representations…
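The core idea in the abstract — regressing the magnitude of a learned implicit against the unsigned distance to the raw point cloud, so no ground-truth sign is ever needed — can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: `f_x` stands in for the network's predicted value at a query point, and the distance computation is brute force.

```python
import math

def unsigned_distance(x, point_cloud):
    # Unsigned distance from a query point x to the raw (unoriented) point
    # cloud: computable directly from the data, no ground-truth sign required.
    return min(math.dist(x, p) for p in point_cloud)

def sign_agnostic_loss(f_x, h_x):
    # Sign agnostic regression loss: penalize the gap between |f(x)| and the
    # unsigned distance h(x). Both f and -f attain the same loss, so the
    # learned implicit is free to settle into a consistent sign on its own.
    return abs(abs(f_x) - h_x)
```

Note the sign symmetry: `sign_agnostic_loss(0.5, 0.5)` and `sign_agnostic_loss(-0.5, 0.5)` are both zero, which is exactly what lets training proceed from unsigned raw data.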


SAL++: Sign Agnostic Learning with Derivatives
SAL++ is introduced: a method for learning implicit neural representations of shapes directly from raw data through a novel sign agnostic regression loss that incorporates both pointwise values and gradients of the unsigned distance function.
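The blurb above says the loss matches both values and gradients of the unsigned distance in a sign agnostic way. One plausible reading, sketched in plain Python (function and parameter names are hypothetical, not the paper's): since the unsigned distance gradient is only defined up to sign, compare the predicted gradient against both +∇h and −∇h and keep the closer one.

```python
import math

def sign_agnostic_grad_loss(f_x, grad_f, h_x, grad_h):
    # Value term: |f(x)| should match the unsigned distance h(x).
    value_term = abs(abs(f_x) - h_x)
    # Derivative term: the gradient of f should match the gradient of the
    # unsigned distance up to sign, so compare against both +grad_h and
    # -grad_h and keep the smaller discrepancy.
    d_plus = math.dist(grad_f, grad_h)
    d_minus = math.dist(grad_f, tuple(-g for g in grad_h))
    return value_term + min(d_plus, d_minus)
```

A network predicting the negated distance field still scores zero here, preserving the sign agnostic property while adding first-order supervision.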
LightSAL: Lightweight Sign Agnostic Learning for Implicit Surface Representation
This work proposes LightSAL, a novel lightweight deep convolutional architecture for learning 3D shapes, trained with the recent Sign Agnostic Learning approach: the network predicts a signed distance field while using only the unsigned distance as ground truth.
Sign-Agnostic CONet: Learning Implicit Surface Reconstructions by Sign-Agnostic Optimization of Convolutional Occupancy Networks
This paper proposes to learn implicit surface reconstruction by sign-agnostic optimization of convolutional occupancy networks, to simultaneously achieve advanced scalability, generality, and applicability in a unified framework and shows this goal can be effectively achieved by a simple yet effective design.
Sign-Agnostic Implicit Learning of Surface Self-Similarities for Shape Modeling and Reconstruction from Raw Point Clouds
With a global post-optimization of local sign flipping, SAIL-S3 directly models raw, unoriented point clouds and reconstructs high-quality object surfaces; experiments show its superiority over existing methods.
Neural-IMLS: Learning Implicit Moving Least-Squares for Surface Reconstruction from Unoriented Point clouds
This paper introduces Neural-IMLS, a novel approach that learns a noise-resistant signed distance function (SDF) directly from unoriented raw point clouds, and proves that when the two SDFs coincide, the neural network predicts a signed implicit function whose zero level set is a good approximation of the underlying surface.
SA-ConvONet: Sign-Agnostic Optimization of Convolutional Occupancy Networks
This work proposes to learn implicit surface reconstruction by sign-agnostic optimization of convolutional occupancy networks, to simultaneously achieve advanced scalability to large-scale scenes, generality to novel shapes, and applicability to raw scans in a unified framework.
Implicit Geometric Regularization for Learning Shapes
It is observed that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level set surfaces, avoiding bad zero-loss solutions.
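The loss described above — vanish on the input point cloud, keep a unit-norm gradient elsewhere — can be written down directly. A minimal sketch in plain Python, where `f` and `grad_f` are callables standing in for the network and its gradient, and the weight `lam` is an assumed hyperparameter, not a value from the paper:

```python
import math

def igr_loss(f, grad_f, surface_pts, domain_pts, lam=0.1):
    # Data term: the implicit function should vanish on the input point cloud.
    data = sum(f(p) ** 2 for p in surface_pts) / len(surface_pts)
    # Eikonal term: encourage a unit-norm gradient at points sampled in the
    # ambient domain -- the implicit geometric regularizer described above.
    eik = sum((math.hypot(*grad_f(p)) - 1.0) ** 2
              for p in domain_pts) / len(domain_pts)
    return data + lam * eik
```

For intuition: a plane through the origin, `f(p) = p[0]` with constant gradient `(1, 0)`, satisfies both terms exactly, which is the kind of smooth, natural zero level set the regularizer favors over degenerate zero-loss solutions.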
Neural Unsigned Distance Fields for Implicit Function Learning
This work proposes Neural Distance Fields (NDF), a neural network based model which predicts the unsigned distance field for arbitrary 3D shapes given sparse point clouds, and finds NDF can be used for multi-target regression with techniques that have been exclusively used for rendering in graphics.
PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations
This paper introduces a novel method to learn this patch-based representation in a canonical space, such that it is as object-agnostic as possible and can be trained using much fewer shapes, compared to existing approaches.


Learning 3D Shape Completion from Laser Scan Data with Weak Supervision
This work proposes a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision and is able to compete with a fully supervised baseline and a state-of-the-art data-driven approach while being significantly faster.
Occupancy Networks: Learning 3D Reconstruction in Function Space
This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction methods that encodes a description of the 3D output at infinite resolution without excessive memory footprint, and validate that the representation can efficiently encode 3D structure and can be inferred from various kinds of input.
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
SurfNet: Generating 3D Shape Surfaces Using Deep Residual Networks
This work develops a procedure to create a consistent shape surface representation for a category of 3D objects, and uses this representation for category-specific shape surface generation from a parametric representation or an image, developing novel extensions of deep residual networks for the task of geometry image generation.
Deep Learning 3D Shape Surfaces Using Geometry Images
This work qualitatively and quantitatively validates that creating geometry images using authalic parametrization on a spherical domain is suitable for robust learning of 3D shape surfaces, and proposes a way to implicitly learn the topology and structure of 3D shapes using geometry images encoded with suitable features.
DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
Deep Geometric Prior for Surface Reconstruction
This work proposes the use of a deep neural network as a geometric prior for surface reconstruction, and overfits a neural network representing a local chart parameterization to part of an input point cloud, using the Wasserstein distance as a measure of approximation.
Deformable Shape Completion with Graph Convolutional Autoencoders
This work proposes a novel learning-based method for the completion of partial shapes using a variational autoencoder with graph convolutional operations, which learns a latent space of complete realistic shapes and fits the generated shape to the known partial input.
A Papier-Mâché Approach to Learning 3D Surface Generation
This work introduces a method for learning to generate the surface of 3D shapes as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape.
Multi-chart generative surface modeling
This work introduces a 3D shape generative model based on deep neural networks that learns the shape distribution and is able to generate novel shapes, interpolate between shapes, and explore the generated shape space, demonstrated on human body and bone (teeth) shape generation.