Corpus ID: 236318384

A Deep Signed Directional Distance Function for Object Shape Representation

@article{Zobeidi2021ADS,
  title={A Deep Signed Directional Distance Function for Object Shape Representation},
  author={Ehsan Zobeidi and Nikolay A. Atanasov},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.11024}
}
Neural networks that map 3D coordinates to signed distance function (SDF) or occupancy values have enabled high-fidelity implicit representations of object shape. This paper develops a new shape model that allows synthesizing novel distance views by optimizing a continuous signed directional distance function (SDDF). Similar to deep SDF models, our SDDF formulation can represent whole categories of shapes and complete or interpolate across shapes from partial input data. Unlike an SDF, which measures the distance to the closest surface in any direction, an SDDF measures the distance along a given viewing direction.
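The abstract describes a function that takes a 3D query point together with a viewing direction and returns a signed distance along that direction. Below is a minimal sketch of such a network, assuming PyTorch; the class name SDDFNet, the layer sizes, and the plain MLP architecture are illustrative assumptions, not the authors' implementation, and a category-level variant would typically also condition on a latent shape code (as in DeepSDF), which is omitted here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SDDFNet(nn.Module):
    """MLP mapping a 3D point and a unit viewing direction to a signed directional distance."""
    def __init__(self, hidden_dim: int = 256, num_layers: int = 4):
        super().__init__()
        layers = []
        in_dim = 6  # 3D point concatenated with 3D unit viewing direction
        for _ in range(num_layers):
            layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
            in_dim = hidden_dim
        layers.append(nn.Linear(hidden_dim, 1))  # scalar distance along the queried ray
        self.mlp = nn.Sequential(*layers)

    def forward(self, points: torch.Tensor, directions: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) query positions; directions: (N, 3) unit view directions.
        return self.mlp(torch.cat([points, directions], dim=-1))  # (N, 1) predicted distances

# Usage: query predicted distances for a batch of rays.
net = SDDFNet()
p = torch.randn(8, 3)                       # ray origins
v = F.normalize(torch.randn(8, 3), dim=-1)  # unit viewing directions
d = net(p, v)                               # (8, 1) signed directional distances

In a setup like this, synthesizing a distance (depth) view reduces to one forward pass per pixel ray, with no sphere tracing or explicit surface extraction. This plain MLP only illustrates the input/output interface; it does not enforce any geometric structure that an SDDF satisfies along a ray.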

