Exploring differential geometry in neural implicits

Tiago Novello, Guilherme Gonçalves Schardong, Luiz Schirmer, Vinícius da Silva, Hélio Lopes, Luiz Velho. Comput. Graph.

Neural Implicit Surface Evolution using Differential Equations

This work investigates the use of smooth neural networks for modeling dynamic variations of implicit surfaces under partial differential equations (PDEs).
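
The kind of surface dynamics this line of work encodes in a network can be illustrated with a classical grid-based level-set solver. The sketch below is a toy setup of my own (not the paper's neural method): it shrinks a circle at constant normal speed v by integrating phi_t = v*|grad phi|, which moves the zero-level set inward.

```python
import numpy as np

# Toy grid-based level-set evolution (classical, not the paper's neural
# approach): shrink a circle at constant normal speed v by integrating
# phi_t = v * |grad phi|, which moves the zero-level set inward.
n = 201
xs = np.linspace(-1.0, 1.0, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5       # signed distance to a circle of radius 0.5

v, dt, steps = 0.2, 1e-3, 500          # evolve for t = 0.5 time units
for _ in range(steps):
    gx, gy = np.gradient(phi, h)       # central differences; adequate for a smooth SDF
    phi += dt * v * np.sqrt(gx**2 + gy**2)

# analytically the radius is now 0.5 - v * steps * dt = 0.4
area = (phi < 0).sum() * h * h         # measure the enclosed area on the grid
radius = (area / np.pi) ** 0.5
```

A neural-implicit formulation would instead represent phi(x, t) by a smooth network and enforce the same PDE through its automatic derivatives rather than on a grid.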

Neural Implicit Mapping via Nested Neighborhoods

Neural normal mapping transfers details from a neural SDF to a surface nested in a neighborhood of its zero-level set; it can be used to fetch smooth normals for discrete surfaces such as meshes, and to skip later iterations when sphere tracing level sets.
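
Sphere tracing, which the snippet above builds on, can be sketched against an analytic SDF. In this illustrative toy (my own parameters, not the paper's pipeline), the `band` tolerance plays the role of stopping inside a thin neighborhood of the zero-level set, where normals could then be fetched from the SDF gradient instead of iterating further.

```python
import numpy as np

# Minimal sphere tracing against an analytic SDF (a unit sphere); a neural
# SDF would replace `sdf`.  Stopping once d falls inside a thin band of the
# zero-level set mimics skipping the last iterations near the surface.
def sdf(p):
    return np.linalg.norm(p) - 1.0

def sphere_trace(origin, direction, band=1e-4, max_iter=128):
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_iter):
        d = sdf(origin + t * direction)
        if d < band:                   # entered the neighborhood: stop early
            return t
        t += d                         # safe step: the ball of radius d is empty
    return t

t_hit = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```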

ORCa: Glossy Objects as Radiance Field Cameras

It is shown that recovering the environment radiance enables depth and radiance estimation from the object to its surroundings, as well as beyond-field-of-view novel-view synthesis, i.e., rendering novel views that are directly visible only to the glossy object in the scene, not to the observer.

Understanding Sinusoidal Neural Networks

It is proved that the composition of sinusoidal layers expands as a sum of sines consisting of a large number of new frequencies given by linear combinations of the weights of the network’s first layer.
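
This expansion can be checked numerically in a small special case of my own choosing: for f(x) = sin(2·sin(3x)), a two-layer composition with a single inner frequency, the Jacobi-Anger identity predicts spectral energy only at odd multiples of 3, i.e., at integer combinations of the first layer's weight.

```python
import numpy as np

# Spectrum of a composed sinusoid f(x) = sin(2*sin(3x)): by the Jacobi-Anger
# expansion it is a sum of sines at odd multiples of the inner frequency 3.
N = 1024
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
f = np.sin(2.0 * np.sin(3.0 * x))

spec = np.abs(np.fft.rfft(f)) / N      # half the amplitude of each harmonic
allowed = {k for k in range(len(spec)) if k % 3 == 0 and (k // 3) % 2 == 1}
leakage = sum(spec[k] for k in range(len(spec)) if k not in allowed)
```

The leakage outside the predicted frequencies is at the level of numerical noise, while the fundamental at frequency 3 carries most of the energy.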

Neural Implicit Surfaces in Higher Dimension

This work investigates the use of neural networks admitting high-order derivatives for modeling dynamic variations of smooth implicit surfaces.

Implicit Geometric Regularization for Learning Shapes

It is observed that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level set surfaces, avoiding bad zero-loss solutions.
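
The loss described above can be sketched concretely, with finite differences standing in for the automatic differentiation a real training setup would use (the candidate functions, weighting, and sampling below are illustrative choices of mine):

```python
import numpy as np

# Sketch of the implicit geometric regularization loss: the candidate
# implicit f should vanish on the point cloud (data term) and have a
# unit-norm gradient at sampled domain points (eikonal term).
def igr_loss(f, surface_pts, domain_pts, eps=1e-4, lam=0.1):
    data_term = np.mean(f(surface_pts) ** 2)
    grads = np.stack([(f(domain_pts + eps * e) - f(domain_pts - eps * e)) / (2 * eps)
                      for e in np.eye(domain_pts.shape[1])], axis=1)
    eikonal_term = np.mean((np.linalg.norm(grads, axis=1) - 1.0) ** 2)
    return data_term + lam * eikonal_term

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2 * np.pi, 256)
surface = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # unit-circle point cloud
domain = rng.uniform(-1.5, 1.5, (512, 2))

true_sdf = lambda p: np.linalg.norm(p, axis=1) - 1.0          # unit gradient norm
scaled = lambda p: 2.0 * (np.linalg.norm(p, axis=1) - 1.0)    # violates the eikonal term
loss_true = igr_loss(true_sdf, surface, domain)
loss_scaled = igr_loss(scaled, surface, domain)
```

Both candidates vanish on the point cloud, but only the true signed distance also satisfies the unit-gradient condition, so the loss separates them.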

Implicit Surface Representations As Layers in Neural Networks

This work proposes a novel formulation that permits implicit representations of curves and surfaces of arbitrary topology to be used as individual layers in neural network architectures with end-to-end trainability, representing the output as an oriented level set of a continuous and discretized embedding function.

MIP-plicits: Level of Detail Factorization of Neural Implicits Sphere Tracing

We introduce MIP-plicits, a novel approach for rendering 3D and 4D Neural Implicits that divides the problem into macro and meso components and relies on the iterative nature of sphere tracing.

Geometry Processing with Neural Fields

This paper proposes the use of neural fields for geometry processing, and introduces loss functions and architectures to show that some of the most challenging geometry processing tasks, such as deformation and filtering, can be done with neural fields.

Discrete Differential-Geometry Operators for Triangulated 2-Manifolds

A unified and consistent set of flexible tools is proposed to approximate important geometric attributes, including normal vectors and curvatures, on arbitrary triangle meshes, using averaging Voronoi cells and a mixed finite-element/finite-volume method.
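
One of the quantities such operators estimate, integrated Gaussian curvature, reduces to the angle deficit at each vertex (the Voronoi-area normalization from the paper is omitted in this toy check on a regular tetrahedron):

```python
import numpy as np

# Discrete Gaussian curvature via the angle deficit at each vertex:
# integrated K_i = 2*pi - sum of triangle angles incident at vertex i.
# Gauss-Bonnet predicts the deficits of any closed mesh sum to 4*pi.
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
faces = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]   # regular tetrahedron

def corner_angle(a, b, c):
    """Angle at vertex a of triangle (a, b, c)."""
    u, v = b - a, c - a
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

deficits = np.full(len(verts), 2 * np.pi)
for i, j, k in faces:
    deficits[i] -= corner_angle(verts[i], verts[j], verts[k])
    deficits[j] -= corner_angle(verts[j], verts[k], verts[i])
    deficits[k] -= corner_angle(verts[k], verts[i], verts[j])
```

Each tetrahedron vertex meets three equilateral angles of pi/3, giving a deficit of pi per vertex and 4*pi in total, as Gauss-Bonnet requires.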

Implicit Neural Representations with Periodic Activation Functions

This work proposes to leverage periodic activation functions for implicit neural representations and demonstrates that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
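
A key property behind this suitability for derivatives is that a sine layer's gradient is again a (phase-shifted) sinusoid. The single-layer sketch below uses illustrative sizes and frequency scale of my own, not the paper's initialization scheme, and checks the analytic Jacobian against finite differences:

```python
import numpy as np

# One SIREN-style layer y = sin(w0 * (W @ x + b)) and its analytic input
# gradient, which is itself a shifted sinusoid of the same frequencies.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3)) / 3.0      # illustrative sizes, not the paper's init
b = rng.normal(size=4)
w0 = 30.0

def layer(x):
    return np.sin(w0 * (W @ x + b))

def layer_grad(x):
    # d/dx sin(w0*(Wx+b)) = w0 * cos(w0*(Wx+b)) * W, row by row
    return w0 * np.cos(w0 * (W @ x + b))[:, None] * W

x = rng.normal(size=3)
eps = 1e-6
fd = np.stack([(layer(x + eps * e) - layer(x - eps * e)) / (2 * eps)
               for e in np.eye(3)], axis=1)   # finite-difference Jacobian
```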

BSP-Net: Generating Compact Meshes via Binary Space Partitioning

BSP-Net is a network that learns to represent a 3D shape via convex decomposition; training is unsupervised, since no ground-truth convex decompositions are needed, and reconstruction quality is competitive with state-of-the-art methods while using far fewer primitives.
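
The primitive structure such a decomposition assembles can be sketched directly: a convex part is an intersection of half-spaces, encoded as a max over plane functions, and a shape is a union (min) of parts. The 2D squares below are hand-built for illustration, not learned:

```python
import numpy as np

# A convex part as an intersection of half-planes n_i . x <= d_i, encoded
# as h(x) = max_i(n_i . x - d_i), which is <= 0 exactly inside the part;
# a shape is then the union (pointwise min) of its convex parts.
square = (np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]),
          np.array([1.0, 1.0, 1.0, 1.0]))          # the square |x| <= 1, |y| <= 1

def convex_value(x, planes):
    normals, offsets = planes
    return np.max(normals @ x - offsets)

def union_value(x, parts):
    return min(convex_value(x, p) for p in parts)

# translate the square by t = (2.5, 0): n . x <= d becomes n . x <= d + n . t
t = np.array([2.5, 0.0])
shifted = (square[0], square[1] + square[0] @ t)
```

A learned model would predict the plane parameters and the grouping of planes into parts; the inside/outside test itself stays this simple.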

Reconstruction and representation of 3D objects with radial basis functions

It is shown that the RBF representation has advantages for mesh simplification and remeshing applications; a greedy algorithm in the fitting process reduces the number of RBF centers required to represent a surface, resulting in significant compression and further computational advantages.
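
The basic fitting step can be sketched in 2D: interpolate s(x) = sum_i c_i k(|x - x_i|) through on-surface points (value 0) and normal-offset points (value ±delta), so the zero level set of s approximates the curve. A Gaussian kernel is used here so the system is well posed without a polynomial term; the polyharmonic splines and greedy center reduction from the paper are omitted in this sketch:

```python
import numpy as np

# RBF implicit fit on a unit circle: on-surface samples carry value 0 and
# radially offset samples carry +-delta, mimicking signed-distance data.
theta = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)
on_surf = np.stack([np.cos(theta), np.sin(theta)], axis=1)
delta = 0.3
centers = np.vstack([on_surf, (1 + delta) * on_surf, (1 - delta) * on_surf])
values = np.concatenate([np.zeros(12), np.full(12, delta), np.full(12, -delta)])

def kernel(r, sigma=0.5):
    return np.exp(-(r / sigma) ** 2)   # strictly positive definite kernel

A = kernel(np.linalg.norm(centers[:, None] - centers[None, :], axis=2))
coeffs = np.linalg.solve(A, values)    # interpolation conditions s(x_i) = v_i

def s(x):
    """Evaluate the fitted implicit at a single 2D point."""
    return kernel(np.linalg.norm(centers - x, axis=1)) @ coeffs
```

The fitted s interpolates its constraints exactly and changes sign across the circle, which is what marching-squares-style extraction of the zero level set relies on.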

CvxNet: Learnable Convex Decomposition

This work introduces a network architecture to represent a low-dimensional family of convexes, automatically derived via an auto-encoding process, and investigates applications including automatic convex decomposition, image-to-3D reconstruction, and part-based shape retrieval.