Neural Fields as Learnable Kernels for 3D Reconstruction

@article{Williams2021NeuralFA,
  title={Neural Fields as Learnable Kernels for 3D Reconstruction},
  author={Francis Williams and Zan Gojcic and S. Khamis and Denis Zorin and Joan Bruna and Sanja Fidler and Or Litany},
  journal={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022},
  pages={18479-18489}
}
We present Neural Kernel Fields: a novel method for reconstructing implicit 3D shapes based on a learned kernel ridge regression. Our technique achieves state-of-the-art results when reconstructing 3D objects and large scenes from sparse oriented points, and can reconstruct shape categories outside the training set with almost no drop in accuracy. The core insight of our approach is that kernel methods are extremely effective for reconstructing shapes when the chosen kernel has an appropriate… 
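
Since the summary above hinges on kernel ridge regression, a minimal NumPy sketch may help make the mechanics concrete. It substitutes a fixed Gaussian kernel for NKF's learned kernel; the off-surface sampling along normals, the function names, and all parameter values are illustrative assumptions rather than the paper's actual method.

import numpy as np

def fit_implicit_krr(points, normals, eps=0.01, lam=1e-6, sigma=0.1):
    """Fit an implicit function f: R^3 -> R by kernel ridge regression.

    Hypothetical stand-in for NKF's learned kernel: a fixed Gaussian
    kernel. Off-surface constraints are built by stepping +/- eps along
    each oriented normal, a standard trick for signed supervision.
    """
    # Training set: f = 0 on the surface, +eps outside, -eps inside.
    X = np.concatenate([points,
                        points + eps * normals,
                        points - eps * normals])
    y = np.concatenate([np.zeros(len(points)),
                        np.full(len(points), eps),
                        np.full(len(points), -eps)])

    # Gaussian Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))

    # Ridge-regularized solve: alpha = (K + lam * I)^{-1} y.
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

    def f(queries):
        """Evaluate the implicit function at query points of shape (m, 3)."""
        d2q = ((queries[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2q / (2 * sigma ** 2)) @ alpha

    return f  # the zero level set of f approximates the surface

NKF's contribution, per the abstract, is to learn the kernel (and hence the inductive bias) from data rather than fixing it a priori as done here.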

A Neural Galerkin Solver for Accurate Surface Reconstruction

NeuralGalerkin, a neural Galerkin-method-based solver designed to reconstruct highly accurate surfaces from input point clouds, demonstrates promising reconstruction performance and scalability.

Dual octree graph networks for learning adaptive volumetric shape representations

We present an adaptive deep representation of volumetric fields of 3D shapes and an efficient approach to learn this deep representation for high-quality 3D shape reconstruction and auto-encoding.

ROAD: Learning an Implicit Recursive Octree Auto-Decoder to Efficiently Encode 3D Shapes

A novel recursive representation that accurately encodes large datasets of complex 3D shapes by recursively traversing an implicit octree in latent space, enabling state-of-the-art reconstruction results at a compression ratio above 99%.

Few 'Zero Level Set'-Shot Learning of Shape Signed Distance Functions in Feature Space

This work combines two types of implicit neural network conditioning mechanisms, namely feature encoding and meta-learning, simultaneously for the first time, and shows that in the context of implicit reconstruction from a sparse point cloud, the proposed strategy of meta-learning in feature space outperforms the existing alternatives, namely standard supervised learning in feature space and meta-learning in Euclidean space, while still providing fast inference.

LION: Latent Point Diffusion Models for 3D Shape Generation

The hierarchical Latent Point Diffusion Model (LION) is introduced, set up as a variational autoencoder (VAE) with a hierarchical latent space that combines a global shape latent representation with a point-structured latent space for 3D shape generation.

ALTO: Alternating Latent Topologies for Implicit 3D Reconstruction

This paper proposes ALTO to sequentially alternate between geometric representations, before converging to an easy-to-decode latent, and shows that this preserves spatial expressiveness and makes decoding lightweight.

Surface Reconstruction from Point Clouds: A Survey and a Benchmark

The present paper contributes a large-scale benchmarking dataset consisting of both synthetic and real-scanned data, and conducts thorough empirical studies that compare existing methods on the constructed benchmark, paying special attention to the robustness of existing methods against various scanning imperfections.

What's the Situation with Intelligent Mesh Generation: A Survey and Perspectives

Focusing on 110 preliminary IMG methods, this survey conducts an in-depth analysis and evaluation from multiple perspectives, covering each algorithm's core technique and application scope, agent learning goals, data types, targeted challenges, advantages, and limitations.

Real-Time Interpolated Rendering of Terrain Point Cloud Data

This work proposes an alternative approach to point cloud rendering, which addresses the empty space between the points and tries to fill it with appropriate values to achieve the best possible output.

3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models

Fig. 1: Left, shape autoencoding results (surface reconstruction from point clouds); right, various downstream applications of 3DShape2VecSet, from top to bottom: (a) category-conditioned…

References


Secrets of 3D Implicit Object Shape Reconstruction in the Wild

Two simple yet effective modifications are introduced: a deep encoder that provides a better and more stable initialization for latent code optimization and a deep discriminator that serves as a prior model to boost the fidelity of the shape.

Local Deep Implicit Functions for 3D Shape

Local Deep Implicit Functions (LDIF), a 3D shape representation that decomposes space into a structured set of learned implicit functions, provides higher surface reconstruction accuracy than the state of the art (OccNet) while requiring fewer than 1% of the network parameters.

Local Implicit Grid Representations for 3D Scenes

This paper introduces Local Implicit Grid Representations, a new 3D shape representation designed for scalability and generality and demonstrates the value of this proposed approach for 3D surface reconstruction from sparse point observations, showing significantly better results than alternative approaches.
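
As a rough illustration of the local-grid idea described above, here is a small PyTorch sketch: a dense grid of latent codes is trilinearly interpolated at each query point and decoded by a shared MLP. The class name, dense-grid layout, resolution, and layer sizes are assumptions for illustration; the paper's actual formulation (including its part-level training) differs.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalImplicitGrid(nn.Module):
    """Sketch: a grid of latent codes, trilinearly interpolated at query
    points and decoded by a small shared MLP. Sizes are illustrative."""

    def __init__(self, res=16, latent_dim=32, hidden=128):
        super().__init__()
        self.grid = nn.Parameter(torch.zeros(1, latent_dim, res, res, res))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q):
        # q: (N, 3) query points in [-1, 1]^3; grid_sample expects the
        # last grid dimension ordered (x, y, z).
        g = q.view(1, -1, 1, 1, 3)
        z = F.grid_sample(self.grid, g, align_corners=True)  # (1, C, N, 1, 1)
        z = z.view(self.grid.shape[1], -1).t()               # (N, latent_dim)
        return self.decoder(torch.cat([z, q], dim=-1)).squeeze(-1)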

Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction

This work introduces Deep Local Shapes (DeepLS), a deep shape representation that enables encoding and reconstruction of high-quality 3D shapes without prohibitive memory requirements, and demonstrates the effectiveness and generalization power of this representation.

Occupancy Networks: Learning 3D Reconstruction in Function Space

This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction methods that encodes a description of the 3D output at infinite resolution without an excessive memory footprint, and validates that the representation can efficiently encode 3D structure and can be inferred from various kinds of input.
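
A minimal sketch of the core idea, assuming a simple conditional MLP (the class name and layer sizes are illustrative, not the paper's architecture): the decoder maps any continuous query point plus a shape code to an occupancy probability, so a mesh can later be extracted at arbitrary resolution, e.g. as the 0.5 level set via marching cubes.

import torch
import torch.nn as nn

class OccupancyDecoder(nn.Module):
    """Illustrative conditional occupancy decoder: (point, shape code)
    -> occupancy probability in [0, 1]."""

    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, query_pts, latent):
        # query_pts: (B, N, 3); latent: (B, latent_dim).
        z = latent[:, None, :].expand(-1, query_pts.shape[1], -1)
        logits = self.net(torch.cat([query_pts, z], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)  # occupancy probability per query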

Implicit Geometric Regularization for Learning Shapes

It is observed that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level set surfaces, avoiding bad zero-loss solutions.
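
The loss described is concrete enough to sketch. Below is a hypothetical PyTorch version combining the vanish-on-input term with the unit-gradient (eikonal) term; the weight lam and the domain-sampling scheme are illustrative assumptions, and the full method also supports an optional normal-alignment term.

import torch

def igr_loss(model, surface_pts, domain_pts, lam=0.1):
    """Sketch of the implicit geometric regularization loss:
    `model` maps (N, 3) points to (N,) implicit values."""
    # Data term: f(x) should vanish on the input point cloud.
    data = model(surface_pts).abs().mean()

    # Eikonal term: ||grad f(x)|| should equal 1 at sampled domain points.
    domain_pts = domain_pts.clone().requires_grad_(True)
    f = model(domain_pts)
    (grad,) = torch.autograd.grad(f.sum(), domain_pts, create_graph=True)
    eikonal = ((grad.norm(dim=-1) - 1) ** 2).mean()

    return data + lam * eikonal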

A Point Set Generation Network for 3D Object Reconstruction from a Single Image

This paper addresses the problem of 3D reconstruction from a single image, generating an unorthodox yet straightforward form of output, a point set, and designs an architecture, loss function, and learning paradigm that are novel and effective, capable of predicting multiple plausible 3D point clouds from an input image.

3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction

The 3D-R2N2 reconstruction framework outperforms state-of-the-art methods for single-view reconstruction and enables 3D reconstruction of objects in situations where traditional SfM/SLAM methods fail (because of a lack of texture and/or a wide baseline).

Deep Geometric Prior for Surface Reconstruction

This work proposes the use of a deep neural network as a geometric prior for surface reconstruction, overfitting a neural network that represents a local chart parameterization to part of an input point cloud, using the Wasserstein distance as a measure of approximation.

Neural Splines: Fitting 3D Surfaces with Infinitely-Wide Neural Networks

This work presents Neural Splines, a technique for 3D surface reconstruction that is based on random feature kernels arising from infinitely-wide shallow ReLU networks, and argues that its formulation can be seen as a generalization of cubic spline interpolation to higher dimensions.
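
To illustrate the random-feature view, here is a small NumPy sketch: ridge regression in a random shallow-ReLU feature space, whose inner products approximate the corresponding infinite-width kernel as the feature count grows. Neural Splines works with that kernel directly; this finite-width approximation and all names and values below are illustrative assumptions.

import numpy as np

def relu_random_features(X, W, b):
    """Random ReLU features phi(x) = max(0, x @ W + b). As the width m
    grows, (1/m) phi(x) . phi(y) converges to the kernel of an
    infinitely wide shallow ReLU network."""
    return np.maximum(X @ W + b, 0.0)

def fit_random_feature_ridge(X, y, m=4096, lam=1e-6, seed=0):
    """Ridge regression in a random ReLU feature space: a finite-width
    stand-in for the kernel regression underlying Neural Splines."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], m))   # random input weights
    b = rng.standard_normal(m)                 # random biases
    Phi = relu_random_features(X, W, b) / np.sqrt(m)
    # Solve (Phi^T Phi + lam * I) w = Phi^T y.
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
    return lambda q: (relu_random_features(q, W, b) / np.sqrt(m)) @ w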
...