Learning Implicit Fields for Generative Shape Modeling

@inproceedings{Chen2018LearningIF,
  title={Learning Implicit Fields for Generative Shape Modeling},
  author={Zhiqin Chen and Hao Zhang},
  booktitle={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019},
  pages={5932-5941}
}
  • Zhiqin Chen, Hao Zhang
  • Published 6 December 2018
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder, called IM-NET, for shape generation, aimed at improving the visual quality of the generated shapes. An implicit field assigns a value to each point in 3D space, so that a shape can be extracted as an iso-surface. IM-NET is trained to perform this assignment by means of a binary classifier. Specifically, it takes a point coordinate, along with a feature vector encoding a shape, and outputs a value which indicates whether the point is outside the shape or not. By replacing conventional decoders by our implicit decoder for representation learning (via IM-AE) and shape generation (via IM-GAN), we demonstrate superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.
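
The decoder interface described in the abstract is compact enough to sketch. Below is a minimal, illustrative PyTorch version of an IM-NET-style implicit decoder; the layer widths, activation, and the placeholder labels in the loss line are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of an IM-NET-style implicit decoder (sizes are assumptions).
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    """Maps (xyz point, shape feature vector) -> inside/outside logit."""
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.LeakyReLU(0.02),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.02),
            nn.Linear(hidden, 1),  # logit: > 0 means "inside the shape"
        )

    def forward(self, points, feature):
        # points: (B, N, 3); feature: (B, feat_dim), broadcast to every point
        feat = feature.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.net(torch.cat([points, feat], dim=-1)).squeeze(-1)

decoder = ImplicitDecoder()
pts = torch.rand(4, 2048, 3) * 2 - 1       # query points in [-1, 1]^3
z = torch.randn(4, 128)                    # shape codes from an encoder
logits = decoder(pts, z)                   # (4, 2048) occupancy logits
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.randint(0, 2, (4, 2048)).float())  # placeholder labels
```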

Citations

Learning Manifold Patch-Based Representations of Man-Made Shapes

This work proposes a new representation that is usable in conventional CAD modeling pipelines and can also be learned by deep neural networks, and demonstrates the benefits of the representation by applying it to the task of sketch-based modeling.

Augmenting Implicit Neural Shape Representations with Explicit Deformation Fields

This paper proposes to pair the implicit representation of the shapes with an explicit, piecewise linear deformation field, learned as an auxiliary function, and demonstrates that, by regularizing these deformation fields, it can encourage the implicit neural representation to induce natural deformations in the learned shape space.

Weight-Encoded Neural Implicit 3D Shapes

  • Computer Science
  • 2020
It is established that weight-encoded neural implicits meet the criteria of a first-class 3D shape representation, and a suite of technical contributions is introduced to improve reconstruction accuracy, convergence, and robustness when learning the signed distance field induced by a polygonal mesh.
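
To make the "weights as representation" idea concrete, here is a hedged sketch of overfitting one small MLP to a single shape's signed distance field. `sample_sdf_batch` is a hypothetical stand-in for sampling (point, signed distance) pairs from an actual mesh, with a toy sphere SDF replacing a real mesh query.

```python
# Sketch: overfit one small MLP to one shape's SDF; the trained weights
# themselves become the shape's representation.
import torch
import torch.nn as nn

def sample_sdf_batch(n):
    # Placeholder: real code would sample points near the mesh surface and
    # query the mesh's signed distance (e.g. via trimesh or libigl).
    pts = torch.rand(n, 3) * 2 - 1
    sdf = pts.norm(dim=-1) - 0.5   # toy ground truth: sphere of radius 0.5
    return pts, sdf

mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for step in range(2000):
    pts, sdf = sample_sdf_batch(4096)
    pred = mlp(pts).squeeze(-1)
    loss = (pred - sdf).abs().mean()   # L1 regression to the SDF
    opt.zero_grad(); loss.backward(); opt.step()
# After training, mlp.state_dict() IS the shape: a few KB of weights.
```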

Learning Implicit Functions for Dense 3D Shape Correspondence of Generic Objects

  • Feng Liu, Xiaoming Liu
  • Computer Science
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2023
The objective of this paper is to learn dense 3D shape correspondence for topology-varying generic objects in an unsupervised manner by implementing dense correspondence through an inverse function mapping from the part embedding vector to a corresponded 3D point.
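
The inverse-function idea can be sketched as two coupled networks: one maps a surface point (conditioned on a shape code) to a part embedding, and an inverse network maps that embedding back to a 3D point on any shape. All names and sizes below are illustrative assumptions, not the paper's architecture.

```python
# Sketch: dense correspondence via a forward part-embedding net and its
# learned inverse, both conditioned on a per-shape latent code.
import torch
import torch.nn as nn

class PartEmbed(nn.Module):       # point -> part embedding
    def __init__(self, zdim=128, edim=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(3 + zdim, 256), nn.ReLU(),
                               nn.Linear(256, edim))
    def forward(self, p, z):
        return self.f(torch.cat([p, z.expand(p.shape[0], -1)], dim=-1))

class InverseMap(nn.Module):      # part embedding -> point
    def __init__(self, zdim=128, edim=64):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(edim + zdim, 256), nn.ReLU(),
                               nn.Linear(256, 3))
    def forward(self, e, z):
        return self.g(torch.cat([e, z.expand(e.shape[0], -1)], dim=-1))

embed, invert = PartEmbed(), InverseMap()
z_a, z_b = torch.randn(1, 128), torch.randn(1, 128)  # codes of two shapes
pts_a = torch.rand(1024, 3)                          # points on shape A
# Dense correspondence: embed points of A, decode under B's shape code.
pts_b = invert(embed(pts_a, z_a), z_b)               # corresponded points
```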

Learning to Generate 3D Shapes from a Single Example

This paper presents a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales, and builds the generator atop the tri-plane hybrid representation, which requires only 2D convolutions.
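
A hedged sketch of the tri-plane lookup under its usual convention: a 3D point is projected onto three axis-aligned feature planes, features are bilinearly sampled and summed, then decoded by a small MLP. Plane resolution and channel counts below are illustrative.

```python
# Sketch: tri-plane feature lookup followed by a small decoder MLP.
import torch
import torch.nn as nn
import torch.nn.functional as F

C, R = 32, 64                          # feature channels, plane resolution
planes = nn.Parameter(torch.randn(3, C, R, R) * 0.01)  # XY, XZ, YZ planes
mlp = nn.Sequential(nn.Linear(C, 64), nn.ReLU(), nn.Linear(64, 1))

def triplane_features(pts):            # pts: (N, 3) in [-1, 1]^3
    feats = 0.0
    for i, axes in enumerate([[0, 1], [0, 2], [1, 2]]):
        uv = pts[:, axes].view(1, -1, 1, 2)            # (1, N, 1, 2) grid
        f = F.grid_sample(planes[i:i+1], uv, align_corners=True)
        feats = feats + f.view(C, -1).t()              # (N, C), summed
    return feats

pts = torch.rand(4096, 3) * 2 - 1
occupancy_logits = mlp(triplane_features(pts))         # (4096, 1)
```

Because all learnable features live on 2D planes, the generator only needs 2D convolutions to produce them, which is the efficiency argument the summary refers to.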

SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation

SP-GAN is a new unsupervised sphere-guided generative model for direct synthesis of 3D shapes in the form of point clouds that incorporates a global prior to spatially guide the generative process and attaches a local prior to each sphere point to provide local details.
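
A simplified sketch of the sphere-guided idea: a fixed unit-sphere point set serves as the global structural prior, each sphere point carries a local latent, and a shared MLP displaces the sphere into a shape. This compresses SP-GAN's design considerably; sizes and names are assumptions.

```python
# Sketch: sphere prior + per-point local latents -> generated point cloud.
import torch
import torch.nn as nn

def unit_sphere(n):                    # crude uniform-ish sphere sampling
    v = torch.randn(n, 3)
    return v / v.norm(dim=-1, keepdim=True)

class SphereGenerator(nn.Module):
    def __init__(self, local_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + local_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 256), nn.ReLU(),
                                 nn.Linear(256, 3))
    def forward(self, sphere_pts, local_z):
        # Predict a per-point displacement of the sphere prior.
        return sphere_pts + self.mlp(torch.cat([sphere_pts, local_z], dim=-1))

gen = SphereGenerator()
pts = unit_sphere(2048)                # global prior: sphere geometry
z = torch.randn(2048, 64)              # local prior: per-point latents
fake_cloud = gen(pts, z)               # (2048, 3) synthesized points
```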

Semantics-guided Exploration of Latent Spaces for Shape Synthesis

An approach that incorporates user guidance into deep-network-based shape synthesis: a label regression neural network is coupled with a shape synthesis neural network, allowing users to explore the shape space using high-level semantic keywords.

Deep Implicit Templates for 3D Shape Representation

A Spatial Warping LSTM is proposed as part of a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations; it can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
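
The template-plus-warp decomposition can be sketched as a shared template implicit function queried at warped coordinates: points of different shapes that warp to the same template location are in correspondence. The paper applies its warping in stages via an LSTM; a single MLP stands in here for brevity.

```python
# Sketch: per-shape warp W(x, z) feeding a shared template SDF T.
import torch
import torch.nn as nn

class Warp(nn.Module):                 # per-shape warp W(x, z) -> x'
    def __init__(self, zdim=128):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(3 + zdim, 256), nn.ReLU(),
                               nn.Linear(256, 3))
    def forward(self, x, z):
        return x + self.f(torch.cat([x, z.expand(x.shape[0], -1)], dim=-1))

template = nn.Sequential(nn.Linear(3, 256), nn.ReLU(),
                         nn.Linear(256, 1))    # shared template SDF
warp = Warp()

x = torch.rand(1024, 3) * 2 - 1                # query points
z = torch.randn(1, 128)                        # one shape's latent code
sdf = template(warp(x, z))                     # SDF of that shape
# Correspondence falls out of the structure: points of two shapes that
# warp to the same template coordinate correspond to each other.
```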

Learning Implicit Functions for Topology-Varying Dense 3D Shape Correspondence

This paper implements dense correspondence through an inverse function mapping from the part embedding to a corresponded 3D point, which is assumed to be similar to its densely corresponded point in another 3D shape of the same object category.

Learning Category-level Shape Saliency via Deep Implicit Surface Networks

It is shown that, by leveraging the learned shape saliency, the method is able to reconstruct either category-salient or instance-specific parts of object surfaces; the semantic representativeness of the learned saliency is also reflected in its efficacy in guiding the selection of surface points for better point cloud classification.
...

References

Showing 1-10 of 52 references

SurfNet: Generating 3D Shape Surfaces Using Deep Residual Networks

This work develops a procedure to create a consistent shape surface representation for a category of 3D objects, and uses it for category-specific surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for geometry image generation.

Synthesizing 3D Shapes via Modeling Multi-view Depth Maps and Silhouettes with Deep Generative Networks

This work takes an alternative approach to the problem of learning generative models of 3D shapes: learning a generative model over multi-view depth maps or their corresponding silhouettes, and using a deterministic rendering function to produce 3D shapes from these images.

Learning Shape Priors for Single-View 3D Completion and Reconstruction

The proposed ShapeHD pushes the limit of single-view shape completion and reconstruction by integrating deep generative models with adversarially learned shape priors, penalizing the model only if its output is unrealistic, not if it deviates from the ground truth.

Learning Representations and Generative Models for 3D Point Clouds

A deep autoencoder network with state-of-the-art reconstruction quality and generalization ability is introduced, with results that outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations.
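
The "simple algebraic manipulations" claim is easy to illustrate: with a trained autoencoder, an edit becomes vector arithmetic in latent space. `encode` and `decode` below are toy stand-ins for a trained point-cloud autoencoder, and the edit-strength factor is an arbitrary choice.

```python
# Sketch: shape editing as latent-space vector arithmetic.
import torch

def encode(cloud):                # hypothetical trained encoder
    return cloud.mean(dim=0).repeat(43)[:128]    # toy 128-d stand-in

def decode(z):                    # hypothetical trained decoder
    return z[:3].repeat(2048, 1)                 # toy (2048, 3) stand-in

chair_arm, chair_plain, sofa = (torch.rand(2048, 3) for _ in range(3))
# "Add arms" direction = code(armchair) - code(plain chair); apply to sofa.
direction = encode(chair_arm) - encode(chair_plain)
edited = decode(encode(sofa) + 0.8 * direction)  # 0.8: edit strength
```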

AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation

A method is presented for learning to generate the surface of 3D shapes as a collection of parametric surface elements; in contrast to methods generating voxel grids or point clouds, it naturally infers a surface representation of the shape.
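
A minimal sketch of one AtlasNet-style surface element: an MLP maps a 2D parameter (u, v) plus a shape latent to a 3D point, so densely sampling the unit square sweeps out one surface patch; the full model runs several such MLPs in parallel. Sizes below are illustrative.

```python
# Sketch: one parametric surface element mapping (u, v, z) -> xyz.
import torch
import torch.nn as nn

class SurfacePatch(nn.Module):
    def __init__(self, zdim=128):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2 + zdim, 256), nn.Tanh(),
                               nn.Linear(256, 256), nn.Tanh(),
                               nn.Linear(256, 3))
    def forward(self, uv, z):
        return self.f(torch.cat([uv, z.expand(uv.shape[0], -1)], dim=-1))

patch = SurfacePatch()
uv = torch.rand(4096, 2)               # samples of the unit square
z = torch.randn(1, 128)                # shape code from an encoder
surface_pts = patch(uv, z)             # (4096, 3): points on one patch
# Because the map from (u, v) is continuous, the connectivity of a grid
# over the unit square transfers directly to a mesh over surface_pts.
```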

DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation

This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
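
One distinctive piece of DeepSDF worth sketching is its auto-decoder inference: the decoder is frozen and only a latent code is optimized to fit sparse or noisy SDF observations, which is what enables completion from partial input. The decoder below is an untrained stand-in; in practice it would be the trained network.

```python
# Sketch: auto-decoder inference, optimizing a latent code to fit
# partial SDF observations with the decoder frozen.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(256 + 3, 512), nn.ReLU(),
                        nn.Linear(512, 1))     # stand-in, untrained
for p in decoder.parameters():
    p.requires_grad_(False)                    # freeze the decoder

obs_pts = torch.rand(500, 3) * 2 - 1           # partial observations
obs_sdf = torch.zeros(500)                     # e.g. surface samples
z = torch.zeros(1, 256, requires_grad=True)    # latent to optimize
opt = torch.optim.Adam([z], lr=5e-3)
for step in range(300):
    pred = decoder(torch.cat([z.expand(500, -1), obs_pts], dim=-1))
    loss = (pred.squeeze(-1) - obs_sdf).abs().mean() + 1e-4 * z.pow(2).sum()
    opt.zero_grad(); loss.backward(); opt.step()
# z now encodes the completed shape; query the decoder densely and extract
# the zero iso-surface (e.g. with marching cubes).
```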

Deep Learning 3D Shape Surfaces Using Geometry Images

This work qualitatively and quantitatively validates that creating geometry images using authalic parametrization on a spherical domain is suitable for robust learning of 3D shape surfaces, and proposes a way to implicitly learn the topology and structure of 3D shapes using geometry images encoded with suitable features.

GRASS: Generative Recursive Autoencoders for Shape Structures

A novel neural network architecture for encoding and synthesis of 3D shapes, particularly their structures, is introduced, and it is demonstrated that, without supervision, the network learns meaningful structural hierarchies adhering to perceptual grouping principles, produces compact codes which enable applications such as shape classification and partial matching, and supports shape synthesis and interpolation with significant variations in topology and geometry.
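
The recursive encoding can be sketched as a shared merge network folding two child part codes into a parent code, applied bottom-up along a binary part hierarchy. The sketch omits GRASS's symmetry handling and box geometry; dimensions are illustrative.

```python
# Sketch: recursive bottom-up encoding of a binary part hierarchy.
import torch
import torch.nn as nn

merge = nn.Sequential(nn.Linear(2 * 80, 200), nn.Tanh(),
                      nn.Linear(200, 80))   # (child, child) -> parent code

def encode_tree(node):
    # node: either an (80,) leaf code tensor or a (left, right) tuple
    if isinstance(node, torch.Tensor):
        return node
    left, right = (encode_tree(c) for c in node)
    return merge(torch.cat([left, right], dim=-1))

leg, seat, back = (torch.randn(80) for _ in range(3))
root_code = encode_tree(((leg, seat), back))  # one code for the whole chair
```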

A Point Set Generation Network for 3D Object Reconstruction from a Single Image

This paper addresses the problem of 3D reconstruction from a single image, generating a straightforward but unorthodox form of output, a point set, and designs an architecture, loss function, and learning paradigm that are novel and effective, capable of predicting multiple plausible 3D point clouds from an input image.
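
Training such point set generators hinges on an order-invariant loss; the Chamfer distance, one of the losses this line of work uses, matches each point to its nearest neighbor in the other cloud in both directions. (The paper's variant uses squared distances; plain Euclidean distances are used below for brevity.)

```python
# Sketch: Chamfer distance between two point clouds, differentiable and
# invariant to point ordering.
import torch

def chamfer(a, b):
    # a: (N, 3) predicted points, b: (M, 3) ground-truth points
    d = torch.cdist(a, b)                  # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

pred = torch.rand(1024, 3, requires_grad=True)
gt = torch.rand(1024, 3)
loss = chamfer(pred, gt)
loss.backward()                            # gradients flow to pred
```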

Multi-view Convolutional Neural Networks for 3D Shape Recognition

This work presents a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and shows that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors.
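
The key mechanism is view pooling: per-view CNN features are aggregated with an element-wise max before the classifier, making the network invariant to view order. The backbone below is a toy CNN standing in for the ImageNet-pretrained network MVCNN actually uses; the class and view counts are illustrative.

```python
# Sketch: multi-view recognition with max-pooling over per-view features.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> (*, 16)
classifier = nn.Linear(16, 40)            # e.g. 40 ModelNet classes

views = torch.rand(12, 3, 224, 224)       # 12 rendered views of one shape
feats = backbone(views)                   # (12, 16) per-view features
pooled = feats.max(dim=0).values          # view pooling: max over views
logits = classifier(pooled)               # (40,) class scores
```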
...