PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations

Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Carsten Stoll, Christian Theobalt. European Conference on Computer Vision.
Implicit surface representations, such as signed-distance functions, combined with deep learning have led to impressive models which can represent detailed shapes of objects with arbitrary topology. Since a continuous function is learned, reconstructions can be extracted at arbitrary resolution. However, large datasets such as ShapeNet are required to train such models. In this paper, we present a new mid-level patch-based surface representation. At the level of patches, objects…

Deep Implicit Templates for 3D Shape Representation

A Spatial Warping LSTM is proposed: a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations and can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.

Fully Understanding Generic Objects: Modeling, Segmentation, and Reconstruction

This work shows that complete shape and albedo modeling makes it possible to leverage real 2D images in both modeling and model fitting; the effectiveness of this approach is demonstrated through superior 3D reconstruction from a single image, whether synthetic or real, and through shape segmentation.

A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation

This work introduces Articulated Signed Distance Functions (A-SDF) to represent articulated shapes with a disentangled latent space, where they have separate codes for encoding shape and articulation, and proposes a Test-Time Adaptation inference algorithm to adjust the model during inference.

Dynamic Surface Function Networks for Clothed Human Bodies

A novel method for temporal coherent reconstruction and tracking of clothed humans using a multi-layer perceptron (MLP) which is embedded into the canonical space of the SMPL body model and can be learned in a self-supervised fashion using the principle of analysis-by-synthesis and differentiable rasterization.

Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes

An efficient neural representation is introduced that enables real-time rendering of high-fidelity neural SDFs, while achieving state-of-the-art geometry reconstruction quality, and is 2–3 orders of magnitude more efficient in terms of rendering speed.

Neural Deformation Graphs for Globally-consistent Non-rigid Reconstruction

The approach does not assume sequential input data, thus enabling robust tracking of fast motions or even temporally disconnected recordings, and outperforms state-of-the-art non-rigid reconstruction approaches both qualitatively and quantitatively.

SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements

This work deforms surface elements based on a human body model, such that large-scale deformations caused by articulation are explicitly separated from topological changes and local clothing deformations, and addresses the limitations of existing neural surface elements by regressing local geometry from local features.

Towards Generalising Neural Implicit Representations

This work shows that training neural representations for reconstruction tasks alongside conventional tasks can produce more general encodings that admit reconstructions of equal quality to single-task training, while improving results on the conventional tasks compared to single-task encodings.

Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces

This paper trains a neural network to pull query 3D locations to their closest neighbors on the surface using the predicted signed distance values and the gradient at the query locations, both of which are computed by the network itself.
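The pull operation this describes can be sketched with an analytic sphere SDF standing in for the trained network (the sphere, the finite-difference gradient, and the example query are stand-in assumptions; Neural-Pull learns the network so that this update lands queries on the surface):

```python
import numpy as np

def sdf_sphere(q, radius=1.0):
    # Analytic signed distance to a sphere; stands in for the learned network.
    return np.linalg.norm(q) - radius

def sdf_gradient(q, eps=1e-5):
    # Finite-difference gradient of the SDF at the query location.
    grad = np.zeros(3)
    for i in range(3):
        d = np.zeros(3); d[i] = eps
        grad[i] = (sdf_sphere(q + d) - sdf_sphere(q - d)) / (2 * eps)
    return grad

def pull_to_surface(q):
    # Neural-Pull's update: move the query along the (normalized) gradient
    # by the predicted signed distance, toward the zero level set.
    s = sdf_sphere(q)
    g = sdf_gradient(q)
    return q - s * g / np.linalg.norm(g)

q = np.array([0.5, 0.5, 0.5])          # query inside the sphere
p = pull_to_surface(q)
print(np.linalg.norm(p))               # ≈ 1.0: pulled onto the surface
```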

Theoretical bounds on data requirements for the ray-based classification

A bound on the number of rays necessary for shape classification, defined by key angular metrics, is established for arbitrary convex shapes, and enables a different approach for estimating high-dimensional shapes using substantially fewer data elements than volumetric or surface-based approaches.

Local Deep Implicit Functions for 3D Shape

Local Deep Implicit Functions (LDIF), a 3D shape representation that decomposes space into a structured set of learned implicit functions, provides higher surface reconstruction accuracy than the state of the art (OccNet) while requiring fewer than 1% of the network parameters.

Learning to Infer Implicit Surfaces without 3D Supervision

A novel ray-based field probing technique for efficient image-to-field supervision, as well as a general geometric regularizer for implicit surfaces that provides natural shape priors in unconstrained regions, are proposed.

Implicit Surface Representations As Layers in Neural Networks

This work proposes a novel formulation that permits the use of implicit representations of curves and surfaces, of arbitrary topology, as individual layers in Neural Network architectures with end-to-end trainability, and proposes to represent the output as an oriented level set of a continuous and discretised embedding function.

DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation

This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
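The query interface this describes can be sketched as one decoder shared across shapes, conditioned on a per-shape latent code concatenated with the query coordinate (the layer sizes and random weights below are placeholder assumptions, not the paper's trained 8-layer network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small sizes for illustration; DeepSDF uses a 256-D latent.
LATENT_DIM, HIDDEN = 8, 32

# Randomly initialised weights stand in for a trained decoder.
W1 = rng.normal(size=(LATENT_DIM + 3, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, 1)) * 0.1
b2 = np.zeros(1)

def deepsdf_decoder(latent, xyz):
    # f(z, x) -> signed distance: the same network is queried per point,
    # with the latent code selecting which shape's SDF is evaluated.
    h = np.maximum(np.concatenate([latent, xyz]) @ W1 + b1, 0.0)  # ReLU
    return float(h @ W2 + b2)

z = rng.normal(size=LATENT_DIM)        # per-shape latent code
print(deepsdf_decoder(z, np.array([0.1, 0.2, 0.3])))
```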

SAL: Sign Agnostic Learning of Shapes From Raw Data

  • Matan Atzmon, Y. Lipman
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
This paper introduces Sign Agnostic Learning (SAL), a deep learning approach for learning implicit shape representations directly from raw, unsigned geometric data, such as point clouds and triangle soups, which the authors believe opens the door to many geometric deep learning applications with real-world data.
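The sign-agnostic idea can be illustrated in a few lines: regress the magnitude |f(x)| against the unsigned distance to the raw points, so a prediction of either sign fits equally well and a signed field can emerge from unsigned supervision (the toy point cloud and predictions below are assumptions for illustration):

```python
import numpy as np

def unsigned_distance(x, points):
    # Unsigned distance from a query to a raw point cloud (no normals, no sign).
    return np.min(np.linalg.norm(points - x, axis=1))

def sal_loss(pred_sdf, x, points):
    # Sign-agnostic regression: penalise |f(x)| against the unsigned
    # distance, leaving the sign of f unconstrained by the data term.
    return abs(abs(pred_sdf) - unsigned_distance(x, points))

points = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # toy "point cloud"
x = np.zeros(3)                                        # query at the origin
print(sal_loss(-1.0, x, points))  # 0.0: a negative (inside) prediction fits
print(sal_loss(0.5, x, points))   # 0.5
```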

Occupancy Networks: Learning 3D Reconstruction in Function Space

This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction methods that encodes a description of the 3D output at infinite resolution without excessive memory footprint, and validate that the representation can efficiently encode 3D structure and can be inferred from various kinds of input.
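The "infinite resolution" property follows from the representation being a continuous function: the same network can be sampled on a grid of any size before mesh extraction (e.g. with marching cubes, not shown). A sketch with a toy occupancy function standing in for the trained network:

```python
import numpy as np

def occupancy(x):
    # Toy occupancy function standing in for the trained network:
    # 1 inside a sphere of radius 0.5, 0 outside.
    return (np.linalg.norm(x, axis=-1) < 0.5).astype(float)

def evaluate_grid(resolution):
    # The continuous function can be sampled at any chosen resolution;
    # the memory cost is only paid at extraction time, not in the model.
    axis = np.linspace(-1, 1, resolution)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    return occupancy(grid)

coarse = evaluate_grid(16)
fine = evaluate_grid(64)
print(coarse.shape, fine.shape)  # (16, 16, 16) (64, 64, 64)
```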

Learning Shape Templates With Structured Implicit Functions

It is shown that structured implicit functions are suitable for learning and allow a network to smoothly and simultaneously fit multiple classes of shapes in a general shape template from data.

Learning Implicit Fields for Generative Shape Modeling

  • Zhiqin Chen, Hao Zhang
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
By replacing conventional decoders by the implicit decoder for representation learning and shape generation, this work demonstrates superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.

PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization

The proposed Pixel-aligned Implicit Function (PIFu), an implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object, achieves state-of-the-art performance on a public benchmark and outperforms the prior work for clothed human digitization from a single image.
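The pixel-alignment step amounts to projecting a 3D query point into the image and bilinearly sampling the 2D feature map at that location, then decoding the sampled feature together with the point's depth; the camera, feature map, and decoder below are stubs for illustration, not PIFu's actual components:

```python
import numpy as np

def bilinear_sample(feat, u, v):
    # Sample an H x W x C feature map at continuous pixel coords (u, v).
    H, W, _ = feat.shape
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, W - 1), min(v0 + 1, H - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * feat[v0, u0] + du * (1 - dv) * feat[v0, u1]
            + (1 - du) * dv * feat[v1, u0] + du * dv * feat[v1, u1])

def pifu_query(feat, project, mlp, x):
    # Project the 3D point, sample its pixel-aligned 2D feature, and feed
    # the feature plus the point's depth to an implicit decision function.
    u, v, z = project(x)
    return mlp(np.append(bilinear_sample(feat, u, v), z))

feat = np.arange(4.0).reshape(2, 2, 1)    # toy 2x2 feature map
project = lambda x: (x[0], x[1], x[2])    # stub orthographic camera
mlp = lambda h: float(h.sum() > 1.0)      # stub occupancy decision
print(pifu_query(feat, project, mlp, np.array([0.5, 0.5, 0.2])))  # 1.0
```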

A Papier-Mâché Approach to Learning 3D Surface Generation

This work introduces a method for learning to generate the surface of 3D shapes as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape.