ShapeFormer: Transformer-based Shape Completion via Sparse Representation

Xingguang Yan, Liqiang Lin, Niloy Jyoti Mitra, Dani Lischinski, Daniel Cohen-Or, and Hui Huang. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
We present ShapeFormer, a transformer-based network that produces a distribution of object completions, conditioned on incomplete, and possibly noisy, point clouds. The resultant distribution can then be sampled to generate likely completions, each exhibiting plausible shape details while being faithful to the input. To facilitate the use of transformers for 3D, we introduce a compact 3D representation, vector quantized deep implicit function (VQDIF), that utilizes spatial sparsity to represent… 
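The "VQ" in VQDIF refers to vector quantization: continuous local features are snapped to their nearest entries in a learned codebook, yielding discrete tokens a transformer can model. As a minimal numpy sketch of that nearest-neighbour lookup (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def vector_quantize(features, codebook):
    """Nearest-neighbour codebook lookup, the core step of vector quantization.

    features: (N, D) array of continuous feature vectors.
    codebook: (K, D) array of learned code embeddings.
    Returns (indices, quantized): the chosen code index per feature and
    the corresponding codebook rows.
    """
    # Squared Euclidean distance between every feature and every code.
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = d.argmin(axis=1)
    return indices, codebook[indices]

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
feats = np.array([[0.1, -0.1], [0.9, 1.2]])
idx, quant = vector_quantize(feats, codebook)  # each feature -> closest code
```

In a full VQ-VAE-style pipeline the codebook is trained jointly with the encoder; here it is fixed purely for illustration.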

3DILG: Irregular Latent Grids for 3D Generative Modeling

In the context of shape reconstruction from point clouds, the shape representation built on irregular grids improves upon grid-based methods in terms of reconstruction accuracy and promotes high-quality shape generation using auto-regressive probabilistic models.

AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation

This paper proposes an autoregressive prior for 3D shapes to solve multimodal 3D tasks such as shape completion, reconstruction, and generation, and shows that the proposed method outperforms specialized state-of-the-art methods trained for individual tasks.

Autoregressive 3D Shape Generation via Canonical Mapping

The key idea is to decompose point clouds of one category into semantically aligned sequences of shape compositions, via a learned canonical space, which can then be quantized and used to learn a context-rich composition codebook for point cloud generation.

PatchComplete: Learning Multi-Resolution Patch Priors for 3D Shape Completion on Unseen Categories

PatchComplete learns effective shape priors from multi-resolution local patches, which are often more general than full shapes and thus enable geometric reasoning about, and reconstruction of, entirely unseen categories at test time.

NeuralODF: Learning Omnidirectional Distance Fields for 3D Shape Representation

Experiments demonstrate that NeuralODF can capture high-quality shapes by overfitting to a single object and can also generalize across common shape categories; the paper also describes the core Jumping Cubes and recursive marching algorithms.

TSCom-Net: Coarse-to-Fine 3D Textured Shape Completion Network

TSCom-Net is a new neural network architecture for 3D body shape and high-resolution texture completion that reconstructs the full geometry from mid-level to high-level partial input scans and inpaints the missing parts of the partial 'texture atlas'.

Neural Wavelet-domain Diffusion for 3D Shape Generation

A compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets is proposed, enabling direct generative modeling on a continuous implicit representation in wavelet domain.
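The coarse/detail split underlying such a wavelet representation is easiest to see in one dimension. The paper uses multi-scale biorthogonal wavelets on TSDF volumes; the sketch below uses the simpler orthogonal Haar wavelet purely to illustrate the decomposition and its exact invertibility:

```python
import numpy as np

def haar_split(x):
    """One level of a 1D Haar transform: coarse averages + detail differences."""
    even, odd = x[0::2], x[1::2]
    coarse = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return coarse, detail

def haar_merge(coarse, detail):
    """Inverse of haar_split: perfectly reconstructs the original signal."""
    even = (coarse + detail) / np.sqrt(2.0)
    odd = (coarse - detail) / np.sqrt(2.0)
    out = np.empty(coarse.size * 2)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([4.0, 2.0, 1.0, 3.0])
c, d = haar_split(x)       # half-length coarse and detail coefficient arrays
x_rec = haar_merge(c, d)   # exact reconstruction of x
```

Applying the split separably along each axis of a coefficient volume, and recursing on the coarse part, gives the multi-scale pyramid that a generative model can then operate on.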

XDGAN: Multi-Modal 3D Shape Generation in 2D Space

This paper proposes XDGAN, an effective and fast method for applying 2D image GAN architectures to the generation of 3D object geometry combined with additional surface attributes, like color textures and normals, and shows both quantitatively and qualitatively that it is highly effective at various tasks such as 3D shape generation, single view reconstruction and shape manipulation.

ShapeCrafter: A Recursive Text-Conditioned 3D Shape Generation Model

The method builds upon vector-quantized deep implicit functions that generate a distribution of high-quality shapes; it supports shape editing and extrapolation, and can enable new applications in human–machine collaboration for creative design.

VQ-DcTr: Vector-Quantized Autoencoder With Dual-channel Transformer Points Splitting for 3D Point Cloud Completion

Existing point cloud completion methods mainly utilize the global shape representation to recover the missing regions of the 3D shape from the partial point cloud. However, these methods learn the…

Local Implicit Grid Representations for 3D Scenes

This paper introduces Local Implicit Grid Representations, a new 3D shape representation designed for scalability and generality and demonstrates the value of this proposed approach for 3D surface reconstruction from sparse point observations, showing significantly better results than alternative approaches.
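In local implicit grid approaches, a query point is decoded against a latent code interpolated from nearby grid cells rather than a single global code. A minimal 2D bilinear version of that latent lookup (the paper's setting is 3D, with trilinear interpolation; names here are illustrative):

```python
import numpy as np

def query_latent_grid(grid, p):
    """Bilinearly interpolate per-vertex latent codes at a continuous 2D point.

    grid: (H, W, D) array of D-dimensional latent vectors stored at grid vertices.
    p:    (x, y) query point with 0 <= x <= W-1 and 0 <= y <= H-1.
    """
    x, y = p
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, grid.shape[1] - 1)
    y1 = min(y0 + 1, grid.shape[0] - 1)
    tx, ty = x - x0, y - y0
    top = (1 - tx) * grid[y0, x0] + tx * grid[y0, x1]
    bot = (1 - tx) * grid[y1, x0] + tx * grid[y1, x1]
    return (1 - ty) * top + ty * bot

grid = np.zeros((2, 2, 1))   # 2x2 grid of 1-D latent codes
grid[0, 1, 0] = 1.0
z = query_latent_grid(grid, (0.5, 0.0))  # halfway between codes 0.0 and 1.0
```

The interpolated latent `z` would then be fed, together with the local coordinate, into a small shared decoder that predicts occupancy or signed distance.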

Multimodal Shape Completion via IMLE

This work proposes a novel multimodal shape completion technique that effectively learns a one-to-many mapping and generates diverse complete shapes, and shows that the method is superior to alternatives in terms of the completeness and diversity of the shapes.

Local Deep Implicit Functions for 3D Shape

This work introduces Local Deep Implicit Functions (LDIF), a 3D shape representation that decomposes space into a structured set of learned implicit functions, providing higher surface reconstruction accuracy than the state of the art (OccNet) while requiring fewer than 1% of the network parameters.

DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation

This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
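The DeepSDF idea in code form: a single shared MLP, conditioned on a per-shape latent code, maps any 3D point to a signed distance, and the surface is the zero level set. The toy network below is untrained and its layer sizes are arbitrary; it is a sketch of the architecture's data flow, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def deepsdf_forward(latent, xyz, weights):
    """DeepSDF-style decoder: MLP(latent ++ point) -> signed distance.

    latent: per-shape code vector; xyz: a single 3D query point;
    weights: list of (W, b) pairs, ReLU hidden layers, tanh output.
    """
    h = np.concatenate([latent, xyz])           # condition on the shape code
    for W, b in weights[:-1]:
        h = np.maximum(W @ h + b, 0.0)          # ReLU hidden layers
    W, b = weights[-1]
    return np.tanh(W @ h + b)[0]                # scalar SDF value in (-1, 1)

# Toy sizes: 8-D latent code plus xyz, two hidden layers of width 16.
sizes = [8 + 3, 16, 16, 1]
weights = [(rng.normal(0.0, 0.1, (o, i)), np.zeros(o))
           for i, o in zip(sizes[:-1], sizes[1:])]
sdf = deepsdf_forward(rng.normal(size=8), np.array([0.1, 0.2, 0.3]), weights)
```

At training time the latent codes are optimized jointly with the network ("auto-decoding"); at test time a new partial observation is completed by optimizing only its latent code.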

PCN: Point Completion Network

The experiments show that PCN produces dense, complete point clouds with realistic structures in the missing regions on inputs with various levels of incompleteness and noise, including cars from LiDAR scans in the KITTI dataset.
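PCN and most point cloud completion follow-ups are evaluated with Chamfer distance between predicted and ground-truth clouds. A minimal numpy version (conventions vary across papers, e.g. squared vs. unsquared distances; this uses the symmetric mean-of-squared form):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point clouds.

    For each point, find the squared distance to its nearest neighbour
    in the other cloud; average per direction and sum both directions.
    a: (Na, 3) array, b: (Nb, 3) array.
    """
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # (Na, Nb) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0]])
cd = chamfer_distance(a, b)   # small: b is a slight perturbation of a
```

The O(Na·Nb) pairwise matrix is fine for evaluation-sized clouds; real training pipelines use batched GPU kernels or KD-trees for the nearest-neighbour step.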

Deformable Shape Completion with Graph Convolutional Autoencoders

This work proposes a novel learning-based method for the completion of partial shapes using a variational autoencoder with graph convolutional operations that learns a latent space for complete realistic shapes that best fits the generated shape to the known partial input.

Multiresolution Deep Implicit Functions for 3D Shape Representation

Multiresolution Deep Implicit Functions (MDIF) is introduced, a hierarchical representation that can recover fine geometry detail, while being able to perform global operations such as shape completion and support both encoder-decoder inference and decoder-only latent optimization.

3D Shape Generation and Completion through Point-Voxel Diffusion

Point-Voxel Diffusion is a unified, probabilistic formulation for unconditional shape generation and conditional, multi-modal shape completion that marries denoising diffusion models with the hybrid, point-voxel representation of 3D shapes.
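The "voxel" half of such a hybrid representation starts from a simple occupancy rasterization of the point set. A minimal sketch, assuming points normalized to the unit cube (the resolution and normalization convention here are illustrative, not the paper's):

```python
import numpy as np

def voxelize(points, resolution=4):
    """Occupancy voxelization of a point cloud in the unit cube [0, 1)^3.

    points: (N, 3) array with coordinates in [0, 1].
    Returns a (R, R, R) boolean grid marking cells containing any point.
    """
    idx = np.clip((points * resolution).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

pts = np.array([[0.1, 0.1, 0.1],    # these two land in the same cell
                [0.12, 0.1, 0.11],
                [0.9, 0.9, 0.9]])
grid = voxelize(pts)   # 2 occupied cells out of 64
```

Point features can then be scattered into, and gathered back from, this grid, letting convolutional layers operate on the volume while the point set preserves fine geometry.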

Unsupervised 3D Shape Completion through GAN Inversion

The proposed ShapeInversion uses a GAN pre-trained on complete shapes, searching for a latent code whose generated complete shape best reconstructs the given partial input, and is thus able to incorporate the rich prior captured in a well-trained generative model.