Joint Learning of 3D Shape Retrieval and Deformation

@article{Uy2021JointLO,
  title={Joint Learning of 3D Shape Retrieval and Deformation},
  author={Mikaela Angelina Uy and Vladimir G. Kim and Minhyuk Sung and Noam Aigerman and Siddhartha Chaudhuri and Leonidas J. Guibas},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={11708-11717}
}
We propose a novel technique for producing high-quality 3D models that match a given target object image or scan. Our method is based on retrieving an existing shape from a database of 3D models and then deforming its parts to match the target shape. Unlike previous approaches that independently focus on either shape retrieval or deformation, we propose a joint learning procedure that simultaneously trains the neural deformation module along with the embedding space used by the retrieval module… 
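
As a concrete illustration of the coupling, here is a minimal PyTorch sketch in which a single post-deformation fitting loss both supervises a toy deformation module and pulls the retrieval embedding distance toward the achievable fit. The encoder, the per-part scale/translation deformer, and the exact loss coupling are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

def chamfer(a, b):
    """Symmetric Chamfer distance between point clouds a: (B,N,3) and b: (B,M,3)."""
    d = torch.cdist(a, b)                                    # (B, N, M) pairwise distances
    return d.min(2).values.mean(1) + d.min(1).values.mean(1)

def apply_deform(pts, params, n_parts=4):
    """Scale/translate contiguous chunks of points as stand-in 'parts'."""
    B, N, _ = pts.shape
    params = params.view(B, n_parts, 6)
    chunks = pts.view(B, n_parts, N // n_parts, 3)
    scale = 1.0 + params[..., :3].unsqueeze(2)               # per-part anisotropic scale
    offset = params[..., 3:].unsqueeze(2)                    # per-part translation
    return (chunks * scale + offset).view(B, N, 3)

class Encoder(nn.Module):
    """Embeds a point cloud for retrieval; also conditions the deformer."""
    def __init__(self, n_pts=1024, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_pts * 3, 256), nn.ReLU(),
                                 nn.Linear(256, dim))
    def forward(self, pts):
        return self.net(pts.flatten(1))

class Deformer(nn.Module):
    """Predicts per-part deformation parameters from source/target embeddings."""
    def __init__(self, dim=128, n_parts=4):
        super().__init__()
        self.net = nn.Linear(2 * dim, n_parts * 6)
    def forward(self, src_e, tgt_e):
        return self.net(torch.cat([src_e, tgt_e], dim=-1))

def joint_step(enc, dfm, src, tgt, opt):
    src_e, tgt_e = enc(src), enc(tgt)
    fit = chamfer(apply_deform(src, dfm(src_e, tgt_e)), tgt)   # post-deformation fit
    emb_d = (src_e - tgt_e).pow(2).sum(-1)
    retrieval = (emb_d - fit.detach()).pow(2).mean()           # embedding predicts fit
    loss = fit.mean() + retrieval
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```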

ANISE: Assembly-based Neural Implicit Surface rEconstruction

We present ANISE, a method that reconstructs a 3D shape from partial observations (images or sparse point clouds) using a part-aware neural implicit shape representation. It is formulated as an assembly of neural implicit functions, each representing a different part instance.

Structure-Aware 3D VR Sketch to 3D Shape Retrieval

This work proposes a triplet loss with an adaptive margin driven by a 'fitting gap', defined as the similarity of two shapes under structure-preserving deformations, to evaluate structural similarity in 3D shape retrieval.
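
A hedged sketch of such a loss, assuming the fitting gap for each triplet has been precomputed; the scaling factor is an illustrative knob, not the paper's exact formulation:

```python
import torch

def adaptive_margin_triplet(anchor, pos, neg, fitting_gap, scale=1.0):
    """anchor/pos/neg: (B, D) embeddings; fitting_gap: (B,) non-negative gaps."""
    d_pos = (anchor - pos).pow(2).sum(-1)        # distance to structurally similar shape
    d_neg = (anchor - neg).pow(2).sum(-1)        # distance to dissimilar shape
    margin = scale * fitting_gap                 # larger gap demands larger separation
    return torch.relu(d_pos - d_neg + margin).mean()
```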

Reconstructing editable prismatic CAD from rounded voxel models

This work introduces a novel neural network architecture that approximates a smoothed signed distance function with an editable, constrained, prismatic CAD model, outputting highly editable constrained parametric sketches compatible with existing CAD software.

PatchRD: Detail-Preserving Shape Completion by Learning Patch Retrieval and Deformation

A data-driven shape completion approach that fills in the geometric details of missing regions of 3D shapes by copying and deforming patches from the partial input, preserving the style of local geometric features.
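
A toy sketch of the copy step, assuming learned patch features are given; the retrieval here is plain nearest-neighbor feature matching, and the paper's learned deformation and blending stages are elided:

```python
import torch

def retrieve_patches(missing_feats, source_feats, source_patches):
    """missing_feats: (M, D) features of incomplete regions; source_feats: (S, D);
    source_patches: (S, P, 3) patch geometry from the observed partial input."""
    d = torch.cdist(missing_feats, source_feats)   # (M, S) feature distances
    idx = d.argmin(dim=1)                          # closest observed patch per region
    return source_patches[idx]                     # copies, to be deformed and blended
```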

Accurate Instance-Level CAD Model Retrieval in a Large-Scale Database

Evaluation on a real-world dataset shows that geometry-based re-ranking is a conceptually simple but highly effective method that can significantly improve retrieval accuracy over the state of the art.
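
A minimal sketch of that two-stage scheme: a fast embedding search proposes candidates, then an explicit geometric distance re-orders them. The Chamfer distance and the value of k are illustrative assumptions:

```python
import torch

def chamfer(a, b):
    """Symmetric Chamfer distance between point clouds a: (N,3), b: (M,3)."""
    d = torch.cdist(a, b)
    return d.min(1).values.mean() + d.min(0).values.mean()

def retrieve_and_rerank(query_pts, query_emb, db_embs, db_pts, k=10):
    cand = (db_embs @ query_emb).topk(k).indices                     # coarse retrieval
    scores = torch.stack([chamfer(query_pts, db_pts[i]) for i in cand])
    return cand[scores.argsort()]                                    # best fit first
```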

Neural Template: Topology-aware Reconstruction and Disentangled Generation of 3D Meshes

A novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology is introduced, which learns a topology-aware neural template specific to each input and then deforms the template to reconstruct a detailed mesh while preserving the learned topology.

SPAGHETTI: Editing Implicit Shapes Through Part Aware Generation

The architecture allows implicit shapes to be manipulated by transforming, interpolating, and combining shape segments, without requiring explicit part supervision, enabling a generative framework with part-level control.

NeuralMLS: Geometry-Aware Control Point Deformation

NeuralMLS is introduced, a space-based deformation technique, guided by a set of displaced control points, that leverages the power of neural networks to inject the underlying shape geometry into the deformation parameters and exploits the innate smoothness of neural networks to enable realistic and intuitive shape deformation.
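
For reference, the analytic moving-least-squares-style baseline that the network-predicted weights replace; the inverse-distance weights here are a common textbook choice, not the paper's learned ones:

```python
import torch

def mls_deform(points, ctrl, ctrl_moved, alpha=2.0, eps=1e-8):
    """points: (N,3) query points; ctrl/ctrl_moved: (K,3) control points before/after."""
    d = torch.cdist(points, ctrl) + eps           # (N, K) distances to control points
    w = d.pow(-alpha)                             # inverse-power falloff with distance
    w = w / w.sum(dim=1, keepdim=True)            # normalize into a partition of unity
    return points + w @ (ctrl_moved - ctrl)       # blend control-point displacements
```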

Intuitive Shape Editing in Latent Space

This autoencoder-based method enables intuitive shape editing in latent space by disentangling latent sub-spaces into style variables and control points on the surface that can be manipulated independently.
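
A compact sketch of the idea, assuming a toy autoencoder whose latent code is split by construction into style variables and control-point coordinates; the paper's actual disentanglement training is elided:

```python
import torch
import torch.nn as nn

class DisentangledAE(nn.Module):
    """Latent code = style variables + explicit surface control points."""
    def __init__(self, n_pts=1024, n_ctrl=8, style_dim=64):
        super().__init__()
        self.n_pts, self.n_ctrl, self.style_dim = n_pts, n_ctrl, style_dim
        self.enc = nn.Sequential(nn.Linear(n_pts * 3, 256), nn.ReLU(),
                                 nn.Linear(256, style_dim + n_ctrl * 3))
        self.dec = nn.Sequential(nn.Linear(style_dim + n_ctrl * 3, 256), nn.ReLU(),
                                 nn.Linear(256, n_pts * 3))

    def encode(self, pts):                                   # pts: (B, n_pts, 3)
        z = self.enc(pts.flatten(1))
        return z[:, :self.style_dim], z[:, self.style_dim:].view(-1, self.n_ctrl, 3)

    def decode(self, style, ctrl):
        return self.dec(torch.cat([style, ctrl.flatten(1)], dim=1)).view(-1, self.n_pts, 3)

# Editing: drag one control point, hold style fixed, re-decode.
with torch.no_grad():
    ae = DisentangledAE()
    style, ctrl = ae.encode(torch.rand(1, 1024, 3))
    ctrl[:, 0, 2] += 0.1                   # lift the first control point
    edited = ae.decode(style, ctrl)        # geometry follows the handle, style preserved
```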

References

Deformation-Aware 3D Model Embedding and Retrieval

This work introduces a new problem of retrieving 3D models that are deformable to a given query shape and presents a novel deep deformation-aware embedding to solve this retrieval task and proposes two strategies for training the embedding network.

DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image

The free-form deformation (FFD) layer is a powerful new building block for deep learning models that manipulate 3D data; DeformNet combines this FFD layer with shape retrieval for smooth, detail-preserving 3D reconstruction, producing qualitatively plausible point clouds from a single query image.

Learning Semantic Deformation Flows with 3D Convolutional Networks

This work introduces an end-to-end solution to shape deformation using a volumetric convolutional neural network (CNN) that learns deformation flows in 3D, achieving results comparable to state-of-the-art methods when applied to CAD models.

Joint Embedding of 3D Scan and CAD Objects

A new 3D-CNN-based approach learns a joint embedding space representing object similarities across the scan and CAD domains, and a new dataset of ranked scan-CAD similarity annotations enables fine-grained evaluation of CAD model retrieval against cluttered, noisy, partial scans.

Joint embeddings of shapes and images via CNN image purification

A joint embedding space populated by both 3D shapes and 2D images of objects, where distances between embedded entities reflect similarity between the underlying objects; this facilitates comparison between entities of either form and allows cross-modality retrieval.
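
Retrieval in such a space reduces to nearest neighbors across modalities; a minimal sketch with cosine similarity, assuming both encoders have already produced embeddings:

```python
import torch

def cross_modal_retrieve(img_emb, shape_embs, k=5):
    """img_emb: (D,) image embedding; shape_embs: (N, D) shape embeddings."""
    img_emb = img_emb / img_emb.norm()
    shape_embs = shape_embs / shape_embs.norm(dim=1, keepdim=True)
    sim = shape_embs @ img_emb              # cosine similarity in the joint space
    return sim.topk(k).indices              # indices of the k most similar shapes
```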

Learning Free-Form Deformations for 3D Object Reconstruction

This paper proposes a method to learn free-form deformations (FFD) for the task of 3D reconstruction from a single image and achieves state-of-the-art results on point-cloud and volumetric metrics.
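
The operation the network drives is classic trivariate Bernstein FFD: every point is re-expressed in the Bernstein basis of a control lattice, so displacing lattice points deforms the shape. Below is the standard textbook formulation, not the paper's network:

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd(points, lattice):
    """points: (N,3) with coordinates normalized to [0,1]^3;
    lattice: (l+1, m+1, n+1, 3) displaced control points."""
    l, m, n = (s - 1 for s in lattice.shape[:3])
    out = np.zeros_like(points)
    for p, (s, t, u) in enumerate(points):
        for i in range(l + 1):
            for j in range(m + 1):
                for k in range(n + 1):
                    w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                    out[p] += w * lattice[i, j, k]
    return out
```

With an undisplaced lattice whose control points sit at (i/l, j/m, k/n), this map reduces to the identity; learning then amounts to predicting the lattice displacements.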

Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis

A data-driven approach to complete partial 3D shapes through a combination of volumetric deep neural networks and 3D shape synthesis, built around a 3D-Encoder-Predictor Network (3D-EPN) composed of 3D convolutional layers.
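
A tiny volumetric encoder-predictor in the same spirit, mapping a partial 32³ occupancy/distance grid to a completed one; the channel counts and depth are illustrative, far smaller than the paper's 3D-EPN:

```python
import torch
import torch.nn as nn

class TinyEPN(nn.Module):
    """Toy 3D encoder-predictor: partial volume in, completed volume out."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),    # 32^3 -> 16^3
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU())   # 16^3 -> 8^3
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1))      # back to 32^3

    def forward(self, vox):                 # vox: (B, 1, 32, 32, 32)
        return self.dec(self.enc(vox))      # predicted completed volume
```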

Pixel2Mesh++: Multi-View 3D Mesh Generation via Deformation

This model learns to predict a series of deformations that iteratively improve a coarse shape, and exhibits generalization across different semantic categories, numbers of input images, and qualities of mesh initialization.

DeformSyncNet: Deformation Transfer via Synchronized Shape Deformation Spaces

DeformSyncNet is a new approach that allows consistent and synchronized shape deformations without requiring explicit correspondence information, and achieves this by encoding deformations into a class-specific idealized latent space while decoding them into an individual, model-specific linear deformation action space, operating directly in 3D.
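
In the linear-action reading of this design, transferring a deformation is two matrix products: encode shape A's vertex offsets into the shared class code, then decode in shape B's own action space. The matrices below are hypothetical placeholders for the learned, model-specific maps:

```python
import torch

def transfer_deformation(delta_a, enc_a, action_b):
    """delta_a: (Na*3,) vertex offsets on shape A; enc_a: (D, Na*3) encoder for A;
    action_b: (Nb*3, D) shape B's linear deformation action space."""
    z = enc_a @ delta_a        # class-specific idealized deformation code
    return action_b @ z        # the same deformation realized on shape B
```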

Neural Cages for Detail-Preserving 3D Deformations

The method extends a traditional cage-based deformation technique, where the source shape is enclosed by a coarse control mesh termed cage, and translations prescribed on the cage vertices are interpolated to any point on the source mesh via special weight functions.
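
The core identity is that each surface point inherits an interpolated cage-vertex translation through fixed weights; a crude sketch with normalized inverse-distance weights standing in for proper mean-value-style coordinates (and for the cage the network predicts):

```python
import torch

def cage_weights(points, cage, eps=1e-8):
    """points: (N,3) surface points; cage: (C,3) coarse control-mesh vertices."""
    w = 1.0 / (torch.cdist(points, cage) + eps)   # placeholder weight functions
    return w / w.sum(dim=1, keepdim=True)         # partition of unity per point

def deform_with_cage(points, cage_src, cage_tgt):
    w = cage_weights(points, cage_src)            # weights bound to the source cage
    return points + w @ (cage_tgt - cage_src)     # interpolate prescribed translations
```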
...