Corpus ID: 232075990

Disentangling Geometric Deformation Spaces in Generative Latent Shape Models

@article{AumentadoArmstrong2021DisentanglingGD,
  title={Disentangling Geometric Deformation Spaces in Generative Latent Shape Models},
  author={Tristan Aumentado-Armstrong and Stavros Tsogkas and Sven J. Dickinson and Allan D. Jepson},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.00142}
}
A complete representation of 3D objects requires characterizing the space of deformations in an interpretable manner, from articulations of a single instance to changes in shape across categories. In this work, we improve on a prior generative model of geometric disentanglement for 3D shapes, wherein the space of object geometry is factorized into rigid orientation, non-rigid pose, and intrinsic shape. The resulting model can be trained from raw 3D shapes, without correspondences, labels, or… 
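The factorization described in the abstract can be pictured as an encoder whose output is split into separate sub-codes for rigid orientation, non-rigid pose, and intrinsic shape. Below is a minimal sketch of such a partitioned latent space, assuming a PointNet-style point-cloud encoder in PyTorch; the module, dimensions, and pooling choice are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a geometrically disentangled latent space:
# the encoder output is split into rotation, pose, and intrinsic-shape codes.
# Architecture and dimensions are illustrative assumptions only.
import torch
import torch.nn as nn

class DisentangledShapeEncoder(nn.Module):
    def __init__(self, rot_dim=6, pose_dim=12, shape_dim=32):
        super().__init__()
        # Simple PointNet-style per-point feature extractor over an (N, 3) cloud.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 256),
        )
        self.dims = (rot_dim, pose_dim, shape_dim)
        self.head = nn.Linear(256, sum(self.dims))

    def forward(self, points):                 # points: (B, N, 3)
        feats = self.point_mlp(points)         # (B, N, 256)
        global_feat = feats.max(dim=1).values  # permutation-invariant pooling
        z = self.head(global_feat)
        # Partition the latent code into the three geometric factors.
        z_rot, z_pose, z_shape = torch.split(z, list(self.dims), dim=-1)
        return z_rot, z_pose, z_shape

# Usage: swap factors between two shapes to transfer pose while keeping identity.
enc = DisentangledShapeEncoder()
a, b = torch.randn(1, 1024, 3), torch.randn(1, 1024, 3)
rot_a, pose_a, shape_a = enc(a)
rot_b, pose_b, shape_b = enc(b)
z_transfer = torch.cat([rot_a, pose_b, shape_a], dim=-1)  # decode with a separate generator
```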
Neural Human Deformation Transfer
TLDR
This work proposes a neural encoder-decoder architecture in which only identity information is encoded and the decoder is conditioned on the pose; pose-independent representations, such as isometry-invariant shape characteristics, are used to represent identity features.

References

Showing 1-10 of 90 references
Geometric Disentanglement for Generative Latent Shape Models
TLDR
This paper proposes an unsupervised approach to partitioning the latent space of a variational autoencoder for 3D point clouds using only geometric information, building on prior work that uses generative adversarial models of point sets.
Unsupervised Shape and Pose Disentanglement for 3D Meshes
TLDR
A combination of self-consistency and cross-consistency constraints is used to learn pose and shape spaces from registered meshes, and as-rigid-as-possible (ARAP) deformation is incorporated into the training loop to avoid degenerate solutions (a simplified swap-style consistency loss is sketched after this reference list).
3D-CODED: 3D Correspondences by Deep Deformation
TLDR
This work presents a new deep learning approach for matching deformable shapes by introducing Shape Deformation Networks, which jointly encode 3D shapes and correspondences, and shows that the method is robust to many types of perturbations and generalizes to non-human shapes.
Automatic unpaired shape deformation transfer
TLDR
This work proposes a novel approach to automatic deformation transfer between two unpaired shape sets without correspondences, and shows that this fully automatic method obtains high-quality deformation-transfer results on unpaired data sets, comparable to or better than existing methods that require strict correspondences.
CubeNet: Equivariance to 3D Rotation and Translation
TLDR
A Group Convolutional Neural Network with linear equivariance to translations and right-angle rotations in three dimensions is introduced, and is believed to be the first 3D rotation-equivariant CNN for voxel representations.
Latent feature disentanglement for 3D meshes
TLDR
This paper introduces a supervised generative 3D mesh model that disentangles the latent shape representation into independent generative factors, and shows that learning an explicitly disentangled representation can both improve random shape generation and successfully address downstream tasks such as pose and shape transfer, shape-invariant temporal synchronization, and pose-invariant shape matching.
Learning Representations and Generative Models for 3D Point Clouds
TLDR
A deep autoencoder network with state-of-the-art reconstruction quality and generalization ability is introduced, with results that outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations in the latent space.
Functional Characterization of Intrinsic and Extrinsic Geometry
TLDR
A novel way to capture and characterize distortion between pairs of shapes is introduced by extending the recently proposed framework of shape differences built on functional maps; it is demonstrated that a set of four operators is complete, capturing intrinsic and extrinsic structure and fully encoding a shape up to rigid motion in both discrete and continuous settings.
Towards 3D Rotation Invariant Embeddings
Obtaining a rotation-invariant embedding of a shape is very useful in many tasks, such as shape comparison, classification, and segmentation. In this work, we create a neural network architecture that…
Endowing Deep 3D Models With Rotation Invariance Based On Principal Component Analysis
TLDR
This paper proposes to endow deep 3D models with rotation invariance by expressing the coordinates in an intrinsic frame determined by the object shape itself; the coordinates expressed in all intrinsic frames are adopted as inputs to obtain multiple output features, which are aggregated into a final feature via a self-attention module (a minimal PCA-frame alignment is also sketched below).
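As a companion to the swap-based constraints mentioned in "Unsupervised Shape and Pose Disentanglement for 3D Meshes" above, here is a minimal sketch of a swap-and-reconstruct consistency loss, assuming hypothetical `encode` and `decode` functions that split a registered mesh into shape and pose codes; this illustrates the general idea, not that paper's exact training objective.

```python
# Illustrative swap-style consistency loss for pose/shape disentanglement.
# `encode` returns (shape_code, pose_code); `decode` maps codes back to
# vertex positions. Both are assumed to exist; this is not the paper's code.
import torch

def swap_consistency_loss(encode, decode, mesh_a, mesh_b):
    """mesh_a and mesh_b are registered meshes of the *same identity*
    in different poses, each a (V, 3) vertex tensor."""
    shape_a, pose_a = encode(mesh_a)
    shape_b, pose_b = encode(mesh_b)
    # Swapping pose codes between two meshes of one subject should still
    # reconstruct the corresponding target meshes.
    recon_a = decode(shape_b, pose_a)   # identity from b, pose from a
    recon_b = decode(shape_a, pose_b)   # identity from a, pose from b
    return ((recon_a - mesh_a) ** 2).mean() + ((recon_b - mesh_b) ** 2).mean()
```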
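The PCA-based intrinsic frame in the last reference can be illustrated directly: center the point cloud, diagonalize its covariance, and express the coordinates along the principal axes so that global rotations of the input cancel out. The sign-disambiguation heuristic below is an assumption for illustration, not the paper's exact procedure.

```python
# Minimal sketch of expressing coordinates in a PCA-derived intrinsic frame,
# making the representation invariant to global rotations of the input.
import numpy as np

def to_intrinsic_frame(points):
    """points: (N, 3) array. Returns coordinates in the frame spanned by the
    principal axes of the centered point cloud."""
    centered = points - points.mean(axis=0, keepdims=True)
    # Principal axes = eigenvectors of the 3x3 covariance matrix.
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    axes = eigvecs[:, ::-1]                  # reorder major -> minor axis
    aligned = centered @ axes
    # Each axis is defined only up to sign. A simple heuristic (an assumption
    # here) is to orient every axis toward the direction of positive skew.
    signs = np.sign(np.sum(aligned ** 3, axis=0))
    signs[signs == 0] = 1.0
    return aligned * signs

# Rotating the input leaves the intrinsic-frame coordinates unchanged,
# up to the residual axis-sign ambiguity handled by the heuristic above.
```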