Self-Supervised Contrastive Representation Learning for 3D Mesh Segmentation

@article{Haque2022SelfSupervisedCR,
  title={Self-Supervised Contrastive Representation Learning for 3D Mesh Segmentation},
  author={Ayaan Haque and Hankyu Moon and Heng Hao and Sima Didari and Jae Oh Woo and Patrick D. Bangert},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.04278}
}
3D deep learning is a growing field of interest due to the vast amount of information stored in 3D formats. Triangular meshes are an efficient representation for irregular, non-uniform 3D objects. However, meshes are often challenging to annotate due to their high geometrical complexity. Specifically, creating segmentation masks for meshes is tedious and time-consuming. Therefore, it is desirable to train segmentation networks with limited labeled data. Self-supervised learning (SSL), a form of…
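As a rough illustration of the recipe the abstract hints at (contrastive pretraining of a mesh-feature encoder on unlabeled meshes, then fine-tuning a segmentation head on a small labeled subset), the sketch below uses a toy per-vertex encoder, a Gaussian-jitter augmentation, and random stand-in data. These choices are placeholder assumptions for illustration, not the architecture or augmentations used in the paper.

```python
# Illustrative two-stage pipeline only: contrastive pretraining of a per-vertex
# encoder followed by fine-tuning a segmentation head on limited labels.
# Encoder, augmentation, and dimensions are assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VertexEncoder(nn.Module):
    """Toy per-vertex encoder: maps raw vertex features to an embedding."""
    def __init__(self, in_dim=6, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )
    def forward(self, x):            # x: (batch, vertices, in_dim)
        return self.net(x)

def jitter(x, scale=0.01):
    """Placeholder augmentation: Gaussian jitter of vertex features."""
    return x + scale * torch.randn_like(x)

def contrastive_loss(z1, z2, tau=0.1):
    """Per-vertex InfoNCE: the same vertex across the two views is the positive."""
    z1 = F.normalize(z1.flatten(0, 1), dim=-1)     # (B*V, D)
    z2 = F.normalize(z2.flatten(0, 1), dim=-1)
    logits = z1 @ z2.t() / tau                     # similarity of every pair
    targets = torch.arange(z1.size(0))             # positive = matching vertex index
    return F.cross_entropy(logits, targets)

encoder = VertexEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Stage 1: self-supervised pretraining on unlabeled meshes (random stand-in data).
unlabeled = torch.randn(8, 256, 6)                 # 8 meshes, 256 vertices, 6 features
for _ in range(10):
    z1, z2 = encoder(jitter(unlabeled)), encoder(jitter(unlabeled))
    loss = contrastive_loss(z1, z2)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune a segmentation head with very few labeled meshes.
head = nn.Linear(64, 4)                            # 4 segmentation classes (assumed)
labeled = torch.randn(2, 256, 6)                   # limited labeled data
labels = torch.randint(0, 4, (2, 256))
opt_ft = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(10):
    logits = head(encoder(labeled))                # (2, 256, 4)
    loss = F.cross_entropy(logits.reshape(-1, 4), labels.reshape(-1))
    opt_ft.zero_grad(); loss.backward(); opt_ft.step()
```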


References

Showing 1-10 of 21 references

Deep Learning based 3D Segmentation: A Survey

A comprehensive survey of recent progress in deep learning based 3D segmentation, covering over 150 papers, is provided; it summarizes the most commonly used pipelines, discusses their highlights and shortcomings, and analyzes the competitive results of these segmentation methods.

NeMo: Neural Mesh Models of Contrastive Features for Robust 3D Pose Estimation

This work proposes to integrate deep neural networks with 3D generative representations of objects into a unified neural architecture that is termed NeMo, which learns a generative model of neural feature activations at each vertex on a dense 3D mesh.

Convolutional neural networks on surfaces via seamless toric covers

This paper presents a method for applying deep learning to sphere-type shapes using a global seamless parameterization to a planar flat-torus, for which the convolution operator is well defined and the standard deep learning framework can be readily applied for learning semantic, high-level properties of the shape.

Contrastive Representation Learning for Hand Shape Estimation

This work presents improvements in monocular hand shape estimation by building on top of recent advances in unsupervised learning, extending momentum contrastive learning with a structured set of data augmentations.

Unsupervised Shape and Pose Disentanglement for 3D Meshes

A combination of self-consistency and cross-consistency constraints is used to learn pose and shape spaces from registered meshes, and as-rigid-as-possible (ARAP) deformation is incorporated into the training loop to avoid degenerate solutions.

MeshCNN: a network with an edge

This paper utilizes the unique properties of the mesh for a direct analysis of 3D shapes using MeshCNN, a convolutional neural network designed specifically for triangular meshes, and demonstrates its effectiveness on various learning tasks applied to 3D meshes.

Unsupervised Pre-Training of Image Features on Non-Curated Data

This work proposes a new unsupervised approach which leverages self-supervision and clustering to capture complementary statistics from large-scale data, and validates the approach on 96 million images from YFCC100M, achieving state-of-the-art results among unsupervised methods on standard benchmarks.

Unsupervised Learning of Visual Features by Contrasting Cluster Assignments

This paper proposes an online algorithm, SwAV, that takes advantage of contrastive methods without requiring pairwise comparisons to be computed; it uses a swapped prediction mechanism in which the cluster assignment of one view is predicted from the representation of another view.
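A minimal sketch of the swapped prediction idea is given below, assuming a PyTorch setting with random stand-in features, a few Sinkhorn-Knopp iterations to compute the codes, and an illustrative prototype count; the hyperparameters are placeholders, not SwAV's published settings.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn(scores, eps=0.05, iters=3):
    """Compute soft cluster assignments (codes) with a few Sinkhorn-Knopp iterations."""
    Q = torch.exp(scores / eps).t()               # (K prototypes, B samples)
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(iters):
        Q /= Q.sum(dim=1, keepdim=True); Q /= K   # normalize over samples
        Q /= Q.sum(dim=0, keepdim=True); Q /= B   # normalize over prototypes
    return (Q * B).t()                            # (B, K), each row is a code

def swav_loss(z1, z2, prototypes, temp=0.1):
    """Swapped prediction: the code of one view is predicted from the other view."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    c = F.normalize(prototypes, dim=1)
    s1, s2 = z1 @ c.t(), z2 @ c.t()               # prototype scores for each view
    q1, q2 = sinkhorn(s1), sinkhorn(s2)           # codes, no pairwise comparisons needed
    p1, p2 = F.log_softmax(s1 / temp, dim=1), F.log_softmax(s2 / temp, dim=1)
    return -0.5 * ((q2 * p1).sum(dim=1).mean() + (q1 * p2).sum(dim=1).mean())

# Random stand-in features; in SwAV the prototypes are learned jointly with the encoder.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)   # two augmented views, batch of 32
prototypes = torch.randn(64, 128)                      # 64 prototype vectors (assumed)
loss = swav_loss(z1, z2, prototypes)
```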

A Simple Framework for Contrastive Learning of Visual Representations

It is shown that the composition of data augmentations plays a critical role in defining effective predictive tasks, that introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and that contrastive learning benefits from larger batch sizes and more training steps than supervised learning.
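The ingredients listed above (paired augmented views, a projection output, and a temperature-scaled contrastive loss over the batch) correspond to the NT-Xent objective; a minimal sketch follows, with batch size, embedding width, and temperature chosen arbitrarily for illustration.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over a batch of paired views (2N embeddings in total)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / tau                                # temperature-scaled cosine similarities
    n = z1.size(0)
    # Mask self-similarity so a sample never counts itself as a candidate.
    sim.fill_diagonal_(float('-inf'))
    # The positive for sample i is its other augmented view at index i+n (or i-n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage with random stand-in projections; in SimCLR these come from a projection
# head g(f(x)) applied to two augmentations of the same image batch.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = nt_xent(z1, z2)
```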

An Empirical Study of Training Self-Supervised Vision Transformers

This work investigates the effects of several fundamental components for training self-supervised Vision Transformers (ViT) and reveals that apparently good results can in fact be partial failures, which improve when training is made more stable.