Motion Annotation Programs: A Scalable Approach to Annotating Kinematic Articulations in Large 3D Shape Collections

@article{Xu2020MotionAP,
  title={Motion Annotation Programs: A Scalable Approach to Annotating Kinematic Articulations in Large 3D Shape Collections},
  author={Xianghao Xu and David Charatan and Sonia Raychaudhuri and Hanxiao Jiang and Mae Heitmann and Vladimir G. Kim and Siddhartha Chaudhuri and Manolis Savva and Angel X. Chang and Daniel Ritchie},
  journal={2020 International Conference on 3D Vision (3DV)},
  year={2020},
  pages={613-622}
}
  • Published 1 November 2020
  • Computer Science
3D models of real-world objects are essential for many applications, including the creation of virtual environments for AI training. To mimic real-world objects in these applications, objects must be annotated with their kinematic mobilities. Annotating kinematic motions is time-consuming, and it is not well-suited to typical crowdsourcing workflows due to the significant domain expertise required. In this paper, we present a system that helps individual expert users rapidly annotate kinematic… 
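
The abstract leaves the exact form of these annotation programs open, so the following is only a minimal sketch of the general idea: a short, reusable rule that selects parts by name and attaches kinematic motion parameters to them. The `Part` and `Motion` classes, their field names, and the glob-based selection are illustrative assumptions, not the paper's actual programming model.

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import fnmatch

@dataclass
class Motion:
    kind: str                           # "rotational" or "translational" (assumed labels)
    axis: Tuple[float, float, float]    # unit direction of the motion axis
    origin: Tuple[float, float, float]  # a point the axis passes through
    limits: Tuple[float, float]         # (min, max) angle or translation range

@dataclass
class Part:
    name: str
    motion: Optional[Motion] = None

def annotate(parts, pattern, motion):
    """One annotation rule: attach `motion` to every part whose
    name matches the glob `pattern`."""
    for part in parts:
        if fnmatch.fnmatch(part.name, pattern):
            part.motion = motion

# Example: all doors of a cabinet swing about a vertical hinge.
cabinet = [Part("body"), Part("door_left"), Part("door_right")]
annotate(cabinet, "door_*",
         Motion("rotational", (0.0, 0.0, 1.0), (0.5, 0.0, 0.0), (0.0, 1.57)))
for p in cabinet:
    print(p.name, p.motion)
```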

Unsupervised Kinematic Motion Detection for Part-segmented 3D Shape Collections

This paper presents an unsupervised approach for discovering articulated motions in a part-segmented 3D shape collection based on a concept the authors call category closure, and operationalizes this concept with an algorithm that optimizes a shape’s part motion parameters such that it can transform into other shapes in the collection.
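
As a rough illustration of this closure idea, the sketch below searches for a hinge angle that makes one shape's point cloud match another shape in the collection. The Chamfer distance objective, the z-axis hinge, and the simple grid search are stand-ins assumed for clarity; the paper's actual parameterization and optimizer may differ.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between (N,3) and (M,3) point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def rotate_part(points, mask, angle):
    """Rotate the masked subset of points about the z-axis by `angle`."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    out = points.copy()
    out[mask] = points[mask] @ R.T
    return out

def fit_motion(src, mask, dst, angles=np.linspace(0, np.pi, 64)):
    """Pick the hinge angle that best transforms `src` into `dst`."""
    scores = [chamfer(rotate_part(src, mask, a), dst) for a in angles]
    return angles[int(np.argmin(scores))]

# Demo: a "door" opened by pi/4 should be recovered (up to grid resolution).
src = np.random.default_rng(1).normal(size=(200, 3))
mask = src[:, 0] > 0                 # treat half the points as the moving part
dst = rotate_part(src, mask, np.pi / 4)
print(fit_motion(src, mask, dst))    # close to 0.785
```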

Learning to Infer Kinematic Hierarchies for Novel Object Instances

This work presents a novel perception system that infers the moving parts of an object and the kinematic couplings that relate them, and uses a graph neural network to predict the existence, direction, and type of edges that relate the inferred parts.
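
A minimal sketch of such edge prediction, under simplifying assumptions: instead of full message passing, the toy module below scores every ordered pair of part feature vectors with an MLP, producing logits over hypothetical joint types (none/hinge/slider). The feature dimensions and the type set are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class EdgePredictor(nn.Module):
    def __init__(self, part_dim=32, num_types=3):  # e.g. none/hinge/slider
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * part_dim, 64), nn.ReLU(),
            nn.Linear(64, num_types))

    def forward(self, parts):                       # parts: (P, part_dim)
        P = parts.shape[0]
        src = parts[:, None, :].expand(P, P, -1)    # parent candidate
        dst = parts[None, :, :].expand(P, P, -1)    # child candidate
        return self.mlp(torch.cat([src, dst], -1))  # (P, P, num_types) logits

logits = EdgePredictor()(torch.randn(5, 32))  # scores for all 5x5 ordered pairs
print(logits.shape)                           # torch.Size([5, 5, 3])
```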

Where2Act: From Pixels to Actions for Articulated 3D Objects

This paper proposes a learning-from-interaction framework with an online data sampling strategy that allows the network to be trained in simulation (SAPIEN) and to generalize across categories. It also proposes, discusses, and evaluates novel network architectures that, given image and depth data, predict the set of actions possible at each pixel and the regions over articulated parts that are likely to move under an applied force.
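
To make the per-pixel prediction concrete, here is a toy fully-convolutional head in the spirit of that description: given a 4-channel RGB-D image, it outputs a score per pixel for each of K assumed action primitives. The architecture and the number of primitives are illustrative placeholders, not the paper's network.

```python
import torch
import torch.nn as nn

class ActionabilityHead(nn.Module):
    def __init__(self, num_actions=6):   # six primitives assumed for the demo
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_actions, 1))    # K per-pixel action logits

    def forward(self, rgbd):                  # rgbd: (B, 4, H, W)
        return torch.sigmoid(self.net(rgbd))  # (B, K, H, W) scores in [0, 1]

scores = ActionabilityHead()(torch.randn(1, 4, 64, 64))
print(scores.shape)   # torch.Size([1, 6, 64, 64])
```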

OPD: Single-view 3D Openable Part Detection

OPDRCNN, a neural architecture that detects openable parts and predicts their motion parameters, is designed; it outperforms baselines and prior work, especially for RGB image inputs.

References


A scalable active framework for region annotation in 3D shape collections

This work proposes a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations, and demonstrates that incorporating verification of all produced labelings within a unified objective improves both the accuracy and the efficiency of the active learning procedure.
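
As a generic sketch of an active loop with verification, the code below ranks unlabeled items by predictive entropy and routes each either to cheap yes/no verification or to full annotation. The uncertainty measure, routing threshold, and oracle interfaces are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def entropy(p):
    p = np.clip(np.asarray(p, dtype=float), 1e-9, 1.0)
    return float(-(p * np.log(p)).sum())

def active_loop(pool, predict, annotate, verify, budget, tau=0.5):
    """pool: list of item ids; predict(x) -> class probabilities;
    annotate(x) -> true label (expensive); verify(x, label) -> bool (cheap)."""
    labels = {}
    # Process the most uncertain items first, within the labeling budget.
    ranked = sorted(pool, key=lambda x: entropy(predict(x)), reverse=True)
    for x in ranked[:budget]:
        guess = int(np.argmax(predict(x)))
        if entropy(predict(x)) < tau and verify(x, guess):
            labels[x] = guess        # confident prediction, verified cheaply
        else:
            labels[x] = annotate(x)  # uncertain: pay for a full annotation
    return labels

# Toy usage with stub oracles:
demo = active_loop(
    pool=[0, 1, 2], budget=2,
    predict=lambda x: [0.9, 0.1] if x else [0.5, 0.5],
    annotate=lambda x: 0,
    verify=lambda x, y: True)
print(demo)   # {0: 0, 1: 0}
```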

A Robust 3D-2D Interactive Tool for Scene Segmentation and Annotation

This paper builds a robust annotation tool that effectively and conveniently enables segmentation and annotation of massive 3D data; it works by coupling 2D and 3D information in an interactive framework through which users can provide high-level semantic annotations for objects.

Strong supervision from weak annotation: Interactive training of deformable part models

It is demonstrated that the proposed framework for large-scale learning and annotation of structured models can efficiently and robustly train part and pose detectors on CUB Birds-200, a challenging dataset of birds in unconstrained pose and environment.

ShapeNet: An Information-Rich 3D Model Repository

ShapeNet is a collection of datasets containing 3D models from a multitude of semantic categories, organized under the WordNet taxonomy and providing many semantic annotations for each 3D model, such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, and keywords, as well as other planned annotations.

Shape2Motion: Joint Analysis of Motion Parts and Attributes From 3D Shapes

Shape2Motion comprises two deep neural networks designed for mobility proposal generation and mobility optimization, respectively; it takes a single 3D point cloud as input and jointly computes a mobility-oriented segmentation and the associated motion attributes.
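
A bare-bones sketch of the first (proposal) stage under simplifying assumptions: a tiny per-point network that emits part-membership logits and a coarse motion-axis direction for each point of the cloud. The real system uses far richer point features and a separate optimization stage, both omitted here.

```python
import torch
import torch.nn as nn

class ProposalNet(nn.Module):
    """Per-point part-membership logits plus a coarse motion axis per point."""
    def __init__(self, k_parts=4):   # number of candidate parts is assumed
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, k_parts + 3))

    def forward(self, pts):                        # pts: (N, 3)
        out = self.mlp(pts)
        seg_logits, axis = out[:, :-3], out[:, -3:]
        return seg_logits, nn.functional.normalize(axis, dim=-1)

pts = torch.randn(1024, 3)
seg, axis = ProposalNet()(pts)
print(seg.shape, axis.shape)   # torch.Size([1024, 4]) torch.Size([1024, 3])
```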

Best of both worlds: Human-machine collaboration for object annotation

This paper empirically validates the effectiveness of a human-in-the-loop labeling approach on the ILSVRC2014 object detection dataset, seamlessly integrating multiple computer vision models with multiple sources of human input in a Markov Decision Process.

Mobility Fitting using 4D RANSAC

This work presents an algorithm that robustly computes the joints representing the dynamics of a scanned articulated object by bypassing the reconstruction of the underlying surface geometry and directly solving for the motion joints.
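
The core geometric step can be illustrated compactly: given one part observed in two poses with point correspondences, recover the rigid motion with the Kabsch algorithm and read the hinge axis and angle off the rotation. The method's outer RANSAC loop over the 4D (space + time) scan is omitted in this sketch.

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares R, t with B ~= A @ R.T + t (A, B are (N, 3))."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return R, cb - R @ ca

def hinge_axis(R):
    """Rotation axis (eigenvector for eigenvalue 1) and angle of R."""
    w, v = np.linalg.eig(R)
    axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))
    return axis / np.linalg.norm(axis), angle

# Demo: rotate points 30 degrees about z and recover the axis and angle.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
theta = np.deg2rad(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
B = A @ Rz.T + np.array([0.1, 0.0, 0.0])
R, t = rigid_transform(A, B)
print(hinge_axis(R))   # axis ~ (0, 0, +/-1), angle ~ 0.5236
```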

Deep part induction from articulated object pairs

This paper explores how observing different articulation states provides evidence for the part structure and motion of 3D objects, and presents a neural network architecture with three modules that respectively propose correspondences, estimate 3D deformation flows, and perform segmentation.

Mobility‐Trees for Indoor Scenes Manipulation

This work analyzes the recurrence of objects in a scene and automatically detects their functional mobilities, introducing the 'mobility-tree' construct for high-level functional representation of complex 3D indoor scenes.
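
A minimal sketch of what a mobility-tree node might hold, going by the summary above: each node pairs a scene object (or part) with its detected functional mobility, and children move relative to their parent. The field names and mobility labels are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class MobilityNode:
    name: str
    mobility: Optional[str] = None          # e.g. "hinge", "slider"; None = static
    children: List["MobilityNode"] = field(default_factory=list)

    def add(self, child: "MobilityNode") -> "MobilityNode":
        self.children.append(child)
        return child

# Example: a desk whose drawer slides and whose cabinet door swings.
desk = MobilityNode("desk")
desk.add(MobilityNode("drawer", mobility="slider"))
desk.add(MobilityNode("cabinet_door", mobility="hinge"))
print(desk)
```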

3D Semantic Parsing of Large-Scale Indoor Spaces

This paper argues that the identification of structural elements in indoor spaces is essentially a detection problem, rather than the segmentation commonly used, and proposes a method for semantically parsing the 3D point cloud of an entire building using a hierarchical approach.