ABC: A Big CAD Model Dataset for Geometric Deep Learning

@inproceedings{koch2019abc,
  title={ABC: A Big CAD Model Dataset for Geometric Deep Learning},
  author={Sebastian Koch and Albert Matveev and Zhongshi Jiang and Francis Williams and Alexey Artemov and Evgeny Burnaev and Marc Alexa and Denis Zorin and Daniele Panozzo},
  booktitle={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}
We introduce ABC-Dataset, a collection of one million Computer-Aided Design (CAD) models for research on geometric deep learning methods and applications. Each model is a collection of explicitly parametrized curves and surfaces, providing ground truth for differential quantities, patch segmentation, geometric feature detection, and shape reconstruction. Sampling the parametric descriptions of surfaces and curves allows generating data in different formats and resolutions, enabling fair…
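The sampling idea in the abstract can be illustrated with a toy example: evaluating one tensor-product Bézier patch on parameter grids of different density yields point clouds of different resolutions from the same exact parametric description. This is a minimal sketch, not ABC's actual pipeline; the control net and resolutions below are illustrative.

```python
import numpy as np

def bezier_patch(control, resolution):
    """Sample a bicubic Bezier patch on a regular (resolution x resolution)
    grid of its (u, v) parameter domain; returns a (resolution**2, 3) cloud."""
    # Cubic Bernstein basis evaluated at `resolution` parameter values.
    t = np.linspace(0.0, 1.0, resolution)
    basis = np.stack([(1 - t) ** 3,
                      3 * t * (1 - t) ** 2,
                      3 * t ** 2 * (1 - t),
                      t ** 3], axis=1)                 # (resolution, 4)
    # Tensor-product evaluation: sum_ij B_i(u) B_j(v) P_ij.
    points = np.einsum('ua,vb,abk->uvk', basis, basis, control)
    return points.reshape(-1, 3)

# A 4x4 control net for a gently curved patch.
ctrl = np.zeros((4, 4, 3))
ctrl[..., 0], ctrl[..., 1] = np.meshgrid(np.arange(4.0), np.arange(4.0),
                                         indexing='ij')
ctrl[..., 2] = np.array([[0, 1, 1, 0]] * 4)            # bump along v

coarse = bezier_patch(ctrl, 8)     # (64, 3): low-resolution sampling
dense = bezier_patch(ctrl, 64)     # (4096, 3): high-resolution sampling
```

Both clouds come from the same surface, so differential quantities (e.g. exact normals) remain available at every sample, which is what makes parametric ground truth attractive for benchmarking.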

DEF: Deep Estimation of Sharp Geometric Features in 3D Shapes

This work proposes Deep Estimators of Features (DEFs), a learning-based framework for predicting sharp geometric features in sampled 3D shapes; by fusing the results of individual patches, it can process large 3D models that existing data-driven methods cannot handle due to their size and complexity.

Deep Learning Assisted Optimization for 3D Reconstruction from Single 2D Line Drawings

This paper proposes to train deep neural networks to detect pairwise relationships among geometric entities in the 3D object and to predict initial depth values of the vertices, leveraging deep learning within a geometric constraint solving pipeline.

ParSeNet: A Parametric Surface Fitting Network for 3D Point Clouds

A novel, end-to-end trainable, deep network called ParSeNet is proposed that decomposes a 3D point cloud into parametric surface patches, including B-spline patches as well as basic geometric primitives, and allows us to represent surfaces with higher fidelity.

BRepNet: A topological message passing system for solid models

BRepNet, a neural network architecture designed to operate directly on B-rep data structures, avoiding the need to approximate the model as meshes or point clouds, is introduced.

Points2Surf: Learning Implicit Surfaces from Point Clouds

Points2Surf is presented, a novel patch-based learning framework that produces accurate surfaces directly from raw scans without normals at the cost of longer computation times and a slight increase in small-scale topological noise in some cases.

Fusion 360 Gallery: A Dataset and Environment for Programmatic CAD Reconstruction

This paper provides a dataset of 8,625 designs, comprising sequential sketch and extrude modeling operations, together with a complementary environment called the Fusion 360 Gym, to assist with performing CAD reconstruction and outlines a standard CAD reconstruction task.

DeepCAD: A Deep Generative Network for Computer-Aided Design Models

This work presents the first 3D generative model for a drastically different shape representation, describing a shape as a sequence of computer-aided design (CAD) operations, and proposes a CAD generative network based on the Transformer.

PVDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in 3D

A new dedicated dataset, CC3D, containing over 50k pairs of CAD models and their corresponding 3D meshes is introduced and used to learn a convolutional autoencoder on point clouds sampled from the 3D scan and CAD model pairs.

UV-Net: Learning from Curve-Networks and Solids

A unified representation for parametric curve-networks and solids is proposed by exploiting the u- and uv-parameter domains of curves and surfaces, respectively, to model the geometry, and an adjacency graph to explicitly model the topology.

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
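The permutation invariance PointNet relies on can be sketched in a few lines: applying the same map to every point independently and pooling with a symmetric function (max) makes the global feature independent of point order. The random weights below stand in for a trained shared MLP; this is an illustration of the principle, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared per-point map: one random linear layer + ReLU applied to each
# point independently (a stand-in for PointNet's trained shared MLP).
W = rng.normal(size=(3, 16))

def global_feature(points):
    """points: (N, 3) -> (16,) global descriptor, invariant to point order."""
    per_point = np.maximum(points @ W, 0.0)   # same map for every point
    return per_point.max(axis=0)              # symmetric max pooling

cloud = rng.normal(size=(128, 3))
shuffled = cloud[rng.permutation(len(cloud))]  # same set, different order
```

Because max pooling ignores ordering, `global_feature(cloud)` and `global_feature(shuffled)` are identical; downstream classification or segmentation heads then consume this order-free descriptor.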

Deep Learning for Robust Normal Estimation in Unstructured Point Clouds

The resulting normal estimation method outperforms the state of the art most of the time in robustness to outliers, noise, and point-density variation in the presence of sharp edges, while remaining fast and scaling up to millions of points.

Convolutional neural networks on surfaces via seamless toric covers

This paper presents a method for applying deep learning to sphere-type shapes using a global seamless parameterization to a planar flat-torus, for which the convolution operator is well defined and the standard deep learning framework can be readily applied for learning semantic, high-level properties of the shape.

A benchmark for surface reconstruction

A benchmark for the evaluation and comparison of algorithms which reconstruct a surface from point cloud data is presented and a simple pipeline for measuring surface reconstruction algorithms is proposed, consisting of three main phases: surface modeling, sampling, and evaluation.

ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes

This work introduces ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations, and shows that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks.

3D ShapeNets: A deep representation for volumetric shapes

This work proposes to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network, and shows that this 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
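The voxel-grid input such volumetric methods consume can be sketched as a binary occupancy grid over a normalized bounding box. The 30³ resolution and the [-1, 1]³ domain below are illustrative assumptions; 3D ShapeNets itself learns a probability distribution over such grids rather than this hard binarization.

```python
import numpy as np

def voxelize(points, grid=30):
    """Map a point cloud in [-1, 1]^3 to a (grid, grid, grid) binary
    occupancy grid: a cell is True iff at least one point falls in it."""
    occ = np.zeros((grid,) * 3, dtype=bool)
    # Scale [-1, 1] -> [0, grid) and clip boundary points into range.
    idx = np.clip(((points + 1.0) * 0.5 * grid).astype(int), 0, grid - 1)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return occ

pts = np.random.default_rng(1).uniform(-1, 1, size=(500, 3))
grid = voxelize(pts)
```

The fixed-resolution grid is what makes 3D convolutions applicable, at the cost of cubic memory growth with resolution.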

ShapeNet: An Information-Rich 3D Model Repository

ShapeNet contains 3D models from a multitude of semantic categories organized under the WordNet taxonomy; it is a collection of datasets providing many semantic annotations for each 3D model, such as consistent rigid alignments, parts, bilateral symmetry planes, physical sizes, and keywords, as well as other planned annotations.

PCPNet Learning Local Shape Properties from Raw Point Clouds

The utility of the PCPNET approach in the context of shape reconstruction is demonstrated, by showing how it can be used to extract normal orientation information from point clouds.
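For contrast with learned estimators like PCPNet, the classical PCA baseline (not PCPNet's method) fits each point's k-nearest-neighbor patch and takes its smallest-variance direction as the normal, up to sign. The brute-force neighbor search and the flat test plane below are illustrative choices.

```python
import numpy as np

def pca_normals(points, k=16):
    """Estimate a unit normal per point as the smallest-variance direction
    of its k-nearest-neighbor patch (sign ambiguous without orientation)."""
    # Brute-force pairwise squared distances; fine for small clouds.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, :k]          # includes the point itself
    normals = np.empty_like(points)
    for i, nbrs in enumerate(knn):
        patch = points[nbrs] - points[nbrs].mean(axis=0)
        # Right singular vector of the smallest singular value.
        _, _, vt = np.linalg.svd(patch, full_matrices=False)
        normals[i] = vt[-1]
    return normals

# Noisy samples of the plane z = 0: normals should align with +/- z.
rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, size=(200, 3))
pts[:, 2] = rng.normal(scale=1e-3, size=200)
n = pca_normals(pts)
```

This baseline degrades near sharp edges and under anisotropic noise, which is exactly the regime where learned approaches such as PCPNet aim to improve.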

FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis

This work proposes a novel graph-convolution operator to establish correspondences between filter weights and graph neighborhoods with arbitrary connectivity, and obtains excellent experimental results that significantly improve over previous state-of-the-art shape correspondence results.

Dense Human Body Correspondences Using Convolutional Networks

This work uses a deep convolutional neural network to train a feature descriptor on depth map pixels, but crucially, rather than training the network to solve the shape correspondence problem directly, it trains it to solve a body region classification problem, modified to increase the smoothness of the learned descriptors near region boundaries.