Corpus ID: 219792402

UV-Net: Learning from Curve-Networks and Solids

Authors: Pradeep Kumar Jayaraman, Aditya Sanghi, J. Lambourne, T. Davies, Hooman Shayani, Nigel Morris
Parametric curves, surfaces and boundary representations are the basis for 2D vector graphics and 3D industrial designs. Despite their prevalence, there exists limited research on applying modern deep neural networks directly to such representations. The unique challenges in working with such representations arise from the combination of continuous non-Euclidean geometry domain and discrete topology, as well as a lack of labeled datasets, benchmarks and baseline models. In this paper, we… 


BRepNet: A topological message passing system for solid models
Introduces BRepNet, a neural network architecture that operates directly on B-rep data structures, avoiding the need to approximate the model as meshes or point clouds.
DeepCAD: A Deep Generative Network for Computer-Aided Design Models
Presents the first 3D generative model for a drastically different shape representation, describing a shape as a sequence of computer-aided design (CAD) operations, and proposes a CAD generative network based on the Transformer.
Im2Vec: Synthesizing Vector Graphics without Vector Supervision
Proposes a new neural network that can generate complex vector graphics with varying topologies, requiring only indirect supervision from readily available raster training images (i.e., with no vector counterparts).
Fusion 360 Gallery: A Dataset and Environment for Programmatic CAD Reconstruction
Provides a dataset of 8,625 designs comprising sequential sketch-and-extrude modeling operations, together with a complementary environment called the Fusion 360 Gym to assist with CAD reconstruction, and outlines a standard CAD reconstruction task.
SketchGen: Generating Constrained CAD Sketches
Proposes SketchGen, a generative model based on a transformer architecture that addresses the heterogeneity problem through a carefully designed sequential language for primitives and constraints. The language distinguishes between different primitive or constraint types and their parameters, while encouraging the model to reuse information across related parameters, encoding shared structure.
Fusion 360 Gallery: A Dataset and Environment for Programmatic CAD Construction from Human Design Sequences
Presents the Fusion 360 Gallery, consisting of a simple language with just the sketch and extrude modeling operations and a dataset of 8,625 human design sequences expressed in this language, along with an interactive environment called the Fusion 360 Gym, which exposes the sequential construction of a CAD program as a Markov decision process, making it amenable to machine learning approaches.


Deep Learning 3D Shape Surfaces Using Geometry Images
Qualitatively and quantitatively validates that creating geometry images using authalic parametrization on a spherical domain is suitable for robust learning of 3D shape surfaces, and proposes a way to implicitly learn the topology and structure of 3D shapes using geometry images encoded with suitable features.
Shape Reconstruction by Learning Differentiable Surface Representations
Shows that the inherent differentiability of deep networks can be exploited to leverage differential surface properties during training, preventing patch collapse and strongly reducing patch overlap; this allows quantities such as surface normals and curvatures to be computed reliably.
ABC: A Big CAD Model Dataset for Geometric Deep Learning
Performs a large-scale benchmark for the estimation of surface normals, comparing existing data-driven methods and evaluating their performance against both the ground truth and traditional normal-estimation methods.
DeepSpline: Data-Driven Reconstruction of Parametric Curves and Surfaces
Proposes a deep learning architecture that adapts to perform spline-fitting tasks, providing results complementary to traditional fitting methods.
Deep Parametric Shape Predictions Using Distance Fields
Uses distance fields to transition between shape parameters, such as control points, and input data on a pixel grid, and demonstrates efficacy on 2D and 3D tasks, including font vectorization and surface abstraction.
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
Designs a novel type of neural network that directly consumes point clouds, respects the permutation invariance of points in the input, and provides a unified architecture for applications ranging from object classification and part segmentation to scene semantic parsing.
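The permutation invariance this summary mentions comes from applying a shared function to every point and aggregating with a symmetric operation such as max. A minimal NumPy sketch of that principle follows; the layer sizes and random weights are arbitrary assumptions, not PointNet's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two shared weight matrices, applied identically to every point.
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 32))

def point_features(points):
    """Shared per-point MLP followed by symmetric max-pooling.

    Because max() ignores point order, shuffling the input cloud
    leaves the global feature vector unchanged."""
    h = np.maximum(points @ W1, 0.0)  # shared layer 1 + ReLU
    h = np.maximum(h @ W2, 0.0)       # shared layer 2 + ReLU
    return h.max(axis=0)              # symmetric aggregation over points

cloud = rng.normal(size=(128, 3))
f1 = point_features(cloud)
f2 = point_features(cloud[rng.permutation(128)])
print(np.allclose(f1, f2))  # True: the feature is order-invariant
```

The choice of max over, say, concatenation is what makes the output a function of the *set* of points rather than the particular ordering in which they were stored.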
Learning Representations and Generative Models for 3D Point Clouds
Introduces a deep autoencoder network with state-of-the-art reconstruction quality and generalization ability, with results that outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations.
CubeNet: Equivariance to 3D Rotation and Translation
Introduces a group convolutional neural network with linear equivariance to translations and right-angle rotations in three dimensions, believed to be the first 3D rotation-equivariant CNN for voxel representations.
AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation
Presents a method for learning to generate the surface of 3D shapes as a collection of parametric surface elements which, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape.
Geometric Deep Learning: Going beyond Euclidean data
Surveys deep learning on non-Euclidean data across computer vision, natural-language processing, and audio analysis, where the invariances of these structures are built into the networks used to model them.