Cloud2Curve: Generation and Vectorization of Parametric Sketches

@inproceedings{Das2021Cloud2CurveGA,
  title     = {Cloud2Curve: Generation and Vectorization of Parametric Sketches},
  author    = {Ayan Das and Yongxin Yang and Timothy M. Hospedales and Tao Xiang and Yi-Zhe Song},
  booktitle = {2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021},
  pages     = {7084-7093}
}
  • Published 29 March 2021
Analysis of human sketches in deep learning has advanced immensely through the use of waypoint sequences rather than raster-graphic representations. We further aim to model sketches as sequences of low-dimensional parametric curves. To this end, we propose an inverse-graphics framework capable of approximating a raster- or waypoint-based stroke, encoded as a point cloud, with a variable-degree Bézier curve. Building on this module, we present Cloud2Curve, a generative model for scalable high…
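The fitting step the abstract describes — approximating a point-cloud stroke with a Bézier curve — can be illustrated with a classical least-squares fit under chord-length parameterisation. This is a minimal sketch for intuition only, not the paper's learned inverse-graphics module (which additionally infers a variable degree); the function names and the fixed-degree assumption are illustrative.

```python
import numpy as np
from math import comb

def bezier_fit(points, degree=3):
    """Least-squares fit of a fixed-degree Bézier curve to an ordered 2-D stroke.

    Classical approach: assign each point a parameter t_i by chord length,
    build the Bernstein basis matrix, and solve for control points.
    """
    points = np.asarray(points, dtype=float)
    # Chord-length parameterisation: t_i proportional to cumulative arc length.
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    # Bernstein basis: B[i, j] = C(n, j) * t_i^j * (1 - t_i)^(n - j).
    n = degree
    B = np.stack([comb(n, j) * t**j * (1 - t)**(n - j) for j in range(n + 1)], axis=1)
    # Solve B @ ctrl ≈ points in the least-squares sense.
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl

def bezier_eval(ctrl, t):
    """Evaluate the Bézier curve with control points `ctrl` at parameters `t`."""
    n = len(ctrl) - 1
    t = np.asarray(t, dtype=float)
    B = np.stack([comb(n, j) * t**j * (1 - t)**(n - j) for j in range(n + 1)], axis=1)
    return B @ ctrl
```

For a straight stroke fitted with `degree=1`, the recovered control points coincide with the stroke's endpoints; higher degrees trade compactness for fidelity, which is the trade-off a variable-degree model navigates automatically.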

Citations of this paper

Keypoint-Driven Line Drawing Vectorization via PolyVector Flow
Fig. 1. Given a greyscale bitmap drawing, we use deep learning–based machinery to extract keypoints: junctions, curve endpoints, and sharp corners. We then compute a frame field aligned to the…
SketchGen: Generating Constrained CAD Sketches
SketchGen is proposed as a generative model based on a transformer architecture that addresses the heterogeneity problem by carefully designing a sequential language for primitives and constraints, one that distinguishes between different primitive or constraint types and their parameters while encouraging the model to reuse information across related parameters, encoding shared structure.
SVG-Net: An SVG-based Trajectory Prediction Model
SVG has the potential to provide the convenience and generality of raster-based solutions if coupled with a powerful tool such as CNNs; to this end, SVG-Net, a Transformer-based neural network that effectively captures scene information from SVG inputs, is introduced.

References

Showing 1–10 of 53 references
Sketch-a-Net: A Deep Neural Network that Beats Humans
It is shown that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless of whether they are trained using photos or sketches.
Sketchformer: Transformer-Based Representation for Sketched Structure
It is shown that sketch reconstruction and interpolation are improved significantly by the Sketchformer embedding for complex sketches with longer stroke sequences, compared against baseline representations driven by LSTM sequence-to-sequence architectures: SketchRNN and derivatives.
Learning to Sketch with Shortcut Cycle Consistency
A novel approach for translating an object photo to a sketch, mimicking the human sketching process, is presented; the synthetic sketches can be used to train a better fine-grained sketch-based image retrieval model, effectively alleviating the problem of sketch data scarcity.
Deep Learning for Free-Hand Sketch: A Survey
A comprehensive survey of deep learning techniques oriented at free-hand sketch data and the applications they enable.
A Learned Representation for Scalable Vector Graphics
This work attempts to model the drawing process of fonts by building sequential generative models of vector graphics, which has the benefit of providing a scale-invariant representation for imagery whose latent representation may be systematically manipulated and exploited to perform style propagation.
Sketch-a-Net that Beats Humans
A multi-scale, multi-channel deep neural network framework that yields sketch recognition performance surpassing that of humans; it not only delivers the best performance on the largest human sketch dataset to date, but is also small in size, making efficient training possible using just CPUs.
Sketch Less for More: On-the-Fly Fine-Grained Sketch-Based Image Retrieval
A reinforcement-learning-based cross-modal retrieval framework that directly optimizes the rank of the ground-truth photo over a complete sketch drawing episode, with a novel reward scheme that circumvents problems related to irrelevant sketch strokes and thus provides a more consistent rank list during retrieval.
Doodle to Search: Practical Zero-Shot Sketch-Based Image Retrieval
This paper proposes a novel ZS-SBIR framework to jointly model sketches and photos into a common embedding space; a novel strategy to mine the mutual information among domains is specifically engineered to alleviate the domain gap.
Generalising Fine-Grained Sketch-Based Image Retrieval
  • Kaiyue Pang, Ke Li, +4 authors Yi-Zhe Song · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
A novel unsupervised learning approach to model a universal manifold of prototypical visual sketch traits that can be used to parameterise the learning of a sketch/photo representation, demonstrating the efficacy of this approach in enabling cross-category generalisation of FG-SBIR.
StrokeNet: A Neural Painting Environment
StrokeNet is presented, a novel model where the agent is trained upon a well-crafted neural approximation of the painting environment, and was able to learn to write characters such as MNIST digits faster than reinforcement learning approaches, in an unsupervised manner.