Corpus ID: 235422030

A Multi-Implicit Neural Representation for Fonts

@article{Reddy2021AMN,
  title={A Multi-Implicit Neural Representation for Fonts},
  author={Pradyumna Reddy and Zhifei Zhang and Matthew Fisher and Hailin Jin and Zhaowen Wang and Niloy Jyoti Mitra},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.06866}
}
Fonts are ubiquitous across documents and come in a variety of styles. They are either represented in a native vector format or rasterized to produce fixed-resolution images. In the first case, the non-standard representation prevents benefiting from the latest network architectures for neural representations; in the latter case, the rasterized representation, when encoded via networks, results in a loss of fidelity, as font-specific discontinuities like edges and corners are difficult to…
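The corner problem the abstract alludes to has a classical geometric workaround: a sharp corner, which a single smooth network output tends to round off, is exactly the intersection of two half-planes, so a combination of simple implicit fields can represent it without rounding. The NumPy sketch below illustrates only this general idea; the grid size, function names, and the pointwise max are illustrative choices, not the paper's exact construction.

```python
import numpy as np

# A sharp corner is hard for one smooth field to fit, but it is exactly
# the intersection of two half-planes. Combining their signed distance
# fields with a pointwise max keeps the corner crisp.

def half_plane_sdf(xy, normal, offset):
    """Signed distance to the half-plane {p : dot(p, normal) <= offset}.
    Negative inside, positive outside."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    return xy @ n - offset

# Sample a grid over [-1, 1]^2.
xs = np.linspace(-1.0, 1.0, 256)
xy = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)

# Corner at the origin, opening toward +x/+y: inside both half-planes.
f1 = half_plane_sdf(xy, normal=(-1.0, 0.0), offset=0.0)  # inside where x > 0
f2 = half_plane_sdf(xy, normal=(0.0, -1.0), offset=0.0)  # inside where y > 0
corner = np.maximum(f1, f2)  # intersection of the two regions

inside = corner < 0  # occupancy with an exact right-angle corner at (0, 0)
print(int(inside.sum()), "grid samples fall inside the corner region")
```

The title's "multi-implicit" points at the same observation: several simple fields, suitably combined, can capture features that a single smooth field rounds off.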

Citations

Towards Layer-wise Image Vectorization

This work proposes Layer-wise Image Vectorization (LIVE), which converts raster images to SVGs while preserving image topology, and demonstrates that LIVE produces more plausible vectorized forms than prior work and generalizes to new images.

References

Showing 1-10 of 30 references

Im2Vec: Synthesizing Vector Graphics without Vector Supervision

A new neural network is proposed that can generate complex vector graphics with varying topologies, and only requires indirect supervision from readily-available raster training images (i.e., with no vector counterparts).

A Learned Representation for Scalable Vector Graphics

This work models the drawing process of fonts by building sequential generative models of vector graphics, yielding a scale-invariant representation of imagery whose latent representation can be systematically manipulated and exploited to perform style propagation.

DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation

This work proposes a novel hierarchical generative network, called DeepSVG, for complex SVG icons generation and interpolation, and demonstrates that it learns to accurately reconstruct diverse vector graphics, and can serve as a powerful animation tool by performing interpolations and other latent space operations.

Multi-content GAN for Few-Shot Font Style Transfer

This work focuses on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface, and proposes an end-to-end stacked conditional GAN model considering content along channels and style along network layers.

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
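A key ingredient behind such coordinate-based networks, reused well beyond view synthesis, is NeRF's frequency positional encoding, which lifts raw coordinates into sinusoids so a compact MLP can fit high-frequency detail. A minimal NumPy sketch, where the function name and default frequency count are illustrative:

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """NeRF-style encoding gamma(p): map each coordinate to
    (sin(2^k * pi * p), cos(2^k * pi * p)) for k = 0..num_freqs-1,
    so a compact MLP can represent high-frequency variation."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi        # (F,)
    angles = x[..., None] * freqs                        # (..., D, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)                # (..., D * 2F)

pts = np.random.rand(4, 3)             # four 3D sample points
print(positional_encoding(pts).shape)  # -> (4, 36)
```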

Learning Implicit Fields for Generative Shape Modeling

  • Zhiqin Chen, Hao Zhang
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
By replacing conventional decoders by the implicit decoder for representation learning and shape generation, this work demonstrates superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.
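At its core, the implicit decoder referenced here is an MLP that takes a shape latent code together with a query point and predicts an inside/outside probability. The PyTorch sketch below shows that interface; the layer widths, dimensions, and names are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    """Sketch of an implicit-field decoder in the spirit of IM-Net:
    given a shape latent code z and query points p, predict the
    probability that each point lies inside the shape."""
    def __init__(self, latent_dim=128, point_dim=3, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + point_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, points):
        # z: (B, latent_dim); points: (B, N, point_dim)
        z = z.unsqueeze(1).expand(-1, points.shape[1], -1)
        occupancy_logit = self.net(torch.cat([z, points], dim=-1))
        return torch.sigmoid(occupancy_logit).squeeze(-1)  # (B, N) in [0, 1]

decoder = ImplicitDecoder()
z = torch.randn(2, 128)
pts = torch.rand(2, 1024, 3)
print(decoder(z, pts).shape)  # torch.Size([2, 1024])
```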

PolyFit: perception-aligned vectorization of raster clip-art via intermediate polygonal fitting

This work presents PolyFit, a new clip-art vectorization method that outperforms state-of-the-art approaches on a wide range of data, where its results are preferred three times as often as those of the closest competitor across multiple types of inputs with various resolutions.

Differentiable vector graphics rasterization for editing and learning

We introduce a differentiable rasterizer that bridges the vector graphics and raster image domains, enabling powerful raster-based loss functions, optimization procedures, and machine learning techniques to edit and generate vector content.
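The rasterizer in this work handles general vector primitives and anti-aliasing; as a toy stand-in that shows why differentiability matters, the sketch below approximates pixel coverage with a sigmoid over a circle's signed distance, so a raster-domain loss can drive shape parameters by gradient descent. All names and constants are illustrative; this is not the paper's algorithm.

```python
import torch

def soft_rasterize_circle(center, radius, size=64, sharpness=40.0):
    """Toy differentiable rasterizer: pixel coverage as a sigmoid of the
    signed distance to a circle, so gradients flow from a raster loss
    back to the shape parameters (center, radius)."""
    xs = torch.linspace(0.0, 1.0, size)
    yy, xx = torch.meshgrid(xs, xs, indexing="ij")
    dist = torch.sqrt((xx - center[0]) ** 2
                      + (yy - center[1]) ** 2 + 1e-8) - radius
    return torch.sigmoid(-sharpness * dist)  # ~1 inside, ~0 outside

# Fit circle parameters to a raster target by gradient descent.
target = soft_rasterize_circle(torch.tensor([0.6, 0.4]), torch.tensor(0.25))
center = torch.tensor([0.4, 0.5], requires_grad=True)
radius = torch.tensor(0.1, requires_grad=True)
opt = torch.optim.Adam([center, radius], lr=0.02)
for _ in range(200):
    opt.zero_grad()
    loss = ((soft_rasterize_circle(center, radius) - target) ** 2).mean()
    loss.backward()
    opt.step()
print(center.detach(), radius.detach())  # moves toward (0.6, 0.4), 0.25
```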

DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation

This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
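DeepSDF is trained as an auto-decoder: every training shape owns a learnable latent code that is optimized jointly with one shared SDF network, under an L1 loss on (clamped) signed distances. The PyTorch sketch below is schematic; the sizes and the dummy batch are purely illustrative.

```python
import torch
import torch.nn as nn

# Sketch of DeepSDF-style auto-decoder training: one learnable latent code
# per training shape, optimized jointly with a shared SDF network.
num_shapes, latent_dim = 100, 256
codes = nn.Embedding(num_shapes, latent_dim)   # one latent per shape
sdf_net = nn.Sequential(
    nn.Linear(latent_dim + 3, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 1), nn.Tanh(),              # bounded SDF prediction
)
opt = torch.optim.Adam(
    list(codes.parameters()) + list(sdf_net.parameters()), lr=1e-4)

# One training step on a dummy batch of (shape_id, point, sdf) samples.
shape_ids = torch.randint(0, num_shapes, (64,))
points = torch.randn(64, 3)
gt_sdf = torch.rand(64, 1) * 2 - 1
pred = sdf_net(torch.cat([codes(shape_ids), points], dim=-1))
loss = (pred - gt_sdf).abs().mean()            # L1, as in DeepSDF
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```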

Attribute2Font: Creating Fonts You Want From Attributes

A novel model, Attribute2Font, is proposed to automatically create fonts by synthesizing visually pleasing glyph images according to user-specified attributes and their values; it is the first model in the literature capable of generating glyph images in new font styles, rather than retrieving existing fonts, from given values of font attributes.