Artistic glyph image synthesis via one-stage few-shot learning

@article{Gao2019ArtisticGI,
  title={Artistic glyph image synthesis via one-stage few-shot learning},
  author={Yue Gao and Yuan Guo and Zhouhui Lian and Yingmin Tang and Jianguo Xiao},
  journal={ACM Transactions on Graphics (TOG)},
  year={2019},
  volume={38},
  pages={1--12}
}
Automatic generation of artistic glyph images is a challenging task that attracts significant research interest. Previous methods are either specifically designed for shape synthesis or focused on texture transfer. In this paper, we propose a novel model, AGIS-Net, to transfer both shape and texture styles in one stage with only a few stylized samples. To achieve this goal, we first disentangle the representations for content and style by using two encoders, ensuring the multi-content and multi-style…
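
To make the two-encoder disentanglement concrete, the following PyTorch-style sketch shows one way a content encoder and a style encoder can feed a shared decoder. The module names, layer sizes, and channel counts are illustrative assumptions, not the actual AGIS-Net architecture.

import torch
import torch.nn as nn

class TwoEncoderGenerator(nn.Module):
    """Minimal sketch of a content/style-disentangling generator.
    Layer sizes and names are illustrative, not AGIS-Net's real code."""
    def __init__(self, feat_dim=256):
        super().__init__()
        # Content encoder: extracts glyph shape from a grayscale content image.
        self.content_enc = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Style encoder: extracts shape/texture style from a stylized sample.
        self.style_enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: fuses both representations into a stylized glyph image.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2 * feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, content_img, style_img):
        c = self.content_enc(content_img)  # content representation
        s = self.style_enc(style_img)      # style representation
        return self.dec(torch.cat([c, s], dim=1))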

GAS-Net: Generative Artistic Style Neural Networks for Fonts

This project aims to develop a few-shot cross-lingual font generator based on AGIS-Net and improve the performance metrics mentioned in Section 3.

Few-shot Font Generation with Weakly Supervised Localized Representations

This paper proposes a novel font generation method that learns localized styles, namely component-wise style representations, instead of universal styles, and shows remarkably better few-shot font generation results than other state-of-the-art methods.

Learning Implicit Glyph Shape Representation

This structured implicit representation is shown to be better suited for glyph modeling: it enables rendering glyph images at arbitrarily high resolutions and performs well on the challenging one-shot font style transfer task, comparing favorably to other alternatives both qualitatively and quantitatively.

Stylized Image Generation based on Music-image Synesthesia Emotional Style Transfer using CNN Network

The emotional style of multimedia artworks is abstract content information. This study aims to explore an emotional style transfer method and find a possible way of matching music with appropriate…

Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts

This work proposes a novel FFG method, named Multiple Localized Experts Few-shot Font Generation Network (MX-Font), which extracts multiple style features that are not explicitly conditioned on component labels but are instead learned automatically by multiple experts, each representing a different local concept.

Few-shot Font Generation with Localized Style Representations and Factorization

This paper proposes a novel font generation method that learns localized styles, namely component-wise style representations, instead of universal styles, and shows remarkably better few-shot font generation results than other state-of-the-art methods.

Few-shot Compositional Font Generation with Dual Memory

This paper proposes a novel font generation framework, named Dual Memory-augmented Font Generation Network (DM-Font), which enables us to generate a high-quality font library with only a few samples, and employs memory components and global-context awareness in the generator to take advantage of the compositionality.

FontTransformer: Few-shot High-resolution Chinese Glyph Image Synthesis via Stacked Transformers

FontTransformer, a novel few-shot learning model for high-resolution Chinese glyph image synthesis, is proposed; it uses stacked Transformers to avoid the accumulation of prediction errors and a serial Transformer to enhance the quality of synthesized strokes.

DSE-Net: Artistic Font Image Synthesis via Disentangled Style Encoding

This paper proposes a disentangled style encoding network, termed DSE-Net, to synthesize artistic fonts, together with a cross-layer fusion mechanism that improves the structure and texture of artistic fonts according to their different representations in a CNN.

Few-Shot Font Generation by Learning Fine-Grained Local Styles

A new font generation approach is proposed that learns fine-grained local styles from references, along with the spatial correspondence between content and reference glyphs, and outperforms state-of-the-art methods in few-shot font generation (FFG).
...

References

SHOWING 1-10 OF 46 REFERENCES

Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis

This work proposes a simple yet effective regularization term to address the mode collapse issue in cGANs: it explicitly maximizes the ratio of the distance between generated images to the distance between their corresponding latent codes, thus encouraging the generator to explore more minor modes during training.
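
A minimal sketch of such a mode-seeking term, assuming L1 distances and a generator G(c, z) that maps a condition c and a latent code z to an image (the function and variable names here are illustrative, not the paper's reference implementation):

import torch

def mode_seeking_loss(G, c, z1, z2, eps=1e-5):
    """Encourage G to produce different images for different latent codes.
    Minimizing the returned value maximizes the ratio of image distance
    to latent distance, which counteracts mode collapse."""
    img1, img2 = G(c, z1), G(c, z2)
    d_img = torch.mean(torch.abs(img1 - img2))  # distance in image space
    d_z = torch.mean(torch.abs(z1 - z2))        # distance in latent space
    return d_z / (d_img + eps)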

TET-GAN: Text Effects Transfer via Stylization and Destylization

This paper proposes a novel Texture Effects Transfer GAN (TET-GAN), which consists of a stylization subnetwork and a destylization subnetwork, and demonstrates the superiority of the proposed method over state-of-the-art methods in generating high-quality stylized text.

The Contextual Loss for Image Transformation with Non-Aligned Data

This work presents an alternative loss function that does not require alignment, thus providing an effective and simple solution for a new space of problems.
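
As a rough illustration of an alignment-free loss in this spirit, the sketch below matches each target feature to its most similar source feature across all spatial positions, so no pixel-wise correspondence is needed. The bandwidth h, the epsilon, and the normalization details are assumptions for illustration, not the paper's exact formulation.

import torch

def contextual_style_loss(feat_x, feat_y, h=0.5, eps=1e-5):
    """Sketch of a contextual-style loss between feature maps of shape
    (N, C, H, W): each feature in feat_y is matched to its most similar
    feature in feat_x, wherever that feature happens to be located."""
    N, C, _, _ = feat_x.shape
    x = feat_x.reshape(N, C, -1)            # (N, C, Px)
    y = feat_y.reshape(N, C, -1)            # (N, C, Py)
    # Center on the target's mean, then normalize for cosine distance.
    mu = y.mean(dim=2, keepdim=True)
    x = x - mu
    y = y - mu
    x = x / (x.norm(dim=1, keepdim=True) + eps)
    y = y / (y.norm(dim=1, keepdim=True) + eps)
    d = 1 - torch.bmm(x.transpose(1, 2), y)          # cosine distances (N, Px, Py)
    # Normalize per target feature and turn distances into affinities.
    d_norm = d / (d.min(dim=1, keepdim=True).values + eps)
    w = torch.exp((1 - d_norm) / h)
    cx = w / w.sum(dim=1, keepdim=True)              # soft matching weights
    # Best-matching source feature for each target feature.
    return -torch.log(cx.max(dim=1).values.mean(dim=1) + eps).mean()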

Separating Style and Content for Generalized Style Transfer

This work attempts to separate the representations for styles and contents, and proposes a generalized style transfer network consisting of a style encoder, a content encoder, a mixer, and a decoder, which allows simultaneous style transfer among multiple styles and can be deemed a special 'multi-task' learning scenario.

Multi-content GAN for Few-Shot Font Style Transfer

This work focuses on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface, and proposes an end-to-end stacked conditional GAN model considering content along channels and style along network layers.

Image-to-Image Translation with Conditional Adversarial Networks

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
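
Written out, the conditional GAN objective pairs an adversarial term with an L1 reconstruction term weighted by a hyperparameter \lambda, as in the pix2pix formulation:

\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))]

G^{*} = \arg\min_{G} \max_{D} \; \mathcal{L}_{cGAN}(G, D) + \lambda \, \mathbb{E}_{x,y,z}[\lVert y - G(x, z) \rVert_{1}]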

Toward Multimodal Image-to-Image Translation

This work aims to model a distribution of possible outputs in a conditional generative modeling setting, which helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse.

Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks

This work presents an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples, and introduces a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
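
The cycle-consistency loss referred to here is typically an L1 penalty applied in both translation directions:

\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x}[\lVert F(G(x)) - x \rVert_{1}] + \mathbb{E}_{y}[\lVert G(F(y)) - y \rVert_{1}]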

A guide to convolution arithmetic for deep learning

A guide to help deep learning practitioners understand and manipulate convolutional neural network architectures, clarifying the relationship between various properties of convolutional, pooling, and transposed-convolutional layers.
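
The core size relationships the guide covers fit in a few lines. This is a minimal sketch using the standard formulas; the function names are mine, not taken from the guide.

def conv_out(i, k, s=1, p=0):
    """Output size of a convolution over input size i with kernel size k,
    stride s, and padding p (floor division, per the standard formula)."""
    return (i + 2 * p - k) // s + 1

def transposed_conv_out(i, k, s=1, p=0):
    """Output size of the transposed convolution that inverts the spatial
    shape change of conv_out for the same k, s, and p."""
    return (i - 1) * s - 2 * p + k

assert conv_out(64, 4, s=2, p=1) == 32             # downsamples by 2
assert transposed_conv_out(32, 4, s=2, p=1) == 64  # upsamples back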