Aesthetic Text Logo Synthesis via Content-aware Layout Inferring

@inproceedings{Wang2022AestheticTL,
  title={Aesthetic Text Logo Synthesis via Content-aware Layout Inferring},
  author={Yizhi Wang and Guo Pu and Wenhan Luo and Yexin Wang and Pengfei Xiong and Hongwen Kang and Zhouhui Lian},
  booktitle={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022},
  pages={2426--2435}
}
Published 6 April 2022
Text logo design heavily relies on the creativity and expertise of professional designers, in which arranging element layouts is one of the most important procedures. However, little attention has been paid to this task, which requires taking many factors (e.g., fonts, linguistics, and topics) into consideration. In this paper, we propose a content-aware layout generation network which takes glyph images and their corresponding text as input and synthesizes aesthetic layouts for them automatically…
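The network itself is not specified in this excerpt, but its core idea — letting each glyph's placement depend on all the other glyphs in the logo — can be illustrated with a toy self-attention step in plain Python. Everything here (the lack of learned projections, the tiny feature vectors) is illustrative, not the paper's actual architecture:

```python
import math

def attend(features):
    """Toy single-head self-attention: each glyph's feature vector is
    replaced by a weighted mix of all glyphs' features, so a downstream
    placement head can condition each glyph's position on the others.
    The features serve directly as queries/keys/values (no learned
    projections) purely for illustration."""
    d = len(features[0])
    out = []
    for q in features:
        # similarity of this glyph to every glyph
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in features]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]  # softmax; sums to 1
        out.append([sum(w * v[j] for w, v in zip(weights, features))
                    for j in range(d)])
    return out

# three "glyphs" with 2-D feature vectors
mixed = attend([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Because the softmax weights sum to one, each output is a convex combination of the inputs — the mechanism by which one glyph's representation absorbs context from its neighbors.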

StrokeGAN+: Few-Shot Semi-Supervised Chinese Font Generation with Stroke Encoding

Experimental results show that the mode collapse issue can be effectively alleviated by the introduced one-bit stroke encoding and few-shot semi-supervised training scheme, and that the proposed model outperforms the state-of-the-art models in fourteen font generation tasks in terms of four important evaluation metrics and the quality of generated characters.
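As described, the "one-bit stroke encoding" is a binary indicator of which basic strokes occur in a character. A minimal sketch follows; the stroke inventory and the character-to-stroke table below are made up for illustration, while the real encoding uses the paper's fixed stroke set:

```python
# Hypothetical basic-stroke inventory; StrokeGAN+ defines its own fixed set.
STROKES = ["horizontal", "vertical", "left-falling", "right-falling", "dot"]

# Hypothetical decomposition table for two characters (illustrative only).
CHAR_STROKES = {
    "十": {"horizontal", "vertical"},
    "人": {"left-falling", "right-falling"},
}

def one_bit_stroke_encoding(char):
    """Return a 0/1 vector marking which basic strokes appear in `char`."""
    present = CHAR_STROKES.get(char, set())
    return [1 if s in present else 0 for s in STROKES]
```

Feeding this vector to the generator alongside the glyph image gives the model an explicit structural signal, which is what the abstract credits with alleviating mode collapse.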

SGCE-Font: Skeleton Guided Channel Expansion for Chinese Font Generation

Numerical results show that the mode collapse issue suffered by the known CycleGAN can be effectively alleviated by equipping it with the proposed SGCE module, and the CycleGAN equipped with SGCE outperforms the state-of-the-art models in terms of four important evaluation metrics and visualization quality.
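"Channel expansion" here plausibly means stacking the skeleton map alongside the glyph image as an extra input channel before it enters the generator. A toy version with nested lists (the function name and shapes are illustrative, not the paper's API):

```python
def expand_channels(image_channels, skeleton):
    """Append a skeleton map as one more channel of the input.
    `image_channels` is a list of HxW channels; `skeleton` is a single
    HxW map of the same size. Returns a (C+1)-channel input."""
    h, w = len(skeleton), len(skeleton[0])
    for ch in image_channels:
        assert len(ch) == h and all(len(row) == w for row in ch), "size mismatch"
    return image_channels + [skeleton]

img = [[[0, 1], [1, 0]]]   # 1-channel, 2x2 glyph image
skel = [[0, 1], [0, 0]]    # 2x2 skeleton map
expanded = expand_channels(img, skel)
```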

LayoutDETR: Detection Transformer Is a Good Multimodal Layout Designer

This study proposes LayoutDETR, a layout model that inherits the high quality and realism of generative modeling while reformulating content-aware requirements as a detection problem: it learns to detect, in a background image, the reasonable locations, scales, and spatial relations for multimodal elements in a layout.
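DETR-style training matches predicted boxes to ground-truth elements one-to-one by minimizing a pairwise cost. For tiny sets that optimal bipartite matching can be brute-forced over permutations, as sketched below; real implementations use the Hungarian algorithm, and the plain L1 box distance here is a simplification of LayoutDETR's actual losses:

```python
from itertools import permutations

def l1(a, b):
    """L1 distance between two (x, y, w, h) boxes."""
    return sum(abs(x - y) for x, y in zip(a, b))

def match(preds, gts):
    """Return the one-to-one assignment (tuple of ground-truth indices,
    one per prediction) with minimal total L1 cost. O(n!) -- for
    illustration only; use the Hungarian algorithm in practice."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(gts))):
        cost = sum(l1(preds[i], gts[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

preds = [(0.1, 0.1, 0.2, 0.2), (0.8, 0.8, 0.1, 0.1)]
gts   = [(0.8, 0.8, 0.1, 0.1), (0.1, 0.1, 0.2, 0.2)]
assignment, cost = match(preds, gts)
```

The set-based loss is what lets the model predict an unordered collection of layout elements without imposing an arbitrary ordering.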

ABLE: Aesthetic Box Lunch Editing

This paper proposes an exploratory research that contains a pre-trained ordering recovery model to obtain correct placement sequences from box lunch images, and a generative adversarial network to…

References


Content-aware generative modeling of graphic design layouts

This paper proposes a deep generative model for graphic design layouts that is able to synthesize layout designs based on the visual and textual semantics of user inputs, and internally learns powerful features that capture the subtle interaction between contents and layouts, which are useful for layout-aware design retrieval.

Artistic glyph image synthesis via one-stage few-shot learning

This paper proposes a novel model, AGIS-Net, to transfer both shape and texture styles in one-stage with only a few stylized samples, and proves the superiority of the model in generating high-quality stylized glyph images against other state-of-the-art methods.

LayoutTransformer: Layout Generation and Completion with Self-attention

This work proposes LayoutTransformer, a novel framework that leverages self-attention to learn contextual relationships between layout elements and generate novel layouts in a given domain, and can easily scale to support an arbitrary number of primitives per layout.
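LayoutTransformer's key move is serializing a layout into a flat token sequence so a standard autoregressive transformer can model it. The sketch below follows that general recipe — a few discrete tokens per element, with coordinates quantized to 8 bits — though the exact vocabulary layout is illustrative:

```python
BINS = 256  # quantize normalized coordinates to 8-bit tokens

def quantize(v):
    """Map a coordinate in [0, 1] to an integer bin index."""
    return min(BINS - 1, int(v * BINS))

def layout_to_tokens(elements):
    """Flatten [(category, x, y, w, h), ...] with coords in [0, 1]
    into one token sequence: 5 integer tokens per element."""
    seq = []
    for cat, x, y, w, h in elements:
        seq += [cat, quantize(x), quantize(y), quantize(w), quantize(h)]
    return seq

def tokens_to_layout(seq):
    """Inverse mapping back to (category, x, y, w, h) tuples,
    with coordinates recovered at bin-center precision."""
    out = []
    for i in range(0, len(seq), 5):
        cat, xq, yq, wq, hq = seq[i:i + 5]
        out.append((cat, (xq + 0.5) / BINS, (yq + 0.5) / BINS,
                    (wq + 0.5) / BINS, (hq + 0.5) / BINS))
    return out
```

Once layouts are token sequences, generation and completion both reduce to next-token prediction, which is how a single model handles both tasks.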

Variational Transformer Networks for Layout Generation

This work exploits the properties of self-attention layers to capture high level relationships between elements in a layout, and uses these as the building blocks of the well-known Variational Autoencoder (VAE) formulation.
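The VAE side of this formulation rests on the reparameterization trick: sample z = mu + sigma * eps with eps ~ N(0, I), so gradients can flow through mu and sigma. A minimal per-dimension sketch in pure Python (the self-attention encoder that would produce mu and the log-variance is omitted):

```python
import math
import random

def reparameterize(mu, logvar, rng=random):
    """Draw z ~ N(mu, exp(logvar)) elementwise via z = mu + sigma * eps.
    In a real framework, mu and logvar stay differentiable while the
    randomness is isolated in eps."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]
```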

Rethinking Text Segmentation: A Novel Dataset and A Text-Specific Refinement Approach

This work proposes TextSeg, a large-scale fine-annotated text dataset with six types of annotations, and introduces Text Refinement Network (TexRNet), a novel text segmentation approach that adapts to the unique properties of text, e.g. non-convex boundary, diverse texture, etc., which often impose burdens on traditional segmentation models.

Attribute-Conditioned Layout GAN for Automatic Graphic Design

Attribute-conditioned Layout GAN is introduced to incorporate the attributes of design elements for graphic layout generation by forcing both the generator and the discriminator to meet attribute conditions.
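Attribute conditioning in a GAN typically amounts to concatenating an attribute vector onto what both networks see: the generator's noise input and the discriminator's layout input. Schematically (pure-Python vectors only; the real model uses learned networks for both sides):

```python
import random

def generator_input(noise_dim, attrs, rng=random):
    """Concatenate sampled noise with the element attributes so the
    generated layout is conditioned on them."""
    noise = [rng.gauss(0.0, 1.0) for _ in range(noise_dim)]
    return noise + list(attrs)

def discriminator_input(layout_vec, attrs):
    """The discriminator also sees the attributes, so it can reject
    layouts that look realistic but violate the requested attributes."""
    return list(layout_vec) + list(attrs)
```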

Neural Design Network: Graphic Layout Generation with Constraints

A method for design layout generation is proposed that can satisfy user-specified constraints, and the generated layouts are demonstrated to be visually similar to real design layouts.

Attribute2Font: Creating Fonts You Want From Attributes

A novel model, Attribute2Font, is proposed to automatically create fonts by synthesizing visually pleasing glyph images according to user-specified attributes and their corresponding values; it is the first model in the literature capable of generating glyph images in new font styles, rather than retrieving existing fonts, from given values of specified font attributes.

LayoutVAE: Stochastic Scene Layout Generation From a Label Set

LayoutVAE is a versatile modeling framework that allows for generating full image layouts given a label set, or per label layouts for an existing image given a new label, and is also capable of detecting unusual layouts, potentially providing a way to evaluate layout generation problem.
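LayoutVAE's framework factorizes layout generation into two stages — first how many instances of each label, then a box for each instance — i.e. p(layout | labels) = p(counts | labels) · p(boxes | counts, labels). A toy sampler following that factorization (the count and box distributions below are uniform placeholders, not the paper's learned CountVAE/BBoxVAE):

```python
import random

def sample_layout(label_set, rng):
    """Two-stage sampling: a count per label, then a normalized
    (x, y, w, h) box per instance, all kept inside the unit canvas."""
    layout = []
    for label in sorted(label_set):
        count = rng.randint(1, 3)           # stand-in for CountVAE
        for _ in range(count):              # stand-in for BBoxVAE
            x, y = rng.random() * 0.8, rng.random() * 0.8
            w, h = rng.uniform(0.05, 0.2), rng.uniform(0.05, 0.2)
            layout.append((label, (x, y, w, h)))
    return layout
```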