READ: Recursive Autoencoders for Document Layout Generation

@article{Patil2020READRA,
  title={READ: Recursive Autoencoders for Document Layout Generation},
  author={Akshay Gadi Patil and Omri Ben-Eliezer and Or Perel and Hadar Averbuch-Elor},
  journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2020},
  pages={2316-2325}
}
Layout is a fundamental component of any graphic design. Creating large varieties of plausible document layouts can be a tedious task, requiring numerous constraints to be satisfied, including local ones relating different semantic elements and global constraints on the general appearance and spacing. In this paper, we present a novel framework, coined READ, for REcursive Autoencoders for Document layout generation, to generate plausible 2D layouts of documents in large quantities and varieties…
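The abstract describes recursively encoding a document's semantic elements into a compact code and decoding that code back into a layout. A minimal sketch of that recursive-autoencoder idea (the names, dimensions, random weights, and pairwise fold/unfold scheme here are all illustrative assumptions, not the paper's actual architecture) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # illustrative code size

# Randomly initialized weights stand in for trained parameters.
W_enc = rng.standard_normal((DIM, 2 * DIM)) * 0.1  # merges two child codes
W_dec = rng.standard_normal((2 * DIM, DIM)) * 0.1  # splits a parent code

def encode(codes):
    """Fold a list of leaf element codes into a single root code."""
    root = codes[0]
    for c in codes[1:]:
        root = np.tanh(W_enc @ np.concatenate([root, c]))
    return root

def decode(root, n_leaves):
    """Unfold a root code back into n_leaves element codes (mirror of encode)."""
    leaves = [root]
    while len(leaves) < n_leaves:
        parent = leaves.pop(0)
        out = np.tanh(W_dec @ parent)
        leaves = [out[:DIM], out[DIM:]] + leaves
    return leaves

# e.g. 4 semantic elements (boxes) on a page, each with a learned code
boxes = [rng.standard_normal(DIM) for _ in range(4)]
root = encode(boxes)
recon = decode(root, n_leaves=4)
```

Training would push `recon` toward the original leaf codes so that sampling new root codes yields novel, plausible layouts; the paper's hierarchy is learned from document structure rather than the fixed left-to-right fold shown here.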
LayoutTransformer: Layout Generation and Completion with Self-attention
This work proposes LayoutTransformer, a novel framework that leverages self-attention to learn contextual relationships between layout elements and generate novel layouts in a given domain, and can easily scale to support an arbitrary number of primitives per layout.
Variational Transformer Networks for Layout Generation
This work exploits the properties of self-attention layers to capture high-level relationships between elements in a layout, and uses these as the building blocks of the well-known Variational Autoencoder (VAE) formulation.
DocSynth: A Layout Guided Approach for Controllable Document Image Synthesis
This work presents a novel approach, called DocSynth, to automatically synthesize document images based on a given layout, which can successfully generate realistic and diverse document images with multiple objects.
Constrained Graphic Layout Generation via Latent Optimization
This work builds on a generative layout model based on a Transformer architecture, and formulates the layout generation as a constrained optimization problem where design constraints are used for element alignment, overlap avoidance, or any other user-specified relationship.
Graph-based Deep Generative Modelling for Document Layout Generation
This work proposes an automated deep generative model using Graph Neural Networks (GNNs) to generate synthetic data with highly variable and plausible document layouts, which can be used to train document interpretation systems, especially in digital mailroom applications.
CanvasVAE: Learning to Generate Vector Graphic Documents
This work learns a generative model of vector graphic documents by defining a multi-modal set of attributes associated with a canvas and a sequence of visual elements such as shapes, images, or texts, and training variational autoencoders to learn the representation of the documents.
Towards Book Cover Design via Layout Graphs
A generative neural network is proposed that can produce book covers based on an easy-to-use layout graph, and a Style Retention Network is used to transfer the learned font style onto the desired text.
The Layout Generation Algorithm of Graphic Design Based on Transformer-CVAE
  • Mengxi Guo, Dangqing Huang, Xiaodong Xie, 2021
This paper applied the Transformer model and a conditional variational autoencoder (CVAE) to the graphic design layout generation task and proposed an end-to-end graphic design layout generation model named LayoutT-CVAE, which significantly increases the controllability and interpretability of the deep model.
Retrieve-Then-Adapt: Example-based Automatic Generation for Proportion-related Infographics
An MCMC-like approach is proposed that leverages recursive neural networks to help adjust the initial draft and improve its visual appearance iteratively, until a satisfactory result is obtained.
Creating User Interface Mock-ups from High-Level Text Descriptions with Deep-Learning Models
  • Forrest Huang, Gang Li, Xin Zhou, J. Canny, Yang Li, 2021
The design process of user interfaces (UIs) often begins with articulating high-level design goals. Translating these high-level design goals into concrete design mock-ups, however, requires…

References

Showing 1–10 of 38 references
GRASS: Generative Recursive Autoencoders for Shape Structures
A novel neural network architecture for encoding and synthesis of 3D shapes, particularly their structures, is introduced, and it is demonstrated that without supervision, the network learns meaningful structural hierarchies adhering to perceptual grouping principles, produces compact codes which enable applications such as shape classification and partial matching, and supports shape synthesis and interpolation with significant variations in topology and geometry.
LayoutVAE: Stochastic Scene Layout Generation From a Label Set
LayoutVAE is a versatile modeling framework that allows for generating full image layouts given a label set, or per-label layouts for an existing image given a new label, and is also capable of detecting unusual layouts, potentially providing a way to evaluate the layout generation problem.
Content-aware generative modeling of graphic design layouts
This paper proposes a deep generative model for graphic design layouts that is able to synthesize layout designs based on the visual and textual semantics of user inputs, and internally learns powerful features that capture the subtle interaction between contents and layouts, which are useful for layout-aware design retrieval.
SCORES: Shape Composition with Recursive Substructure Priors
Results of shape composition from multiple sources over different categories of man-made shapes are shown and compared with state-of-the-art alternatives, demonstrating that the network can significantly expand the range of composable shapes for assembly-based modeling.
LayoutGAN: Generating Graphic Layouts with Wireframe Discriminators
A novel differentiable wireframe rendering layer is proposed that maps the generated layout to a wireframe image, upon which a CNN-based discriminator is used to optimize the layouts in image space.
GRAINS: Generative Recursive Autoencoders for INdoor Scenes
A generative neural network is presented which enables generating plausible 3D indoor scenes in large quantities and varieties, easily and highly efficiently; applications of GRAINS include 3D scene modeling from 2D layouts, scene editing, and semantic scene segmentation via PointNet.
Jointly Measuring Diversity and Quality in Text Generation Models
This paper proposes metrics that evaluate both quality and diversity simultaneously by approximating the distance between the learned generative model and the real data distribution using n-gram-based measures.
High Performance Document Layout Analysis
This paper summarizes document layout analysis research carried out in the laboratory over the last few years, which has produced a number of novel geometric algorithms and statistical methods applicable to a wide variety of languages and layouts.
Learning to Extract Semantic Structure from Documents Using Multimodal Fully Convolutional Neural Networks
An end-to-end, multimodal, fully convolutional network for extracting semantic structures from document images using a unified model that classifies pixels based not only on their visual appearance, as in the traditional page segmentation task, but also on the content of the underlying text.
Learning Layouts for Single-Page Graphic Designs
This paper presents an approach for automatically creating graphic design layouts using a new energy-based model derived from design principles. The model includes several new algorithms for…