ULDGNN: A Fragmented UI Layer Detector Based on Graph Neural Networks

@article{Li2022ULDGNNAF,
  title={ULDGNN: A Fragmented UI Layer Detector Based on Graph Neural Networks},
  author={Jiazhi Li and Tingting Zhou and Yun-nong Chen and Ya-Hsuan Chang and Yankun Zhen and Lingyun Sun and Liuqing Chen},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.06658}
}
While some work attempts to generate front-end code intelligently from UI screenshots, it may be more convenient to utilize UI design drafts created in Sketch, a popular UI design tool, because multimodal UI information such as layer type, position, size, and visual images can be accessed directly. However, if fragmented layers are involved in code generation without first being merged into a whole, they degrade the quality of the generated code. In this paper, we propose a pipeline to merge…
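
The merging pipeline itself is only sketched in the truncated abstract above. As a rough illustration of the kind of input it operates on, the following Python sketch (not the authors' implementation; the layer attribute names and the nearest-neighbour edge rule are illustrative assumptions) turns a handful of Sketch-style layers into node features and edges that a graph neural network could consume.

from dataclasses import dataclass
from math import hypot

@dataclass
class Layer:
    name: str          # layer name from the design draft
    layer_type: str    # e.g. "shape", "text", "image"
    x: float           # bounding-box position and size
    y: float
    width: float
    height: float

def layer_features(layer: Layer) -> list[float]:
    """Encode position and size as a node feature vector (visual features omitted)."""
    return [layer.x, layer.y, layer.width, layer.height]

def build_edges(layers: list[Layer], k: int = 3) -> list[tuple[int, int]]:
    """Connect each layer to its k spatially nearest layers (an assumed heuristic)."""
    centers = [(l.x + l.width / 2, l.y + l.height / 2) for l in layers]
    edges = []
    for i, (cx, cy) in enumerate(centers):
        dists = sorted((hypot(cx - ox, cy - oy), j)
                       for j, (ox, oy) in enumerate(centers) if j != i)
        edges.extend((i, j) for _, j in dists[:k])
    return edges

layers = [
    Layer("icon/part-1", "shape", 10, 10, 12, 12),
    Layer("icon/part-2", "shape", 22, 10, 12, 12),
    Layer("title", "text", 10, 40, 80, 16),
]
print([layer_features(l) for l in layers])
print(build_edges(layers, k=1))

In practice the node features would also include the layer type and a visual embedding, but the spatial fields alone are enough to show the graph-construction idea.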

References

Showing 1-10 of 38 references.

Sketch2Code: Transformation of Sketches to UI in Real-time Using Deep Neural Network

A novel approach in which a deep neural network, trained on a custom database of hand-drawn UI sketches, detects UI elements in an input sketch and creates a UI prototype for multiple platforms from a single training run.

Owl Eyes: Spotting UI Display Issues via Visual Understanding

This work proposes OwlEye, a novel deep-learning approach that models the visual information of a GUI screenshot; it can detect GUIs with display issues and locate the region of each issue in the given GUI, guiding developers in fixing the bug.

From UI Design Image to GUI Skeleton: A Neural Machine Translator to Bootstrap Mobile GUI Implementation

This paper presents a neural machine translator that combines recent advances in computer vision and machine translation for translating a UI design image into a GUI skeleton, without requiring manual rule development.

GUIGAN: Learning to Generate GUI Designs Using Generative Adversarial Networks

This work develops a model to automatically generate GUI designs based on SeqGAN, modelling GUI component style compatibility and GUI structure, and demonstrates that the model significantly outperforms the best of the baseline methods.

Improving random GUI testing with image-based widget detection

This work proposes a technique for improving GUI testing by automatically identifying GUI widgets in screenshots using machine learning, and it provides guidance to GUI testing tools in environments that are not currently supported by deriving GUI widget information from screenshots alone.

Machine Learning-Based Prototyping of Graphical User Interfaces for Mobile Apps

This paper presents an approach that automates GUI prototyping via three tasks: detection, classification, and assembly, and implements this approach for Android in a system called ReDraw.

Graph Attention Networks

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
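
As a concrete illustration of the attention rule summarized above, below is a simplified single-head GAT layer in NumPy: e_ij = LeakyReLU(a^T [W h_i || W h_j]) followed by a softmax over each node's neighbours (the "masked" self-attention). The shapes, the LeakyReLU slope, and the random example graph are illustrative and not taken from the paper's reference code.

import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(H, A, W, a):
    """H: (N, F) node features, A: (N, N) adjacency with self-loops,
    W: (F, Fp) weight matrix, a: (2*Fp,) attention vector."""
    Wh = H @ W                                    # (N, Fp) transformed features
    Fp = Wh.shape[1]
    src = Wh @ a[:Fp]                             # (N,) "source" half of a^T[Wh_i || Wh_j]
    dst = Wh @ a[Fp:]                             # (N,) "destination" half
    e = leaky_relu(src[:, None] + dst[None, :])   # (N, N) raw attention scores
    e = np.where(A > 0, e, -1e9)                  # mask: attend only to neighbours
    e = e - e.max(axis=1, keepdims=True)          # numerical stability
    alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)  # per-node softmax
    return alpha @ Wh                             # (N, Fp) aggregated node features

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 5))                       # 4 nodes, 5 input features
A = np.eye(4) + np.eye(4, k=1) + np.eye(4, k=-1)  # chain graph with self-loops
W = rng.normal(size=(5, 8))
a = rng.normal(size=(16,))
print(gat_layer(H, A, W, a).shape)                # (4, 8)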

3D Graph Neural Networks for RGBD Semantic Segmentation

This paper proposes a 3D graph neural network (3DGNN) that builds a k-nearest-neighbor graph on top of a 3D point cloud and uses back-propagation through time to train the model.
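
The graph-construction step mentioned above can be shown in a few lines of NumPy; the function below builds a k-nearest-neighbour graph over a toy point cloud (the 3DGNN message passing and the back-propagation-through-time training loop are omitted, and the array shapes are assumptions for illustration).

import numpy as np

def knn_graph(points: np.ndarray, k: int) -> np.ndarray:
    """points: (N, 3) xyz coordinates; returns (N, k) neighbour indices per point."""
    diffs = points[:, None, :] - points[None, :, :]   # (N, N, 3) pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1)            # (N, N) Euclidean distances
    np.fill_diagonal(dists, np.inf)                   # exclude self-edges
    return np.argsort(dists, axis=1)[:, :k]           # indices of the k nearest points

points = np.random.default_rng(0).uniform(size=(6, 3))
print(knn_graph(points, k=2))                         # each row: 2 nearest neighbours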

StoryDroid: Automated Generation of Storyboard for Android Apps

Inspired by the concept of storyboards in movie production, a system, StoryDroid, is proposed to automatically generate storyboards for Android apps and assist different roles in reviewing apps efficiently.

pix2code: Generating Code from a Graphical User Interface Screenshot

It is shown that deep learning methods can be leveraged to train a model end-to-end that automatically reverse-engineers user interfaces and generates code from a single input image with over 77% accuracy for three different platforms.