Corpus ID: 204801124

Sketch2Code: Transformation of Sketches to UI in Real-time Using Deep Neural Network

Vanita Jain, Piyush Agrawal, Subham Banga, Rishabh Kapoor, Shashwat Gulyani
User Interface (UI) prototyping is a necessary step in the early stages of application development. Transforming sketches of a Graphical User Interface (GUI) into a coded UI application is an uninspired but time-consuming task for a UI designer. An automated system that can replace human effort in the straightforward implementation of UI designs would greatly speed up this process. Prior work proposing such systems focuses primarily on UI wireframes as input rather than hand-drawn…
Automatic code generation from sketches of mobile applications in end-user development using Deep Learning
The Sketch2aia approach employs deep learning to detect the most frequent user interface components and their positions on a hand-drawn sketch, creating an intermediate representation of the user interface, and then automatically generates the App Inventor code for the wireframe.
Deep Learning for Free-Hand Sketch: A Survey
A comprehensive survey of the deep learning techniques oriented at free-hand sketch data, and the applications that they enable.
Deep learning-based prototyping of android GUI from hand-drawn mockups
YOLOv5, a fast and accurate deep learning framework, is used to automate the conversion of hand-drawn GUI mockups into Android-based GUI prototypes; experimental results show the effectiveness of the proposed approach in generating visually appealing Android GUIs from hand-drawn mockups.
STML (Sketch to Markup Language)
The creation of the boilerplate code for a website becomes much less time-consuming, which gives developers the freedom to test out many different designs and layouts before opting for the one that best suits their needs.
Closing the gap between designers and developers in a low code ecosystem
This work developed an innovative approach using model transformation and meta-modelling techniques that drastically improves the efficiency of transforming UX/UI design artefacts into low-code web-technology.
Text2App: A Framework for Creating Android Apps from Text Descriptions
Text2App is shown to generalize well to unseen combinations of app components and to handle noisy natural-language instructions; coupling the system with GPT-3, a large pretrained language model, demonstrates the possibility of creating applications from highly abstract instructions.


pix2code: Generating Code from a Graphical User Interface Screenshot
It is shown that deep learning methods can be leveraged to train a model end-to-end to automatically reverse engineer user interfaces and generate code from a single input image with over 77% accuracy across three different platforms.
Sketch classification with deep learning models
The proposed system, which utilizes the VGG-16 network model and performs two-stage fine-tuning, outperforms the previous state-of-the-art approaches on the TU Berlin sketch dataset, reaching 79.72% accuracy.
Reverse Engineering Mobile Application User Interfaces with REMAUI (T)
  • T. Nguyen, C. Csallner
  • Computer Science
  • 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)
  • 2015
The first technique to automatically Reverse Engineer Mobile Application User Interfaces (REMAUI) is introduced, which identifies user interface elements such as images, texts, containers, and lists, via computer vision and optical character recognition (OCR) techniques.
Feature Pyramid Networks for Object Detection
This paper exploits the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost and achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles.
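The core of the FPN idea — a top-down pathway that upsamples semantically strong, coarse feature maps and merges them with finer backbone maps via lateral connections — can be sketched as follows. This is a minimal, illustrative NumPy sketch with hypothetical feature-map shapes, not the paper's implementation (which also applies 1x1 and 3x3 convolutions around each merge).

```python
import numpy as np

def upsample2x(f):
    # Nearest-neighbor 2x upsampling of an (H, W) feature map.
    return f.repeat(2, axis=0).repeat(2, axis=1)

def fpn_merge(coarse, fine):
    """One top-down FPN step: upsample the coarser (semantically stronger)
    map and add the laterally connected finer map element-wise."""
    return upsample2x(coarse) + fine

# Backbone feature maps at decreasing resolution (toy values and shapes).
c3 = np.ones((8, 8))   # fine, high resolution
c4 = np.ones((4, 4))   # coarser
c5 = np.ones((2, 2))   # coarsest, strongest semantics

p5 = c5
p4 = fpn_merge(p5, c4)   # 4x4: upsampled p5 + lateral c4
p3 = fpn_merge(p4, c3)   # 8x8: upsampled p4 + lateral c3
```

Every pyramid level thus carries information from all coarser levels, which is what lets a detector attach the same head to each scale at marginal extra cost.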
Scalable Object Detection Using Deep Neural Networks
This work proposes a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest.
Visual Tracking with Fully Convolutional Networks
An in-depth study of the properties of CNN features pre-trained offline on massive image data for the ImageNet classification task shows that the proposed tracker significantly outperforms the state of the art.
You Only Look Once: Unified, Real-Time Object Detection
Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background, and outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.
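YOLO's defining trait is that a single forward pass predicts boxes for every cell of an S x S grid, with cell-relative offsets decoded into image-relative coordinates. The toy decoder below illustrates only that decoding step, with hypothetical prediction values; a real YOLO head also predicts class probabilities and applies non-maximum suppression.

```python
# YOLO-style single-pass decoding over an S x S grid: each cell predicts
# box offsets, size, and a confidence (toy values, not real network output).
S = 2
# preds[i][j] = (x_off, y_off, w, h, confidence), all in [0, 1].
preds = [[(0.0, 0.0, 0.0, 0.0, 0.0) for _ in range(S)] for _ in range(S)]
preds[1][0] = (0.5, 0.5, 0.4, 0.4, 0.9)  # one confident detection

def decode(preds, s, conf_thresh=0.5):
    """Turn cell-relative predictions into image-relative boxes in one pass."""
    boxes = []
    for i in range(s):
        for j in range(s):
            x_off, y_off, w, h, conf = preds[i][j]
            if conf >= conf_thresh:
                cx = (j + x_off) / s  # image-relative center x
                cy = (i + y_off) / s  # image-relative center y
                boxes.append((cx, cy, w, h, conf))
    return boxes

boxes = decode(preds, S)
```

Because detection is framed as one regression pass over the whole image rather than per-region classification, the network sees global context — which is why it produces fewer background false positives than region-based methods.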
Deep Residual Learning for Image Recognition
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
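The mechanism behind residual learning is the identity shortcut: stacked layers learn only the residual F(x) = H(x) - x, and the input is added back to their output. A minimal NumPy sketch, using plain matrix products as stand-ins for the paper's convolutions:

```python
import numpy as np

def layer(x, w):
    # Stand-in for a conv layer followed by ReLU (illustrative only).
    return np.maximum(0.0, x @ w)

def residual_block(x, w1, w2):
    """Residual block: two transformations plus an identity shortcut,
    so the block only has to learn the residual F(x) = H(x) - x."""
    y = layer(x, w1)               # first transformation + ReLU
    y = y @ w2                     # second transformation, no activation yet
    return np.maximum(0.0, y + x)  # add the shortcut, then ReLU

# With zero weights the block reduces to the identity for non-negative
# inputs, which is why very deep stacks of such blocks remain optimizable.
x = np.array([[1.0, 2.0, 3.0]])
w_zero = np.zeros((3, 3))
out = residual_block(x, w_zero, w_zero)
```

The design choice is that "doing nothing" is the easy default: extra blocks cannot hurt the representation the way extra plain layers can, so accuracy can keep improving with depth.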
A review of the use of examples for automating architectural design tasks
The review shows that initial hand-operated SGs gave way to automatic generation, which in turn developed into automated SG extraction, through increasing levels of computational capabilities, and as an overall result, example-based research perspectives raise important possibilities for intelligent design systems.
Optical character recognition: an illustrated guide to the frontier
A perspective on the performance of current OCR systems is offered by illustrating and explaining actual OCR errors made by three commercial devices, and possible approaches for improving the accuracy of today's systems are identified.