VINS: Visual Search for Mobile User Interface Design

@inproceedings{bunian2021vins,
  title={VINS: Visual Search for Mobile User Interface Design},
  author={Sara Bunian and Kai Li and Chaima Jemmali and Casper Harteveld and Yun Raymond Fu and Magy Seif El-Nasr},
  booktitle={Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
  year={2021}
}
  • Published 10 February 2021
  • Computer Science
Searching for relevant mobile user interface (UI) design examples can aid interface designers in gaining inspiration and comparing design alternatives. However, finding such examples is challenging, especially as current search systems rely only on text-based queries and do not take UI structure and content into account. This paper introduces VINS, a visual search framework that takes as input a UI image (wireframe or high-fidelity) and retrieves visually similar design examples…


PSDoodle: Searching for App Screens via Interactive Sketching
PSDoodle is the first tool that searches for screens from partial sketches in an interactive, iterative way, retrieving search results relevant to the user's sketch query.
Learning User Interface Semantics from Heterogeneous Networks with Multimodal and Positional Attributes
The novel Heterogeneous Attention-based Multimodal Positional (HAMP) graph neural network model is proposed, which combines graph neural networks with the scaled dot-product attention used in transformers to learn, in a unified manner, the embeddings of heterogeneous nodes and their associated multimodal and positional attributes.
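The scaled dot-product attention mentioned above is the standard transformer building block, softmax(QKᵀ/√d)·V. A minimal plain-Python sketch of that formula (the toy inputs below are illustrative, not from the HAMP model):

```python
import math

def scaled_dot_product_attention(Q, K, V):
    """Plain-Python scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])  # query/key dimension used for the 1/sqrt(d) scaling
    # scores[i][j] = (q_i . k_j) / sqrt(d)
    scores = [[sum(q * k for q, k in zip(qi, kj)) / math.sqrt(d) for kj in K]
              for qi in Q]
    # Row-wise softmax over the scores (max-subtraction for stability).
    weights = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        weights.append([e / z for e in exps])
    # Each output row is the attention-weighted sum of the value vectors.
    return [[sum(w * v[j] for w, v in zip(wrow, V)) for j in range(len(V[0]))]
            for wrow in weights]
```

With one-hot values, each output row is just the attention weights themselves, which makes the weighting easy to inspect.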
PSDoodle: Fast App Screen Search via Partial Screen Doodle
PSDoodle is the first system to support interactive screen search via partial sketching; it matched the top-10 screen retrieval accuracy of the state of the art from the SWIRE line of work while roughly halving the average time required.
Understanding Screen Relationships from Screenshots of Smartphone Applications
Two ML models that understand similarity in different ways are trained: a screen similarity model, which combines a UI object detector with a transformer architecture to recognize instances of the same screen across screenshots from a single app, and a screen transition model, which uses a siamese network architecture to identify both similarity and three types of events that appear in an interaction trace.
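The siamese idea above — two inputs passed through the same shared-weight encoder, with similarity computed between the resulting embeddings — can be sketched minimally in plain Python. The random-projection "encoder" here is a hypothetical stand-in, not the paper's actual screenshot model:

```python
import math
import random

random.seed(0)

# Hypothetical shared "encoder": one random linear projection standing in
# for a real screenshot encoder. Both screens pass through the SAME
# weights -- that weight sharing is what makes the architecture siamese.
DIM_IN, DIM_OUT = 16, 4
W = [[random.gauss(0, 1) for _ in range(DIM_IN)] for _ in range(DIM_OUT)]

def encode(features):
    """Project a raw feature vector into the shared embedding space."""
    return [math.tanh(sum(w * x for w, x in zip(row, features))) for row in W]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similarity(screen_a, screen_b):
    """Siamese comparison: encode both inputs with shared weights, then compare."""
    return cosine(encode(screen_a), encode(screen_b))

screen = [random.gauss(0, 1) for _ in range(DIM_IN)]
near = [x + 0.01 * random.gauss(0, 1) for x in screen]  # near-duplicate screen
far = [random.gauss(0, 1) for _ in range(DIM_IN)]       # unrelated screen
```

In a real system the encoder would be trained so that same-screen pairs score high and unrelated pairs score low; the sketch only shows the shared-weight comparison structure.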
Conversations with GUIs
This work elicits user needs in a survey with three target groups (designers, developers, end-users), providing insights into which capabilities would be useful and how users formulate queries and demonstrates an application of a conversational assistant that interprets these queries and retrieves information from a large-scale GUI dataset.
FitVid: Responsive and Flexible Video Content Adaptation
This work presents FitVid, a system that provides responsive and customizable video content, and an adaptation pipeline that reverse-engineers pixels to retrieve design elements from videos, leveraging deep learning with a custom dataset to support mobile-optimized learning.
Screen2Words: Automatic Mobile UI Summarization with Multimodal Learning
Screen2Words is presented, a novel screen summarization approach that automatically encapsulates essential information of a UI screen into a coherent language phrase that can generate high-quality summaries for mobile screens.
Learning Semantically Rich Network-Based Multi-Modal Mobile User Interface Embeddings
This article presents a novel self-supervised model - Multi-modal Attention-based Attributed Network Embedding (MAAN) model, designed to capture structural network information present within the linkages between UI entities, as well as multi-modal attributes of the UI entity nodes.
Understanding Mobile GUI: from Pixel-Words to Screen-Sentences
A detector is trained to extract Pixel-Words from screenshots on a dataset and achieve metadata-free GUI understanding during inference, and the effectiveness of PW2SS is further verified in the GUI understanding tasks including relation prediction, clickability prediction, screen retrieval, and app type classification.
Overview of the 2021 ImageCLEFdrawnUI Task: Detection and Recognition of Hand Drawn and Digital Website UIs
Participants were challenged to develop machine learning solutions to analyze images of user interfaces and extract the position and type of its different elements, such as images, buttons and text.


Rewire: Interface Design Assistance from Examples
Rewire is an interactive system that helps designers leverage example screenshots: it automatically infers a vector representation of a screenshot in which each UI component is a separate object with editable shape and style properties, and it provides three design-assistance modes that help designers reuse or redraw components of the example design.
Swire: Sketch-based User Interface Retrieval
With this technique, for the first time designers can accurately retrieve relevant user interface examples with free-form sketches natural to their design workflows, and several novel applications driven by Swire are demonstrated that could greatly augment the user interface design process.
Retrieving Web Page Layouts using Sketches to Support Example-based Web Design
The result of an empirical user study is reported: users found the system very useful, and the layouts designed using the system were scored highly by evaluators.
d.tour: style-based exploration of design example galleries
Exploratory techniques for finding relevant and inspiring design examples are introduced, including searching by stylistic similarity to a known example design and searching by style-based keyword.
Reverse Engineering Mobile Application User Interfaces with REMAUI (T)
  • T. Nguyen, Christoph Csallner
  • Computer Science, Art
    2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)
  • 2015
The first technique to automatically Reverse Engineer Mobile Application User Interfaces (REMAUI) is introduced, which identifies user interface elements such as images, texts, containers, and lists, via computer vision and optical character recognition (OCR) techniques.
DesignScape: Design with Interactive Layout Suggestions
This paper presents DesignScape, a system which aids the design process by making interactive layout suggestions, i.e., changes in the position, scale, and alignment of elements, and investigates two interfaces for interacting with suggestions.
GUIFetch: Supporting App Design and Development through GUI Search
  • Farnaz Behrang, S. Reiss, A. Orso
  • Computer Science
    2018 IEEE/ACM 5th International Conference on Mobile Software Engineering and Systems (MOBILESoft)
  • 2018
GUIFetch is a technique that takes as input the sketch for an app and leverages the growing number of open source apps in public repositories to identify apps with GUIs and transitions that are similar to those in the provided sketch.
Designers frequently use examples during the design process as a way to provide a visual framework, allow for re-interpretation, and allow for evaluation of design ideas. Although the use of examples…
Designing with interactive example galleries
Whether people can realize significant value from explicit mechanisms for designing by example modification is explored, finding that independent raters prefer designs created with the aid of examples, that examples may benefit novices more than experienced designers, and that users prefer adaptively selected examples to random ones.
Rico: A Mobile App Dataset for Building Data-Driven Design Applications
Rico is presented, the largest repository of mobile app designs to date, created to support five classes of data-driven applications: design search, UI layout generation, UI code generation, user interaction modeling, and user perception prediction.