Corpus ID: 245634519

VisQA: Quantifying Information Visualisation Recallability via Question Answering

@article{Wang2021VisQAQI,
  title={VisQA: Quantifying Information Visualisation Recallability via Question Answering},
  author={Yaohua Wang and Chuhan Jiao and Mihai B{\^a}ce and Andreas Bulling},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.15217}
}
Despite its importance for assessing the effectiveness of communicating information visually, fine-grained recallability of information visualisations has not been studied quantitatively so far. In this work we propose a question-answering paradigm to study visualisation recallability and present VisRecall — a novel dataset consisting of 200 visualisations that are annotated with crowd-sourced human (N = 305) recallability scores obtained from 1,000 questions of five question types… 
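As a concrete reading of the question-answering paradigm described above, the sketch below shows one way a per-visualisation recallability score could be aggregated from crowd-sourced answers. The mean-correctness aggregation and the function name are illustrative assumptions, not the paper's exact scoring definition.

```python
from statistics import mean

def recallability_score(answers):
    """Aggregate crowd-sourced QA correctness into one score per visualisation.

    `answers` maps each question id to a list of booleans, one per participant,
    indicating whether the participant answered correctly from memory.
    NOTE: averaging per question and then across questions is an assumption
    for illustration; the paper may weight its five question types differently.
    """
    per_question = [mean(correct) for correct in answers.values()]
    return mean(per_question)

# Hypothetical toy input: two questions, three participants each.
print(recallability_score({"q1": [True, True, False], "q2": [False, True, False]}))
```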
1 Citation
Impact of Gaze Uncertainty on AOIs in Information Visualisations
TLDR
This work contributes a novel investigation into gaze uncertainty and quantifies its impact on AOI-based analysis of visualisations using two novel metrics: the Flipping Candidate Rate (FCR) and Hit Any AOI Rate (HAAR).

References

Showing 1-10 of 60 references
Exploring visual attention and saliency modeling for task-based visual analysis
Adam: A Method for Stochastic Optimization
TLDR
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
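For reference, the update rule this summary describes fits in a few lines; the NumPy sketch below implements one Adam step with the paper's default hyperparameters (a minimal sketch, not a drop-in optimiser).

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (step counter t starts at 1)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment: running mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment: running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias correction for the
    v_hat = v / (1 - beta2 ** t)              # zero-initialised moments
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```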
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
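The key design choice, very small (3×3) convolution filters stacked deeply, can be illustrated with a short PyTorch sketch; this is an illustration of the idea, not the authors' original configuration.

```python
import torch.nn as nn

# Two stacked 3x3 convolutions cover a 5x5 receptive field with
# 2 * 9 * C^2 weights (ignoring biases) versus 25 * C^2 for a single
# 5x5 convolution, and add an extra non-linearity in between.
def vgg_style_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2, stride=2),  # halve the spatial resolution
    )
```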
Predicting Visual Importance Across Graphic Design Types
TLDR
A Unified Model of Saliency and Importance (UMSI) is introduced, which learns to predict visual importance in input graphic designs and saliency in natural images, and includes an automatic classification module to classify the input type; a new dataset and applications are also presented.
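The structure the summary names (shared features, a module that classifies the input, and a spatial importance/saliency output) is sketched below in PyTorch; every layer choice here is an assumption for illustration, not UMSI's published architecture.

```python
import torch
import torch.nn as nn

class UMSIStyleModel(nn.Module):
    """Toy stand-in: a shared encoder feeds both a classifier over input
    types (e.g. natural image vs. design categories) and a head that
    predicts a per-pixel importance/saliency map. Sizes are illustrative."""
    def __init__(self, num_input_types=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.type_classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_input_types)
        )
        self.importance_head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        feats = self.encoder(x)
        return self.type_classifier(feats), torch.sigmoid(self.importance_head(feats))
```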
Xception: Deep Learning with Depthwise Separable Convolutions
  • François Chollet · 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) · 2017
TLDR
This work proposes a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions, and shows that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset, and significantly outperforms it on a larger image classification dataset.
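The building block named here, the depthwise separable convolution, factorises a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise mix; below is a minimal PyTorch sketch (the paper's reference implementation is in Keras).

```python
import torch.nn as nn

def separable_conv(in_ch, out_ch):
    """Depthwise separable convolution: spatial filtering within each channel
    (groups=in_ch), then a 1x1 pointwise convolution to mix channels.
    Cost is roughly 9*C + C*C_out weights versus 9*C*C_out for a full 3x3."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise
    )
```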
Beyond Memorability: Visualization Recognition and Recall
TLDR
It is shown that visualizations memorable “at-a-glance” are also capable of effectively conveying the message of the visualization, and thus, a memorable visualization is often also an effective one.
PlotQA: Reasoning over Scientific Plots
TLDR
This work proposes PlotQA and a more holistic, hybrid approach that can address fixed-vocabulary as well as out-of-vocabulary (OOV) questions: specific questions are answered by choosing the answer from a fixed vocabulary or by extracting it from a predicted bounding box in the plot, while other questions are answered by a table question-answering engine fed with a structured table generated by detecting visual elements in the image.
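The hybrid dispatch described above can be made concrete with a small sketch; every component passed in below is a hypothetical stand-in, since the summary does not specify the actual modules.

```python
def hybrid_answer(question, plot, classify, vocab_qa, extract_value, table_qa):
    """Route a plot question along the three paths the summary names.
    All callable arguments are hypothetical stand-in components."""
    kind = classify(question)
    if kind == "fixed_vocab":
        return vocab_qa(question, plot)       # choose from a closed answer set
    if kind == "extractive":
        return extract_value(question, plot)  # read a value at a predicted box
    return table_qa(question, plot)           # QA over a reconstructed table

# Toy usage with trivial stand-ins:
print(hybrid_answer(
    "Is the 2019 bar taller than the 2018 bar?", {"2018": 3, "2019": 5},
    classify=lambda q: "fixed_vocab" if q.startswith("Is ") else "open",
    vocab_qa=lambda q, p: "Yes" if p["2019"] > p["2018"] else "No",
    extract_value=lambda q, p: None,
    table_qa=lambda q, p: None,
))  # -> Yes
```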
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
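The residual idea in a few lines of PyTorch: the block learns a correction F(x) that is added back to its input, so an identity mapping is trivially representable. This is a minimal sketch of the basic block, not the paper's deeper bottleneck variant.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = F(x) + x: the skip connection lets identity mappings and
    gradients bypass the convolutional stack."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)  # the residual shortcut
```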
VQA: Visual Question Answering
We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer.
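A common way to operationalise this task is to fuse an image embedding and a question embedding and classify over a fixed set of frequent answers; the PyTorch sketch below follows that pattern with toy projection layers, and is an assumption-laden illustration rather than the paper's exact LSTM+CNN baseline.

```python
import torch
import torch.nn as nn

class VQASketch(nn.Module):
    """Fuse image and question features by element-wise product, then
    classify over a closed set of candidate answers. Dimensions and the
    feature encoders (assumed precomputed) are illustrative choices."""
    def __init__(self, img_dim=2048, q_dim=300, hidden=512, num_answers=1000):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)
        self.q_proj = nn.Linear(q_dim, hidden)
        self.classifier = nn.Linear(hidden, num_answers)

    def forward(self, img_feat, q_feat):
        fused = torch.tanh(self.img_proj(img_feat)) * torch.tanh(self.q_proj(q_feat))
        return self.classifier(fused)  # scores over candidate answers
```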
Low-level components of analytic activity in information visualization
TLDR
This work presents a set of ten low-level analysis tasks that largely capture people's activities while employing information visualization tools for understanding data, and suggests that the tasks may provide a form of checklist for system designers.
...