How Far Are We From Quantifying Visual Attention in Mobile HCI?

@article{Bce2020HowFA,
  title={How Far Are We From Quantifying Visual Attention in Mobile HCI?},
  author={Mihai B{\^a}ce and Sander Staal and Andreas Bulling},
  journal={IEEE Pervasive Computing},
  year={2020},
  volume={19},
  pages={46-55}
}

With an ever-increasing number of mobile devices competing for attention, quantifying when, how often, or for how long users look at their devices has emerged as a key challenge in mobile human-computer interaction. Encouraged by recent advances in automatic eye contact detection using machine learning and device-integrated cameras, we provide a fundamental investigation into the feasibility of quantifying overt visual attention during everyday mobile interactions. In this article, we discuss…
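The article's starting point is automatic eye contact detection with device-integrated cameras. As a rough, minimal sketch of such a pipeline (not the authors' method), the following Python snippet uses OpenCV's stock Haar cascades to flag frames in which a face with a visible eye appears, a crude proxy for on-device visual attention; the camera index and the frame budget are arbitrary assumptions.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def attention_proxy(frame):
    # True if a face with at least one visible eye is detected in the frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        if len(eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])) > 0:
            return True
    return False

cap = cv2.VideoCapture(0)  # stand-in for the front-facing camera
attended = total = 0
while total < 300:  # roughly 10 s of video at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    total += 1
    attended += attention_proxy(frame)
cap.release()
print(f"attention proxy fired on {attended}/{total} frames")
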
Eyewear 2021: The Fourth Workshop on Eyewear Computing – Augmenting Social Situations and Democratizing Tools
This workshop focuses on supporting large-scale uses of eyewear computing, discussing lessons learned from early deployment and how to empower the community with better hardware/software prototyping tools as well as the establishment of open data sets.

References

Showing 1-10 of 26 references
Forecasting user attention during everyday mobile interactions using device-integrated and wearable sensors
This work presents a novel long-term dataset of everyday mobile phone interactions, continuously recorded from 20 participants engaged in common activities on a university campus over 4.5 hours each, and proposes a proof-of-concept method that can forecast bidirectional attention shifts and predict whether the primary attentional focus is on the handheld mobile device.
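To make the forecasting setup concrete, here is a hedged toy sketch: features are computed over fixed-length accelerometer windows and a logistic-regression classifier predicts whether an attention shift occurs in the next window. The features and the synthetic data are illustrative assumptions, not the paper's actual sensors, features, or model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def window_features(accel):
    # Per-axis mean/std plus magnitude statistics for one 50x3 accel window.
    mag = np.linalg.norm(accel, axis=1)
    return np.concatenate([accel.mean(0), accel.std(0),
                           [mag.mean(), mag.std()]])

# Synthetic stand-ins for labeled windows; 1 = shift in the next window.
X = np.stack([window_features(rng.normal(size=(50, 3))) for _ in range(400)])
y = rng.integers(0, 2, size=400)

clf = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
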
Everyday Eye Contact Detection Using Unsupervised Gaze Target Discovery
This work presents a novel method for eye contact detection that combines a state-of-the-art appearance-based gaze estimator with an approach for unsupervised gaze target discovery, i.e., without the need for tedious and time-consuming manual data annotation.
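The idea can be illustrated with a small, hypothetical sketch: given gaze direction estimates from an appearance-based estimator, cluster them and label the cluster whose center points closest to the camera axis as eye contact, so no manual annotation is needed. The synthetic directions below stand in for real estimates.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic unit gaze vectors: one cluster near the camera axis (0, 0, 1),
# one aimed at some other target in the scene.
gaze = np.vstack([rng.normal([0.0, 0.0, 1.0], 0.05, size=(200, 3)),
                  rng.normal([0.4, -0.2, 0.9], 0.05, size=(200, 3))])
gaze /= np.linalg.norm(gaze, axis=1, keepdims=True)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(gaze)
camera_axis = np.array([0.0, 0.0, 1.0])
contact = int(np.argmax(km.cluster_centers_ @ camera_axis))
is_contact = km.labels_ == contact
print(f"{is_contact.sum()} of {len(is_contact)} samples labeled as eye contact")
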
Gaze locking: passive eye contact detection for human-object interaction
This work proposes a passive, appearance-based approach for sensing eye contact in an image by focusing on gaze *locking* rather than gaze tracking, and demonstrates how this method facilitates human-object interaction, user analytics, image filtering, and gaze-triggered photography.
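A hedged sketch of the gaze-locking framing as a binary classification problem (simplified; the paper's actual pipeline differs): reduce flattened eye patches with PCA and train a linear classifier to separate direct eye contact from everything else. The data here is synthetic.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 24 * 48))  # stand-ins for flattened eye patches
y = rng.integers(0, 2, size=400)     # 1 = direct eye contact ("locked")

clf = make_pipeline(PCA(n_components=50), LinearSVC(max_iter=5000))
clf.fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
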
EyeTab: model-based gaze estimation on unmodified tablet computers
EyeTab is presented, a model-based approach for binocular gaze estimation that runs entirely on an unmodified tablet; it builds on a set of established image processing and computer vision algorithms and adapts them for robust, near-real-time gaze estimation.
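As a greatly simplified stand-in for model-based gaze estimation (not EyeTab's actual limbus-ellipse back-projection), the sketch below locates the iris in an eye crop with a Hough circle transform and reads a coarse gaze direction from its offset; all parameters and the synthetic test image are illustrative.

import cv2
import numpy as np

def iris_offset(eye_gray):
    # Normalized (x, y) offset of the iris from the eye-crop center,
    # roughly proportional to the horizontal/vertical gaze angle.
    blurred = cv2.medianBlur(eye_gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=20, param1=100, param2=15,
                               minRadius=5, maxRadius=25)
    if circles is None:
        return None
    cx, cy, _ = circles[0][0]
    h, w = eye_gray.shape
    return cx / w - 0.5, cy / h - 0.5

# Synthetic eye crop: bright "sclera" with a dark disc as the iris.
eye = np.full((40, 80), 200, np.uint8)
cv2.circle(eye, (30, 20), 10, 40, -1)
print(iris_offset(eye))
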
Eye Tracking for Everyone
iTracker, a convolutional neural network for eye tracking, is trained and achieves a significant reduction in error over previous approaches while running in real time (10-15 fps) on a modern mobile device.
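The following PyTorch sketch shows the general shape of such a network at toy scale (far smaller than iTracker itself): convolutional towers over face and eye crops, fused by fully connected layers that regress a 2D on-screen gaze point. All layer sizes and input resolutions are assumptions.

import torch
import torch.nn as nn

def tower():
    # Small conv stack producing a 32 * 4 * 4 = 512-dim feature vector.
    return nn.Sequential(
        nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten())

class TinyTracker(nn.Module):
    def __init__(self):
        super().__init__()
        self.face, self.eyes = tower(), tower()
        self.head = nn.Sequential(nn.Linear(1024, 128), nn.ReLU(),
                                  nn.Linear(128, 2))  # (x, y) gaze point

    def forward(self, face, eyes):
        return self.head(torch.cat([self.face(face), self.eyes(eyes)], dim=1))

model = TinyTracker()
xy = model(torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64))
print(xy.shape)  # torch.Size([8, 2])
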
In the Eye of the Beholder: A Survey of Models for Eyes and Gaze
D. Hansen and Q. Ji. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010.
This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains in computer vision and beyond.
Pupil: an open source platform for pervasive eye tracking and mobile gaze-based interaction
Pupil is an accessible, affordable, and extensible open source platform for pervasive eye tracking and gaze-based interaction and includes state-of-the-art algorithms for real-time pupil detection and tracking, calibration, and accurate gaze estimation.
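For flavor, here is a minimal dark-pupil detection sketch, which is not Pupil's actual detector: threshold the darkest blob in an (ideally infrared) eye image and fit an ellipse to its contour to obtain a sub-pixel pupil center. The synthetic test image is an assumption.

import cv2
import numpy as np

def detect_pupil(eye_gray):
    # Otsu threshold (inverted) isolates the dark pupil region.
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:  # fitEllipse needs at least 5 points
        return None
    (cx, cy), axes, angle = cv2.fitEllipse(largest)
    return cx, cy

# Synthetic eye image: bright background with a dark pupil disc.
eye = np.full((60, 90), 180, np.uint8)
cv2.circle(eye, (45, 30), 12, 20, -1)
print(detect_pupil(eye))
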
Understanding Face and Eye Visibility in Front-Facing Cameras of Smartphones used in the Wild
It is found that a state-of-the-art face detection algorithm performs poorly on photos taken with front-facing cameras, and that in most cases the face is only partially visible.
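The kind of measurement behind such a finding can be sketched in a few lines: run an off-the-shelf face detector over a folder of front-camera photos (the directory name below is hypothetical) and report how often a face is found at all.

import glob
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

paths = glob.glob("front_camera_photos/*.jpg")  # hypothetical directory
detected = 0
for path in paths:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is not None and len(detector.detectMultiScale(img, 1.3, 5)) > 0:
        detected += 1
if paths:
    print(f"faces detected in {detected}/{len(paths)} photos "
          f"({100 * detected / len(paths):.1f}%)")
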
EyePliances: attention-seeking devices that respond to visual attention
EyePliances are appliances and devices that detect and respond to human visual attention using eye contact sensors; they receive implicit input from users, in the form of eye gaze, and respond by opening communication channels.
Appearance-based gaze estimation in the wild
This work presents an extensive evaluation of several state-of-the-art image-based gaze estimation algorithms on three current datasets, including the MPIIGaze dataset, which contains 213,659 images collected from 15 participants during natural everyday laptop use over more than three months.
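As a hedged baseline in the spirit of appearance-based gaze estimation (much weaker than the CNNs evaluated on MPIIGaze): ridge regression from flattened eye patches plus head pose to 2D gaze angles. Everything below is synthetic; MPIIGaze itself provides 36x60 grayscale eye patches.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n = 500
eye_pixels = rng.normal(size=(n, 36 * 60))   # flattened eye patches
head_pose = rng.normal(size=(n, 2))          # head yaw/pitch
X = np.hstack([eye_pixels, head_pose])
y = rng.normal(size=(n, 2))                  # gaze yaw/pitch (radians)

reg = Ridge(alpha=10.0).fit(X[:400], y[:400])
err = np.abs(reg.predict(X[400:]) - y[400:]).mean()
print(f"mean absolute angular error (synthetic): {err:.3f} rad")
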