I know what you are reading: recognition of document types using mobile eye tracking
@inproceedings{Kunze2013IKW, title={I know what you are reading: recognition of document types using mobile eye tracking}, author={Kai Kunze and Yuzuko Utsumi and Yuki Shiga and Koichi Kise and Andreas Bulling}, booktitle={International Symposium on Wearable Computers (ISWC)}, year={2013} }
Reading is a ubiquitous activity that many people even perform in transit, such as on the bus or while walking. Tracking reading enables us to gain more insights about the expertise level and potential knowledge of users -- towards a reading log that tracks and improves knowledge acquisition. As a first step towards this vision, in this work we investigate whether different document types can be automatically detected from visual behaviour recorded using a mobile eye tracker. We present an…
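The truncated abstract sketches the core idea: infer the document type from recorded visual behaviour. As a loose illustration of such a pipeline (the feature set, windowing scheme, and random-forest classifier below are assumptions for illustration, not the authors' actual method), basic fixation and saccade statistics per recording window can be fed to an off-the-shelf classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def gaze_window_features(fix_durations_ms, saccade_amps_deg, saccade_dirs_rad):
    """Aggregate one recording window of fixations/saccades into a feature vector."""
    return np.array([
        np.mean(fix_durations_ms),          # average fixation duration (ms)
        np.std(fix_durations_ms),           # variability of fixation duration
        len(fix_durations_ms),              # fixation count in the window
        np.mean(saccade_amps_deg),          # average saccade amplitude (deg)
        np.mean(np.cos(saccade_dirs_rad)),  # horizontal directional bias
        np.mean(np.sin(saccade_dirs_rad)),  # vertical directional bias
    ])

# Synthetic stand-in data: one feature vector per recording window, with a
# placeholder document-type label per window (4 types).
rng = np.random.default_rng(0)
X = np.vstack([
    gaze_window_features(
        rng.gamma(2.0, 120.0, 80),        # fixation durations (ms)
        rng.gamma(2.0, 2.0, 79),          # saccade amplitudes (deg)
        rng.uniform(-np.pi, np.pi, 79),   # saccade directions (rad)
    )
    for _ in range(40)
])
y = np.arange(40) % 4  # placeholder labels; real labels would come from annotated sessions

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance level on this random data
```

With real recordings, the synthetic arrays would be replaced by labelled windows from sessions with different document types.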
77 Citations
Using the Eye Gaze to Predict Document Reading Subjective Understanding
- Computer Science · 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR)
- 2017
It is proved that the subjective understanding of the reader can be predicted more accurately by using the eye gaze than by asking a multiple choice question.
Real-life Activity Recognition - Focus on Recognizing Reading Activities
- Computer Science · CBDAR
- 2013
This work focuses on how to use body-worn devices for activity recognition and, more generally, how to combine them with infrastructure sensing.
The Wordometer -- Estimating the Number of Words Read Using Document Image Retrieval and Mobile Eye Tracking
- Computer Science · 2013 12th International Conference on Document Analysis and Recognition
- 2013
The Wordometer is introduced, a novel method to estimate the number of words a user reads using a mobile eye tracker and document image retrieval; the authors believe it can serve as a step counter for the information people read, making their "knowledge life" healthier.
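The summary above only names the ingredients (mobile eye tracking plus document image retrieval). Purely as a toy illustration of the "word counter" idea, and not the Wordometer's actual algorithm, a crude estimate could scale the number of reading fixations by an assumed words-per-fixation constant:

```python
# Toy illustration only: a rough word count from reading fixations. This is
# NOT the Wordometer's actual algorithm (which combines mobile eye tracking
# with document image retrieval); the constant is an assumed ballpark value.

AVG_WORDS_PER_FIXATION = 1.2  # assumption: roughly one word or slightly more per fixation

def estimate_words_read(num_reading_fixations: int) -> int:
    """Very rough estimate of words read in a session from the fixation count."""
    return round(num_reading_fixations * AVG_WORDS_PER_FIXATION)

print(estimate_words_read(2500))  # -> 3000
```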
Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition
- Computer Science · Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.
- 2018
This paper strengthens eye-based activity recognition by adding mid-level gaze features: features that add a level of abstraction over low-level features, incorporating some knowledge of the activity but not of the stimulus.
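To make the low- versus mid-level distinction above concrete, here is a small hedged sketch: low-level features are plain fixation/saccade statistics, while mid-level features encode some knowledge of the activity (reading-like, left-to-right scanning) without knowing the stimulus. The specific features and thresholds are illustrative assumptions, not the paper's exact set.

```python
import numpy as np

def low_level_features(fix_dur_ms, sacc_dx_deg, sacc_dy_deg):
    """Low-level features: plain statistics of fixations and saccades."""
    return {
        "mean_fixation_ms": float(np.mean(fix_dur_ms)),
        "mean_saccade_amp_deg": float(np.mean(np.hypot(sacc_dx_deg, sacc_dy_deg))),
    }

def mid_level_features(sacc_dx_deg, sacc_dy_deg):
    """Mid-level features: encode knowledge of the activity (reading-like scanning)."""
    amp = np.hypot(sacc_dx_deg, sacc_dy_deg)
    # "Reading-like" saccades: short, mostly horizontal, moving rightward.
    reading_like = (sacc_dx_deg > 0) & (np.abs(sacc_dy_deg) < 0.5) & (amp < 5.0)
    # Line sweeps: long leftward saccades, as at the end of a text line.
    line_sweep = (sacc_dx_deg < -5.0) & (np.abs(sacc_dy_deg) < 2.0)
    return {
        "reading_saccade_ratio": float(np.mean(reading_like)),
        "line_sweep_count": int(np.sum(line_sweep)),
    }

dx = np.array([1.5, 2.0, -12.0, 1.8, 2.2, -11.0])  # saccade x-components (deg)
dy = np.array([0.1, -0.2, 1.0, 0.0, 0.3, 0.8])     # saccade y-components (deg)
print(low_level_features([220, 250, 180, 240, 230, 210], dx, dy))
print(mid_level_features(dx, dy))
```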
Where Are You Looking At? - Feature-Based Eye Tracking on Unmodified Tablets
- Computer Science · 2013 2nd IAPR Asian Conference on Pattern Recognition
- 2013
This paper introduces a feature-based eye tracking approach and a system that works on a commodity tablet; as a reference, data were recorded from 5 subjects following an animation on screen.
Hand-eye Coordination for Textual Difficulty Detection in Text Summarization
- Computer Science · ICMI
- 2020
This paper categorizes the summary writing process into different phases, extracts gaze and typing features from each phase according to characteristics of eye-gaze behaviour and typing dynamics, and builds a classifier that achieves an accuracy of 91.0% for difficulty level detection.
EToS-1: Eye Tracking on Shopfloors for User Engagement with Automation
- Computer Science · AutomationXP@CHI
- 2022
The overarching goal is to better understand the visual inspection behavior of machine operators on shopfloors and to find ways to provide them with attention-aware and context-aware assistance through MR headsets that increasingly come with eye tracking (ET) as a default feature.
Reading Activity Recognition Using an Off-the-Shelf EEG -- Detecting Reading Activities and Distinguishing Genres of Documents
- Computer Science · 2013 12th International Conference on Document Analysis and Recognition
- 2013
A new paradigm focusing on recognizing the activities and habits of users while they are reading is introduced, with evidence that reading and non-reading activities can be separated across 3 users and 6 classes, perfectly separating reading from non-reading.
Reading-based Screenshot Summaries for Supporting Awareness of Desktop Activities
- Computer Science · AH
- 2016
The results show the utility of eye tracking data, and more specifically of using reading detection to determine key activities throughout the day to inform the creation of activity summaries that are more relevant and require less time to review.
Vertical Error Correction Using Classification of Transitions between Sequential Reading Segments
- Computer Science · J. Inf. Process.
- 2017
A method that achieves 87% mapping accuracy by first taking adjacent horizontally progressive fixations as segments and then classifying the segments into six classes using a random forest classifier.
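As an illustration of the segment-classification step described above (the feature definitions and the six class names below are placeholders, not the paper's), a random forest can be trained on features of the transition between consecutive reading segments:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical names for the six transition classes (the paper defines its own).
TRANSITION_CLASSES = [
    "next_line", "same_line", "skip_line", "reread_previous", "jump_back", "other",
]

def segment_transition_features(seg_a, seg_b):
    """Features describing the transition between two reading segments.

    Each segment is an (N, 2) array of fixation (x, y) positions in pixels.
    """
    return np.array([
        seg_b[0, 0] - seg_a[-1, 0],  # horizontal jump from end of one segment to start of the next
        seg_b[0, 1] - seg_a[-1, 1],  # vertical jump between the segments
        np.ptp(seg_a[:, 0]),         # horizontal extent of the first segment
        len(seg_a),                  # length of the first segment (fixations)
        len(seg_b),                  # length of the second segment (fixations)
    ])

# Training is then a standard supervised problem. X below is random placeholder
# data with the same dimensionality as the features above; real rows would be
# stacked outputs of segment_transition_features with annotated labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = np.arange(120) % len(TRANSITION_CLASSES)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(TRANSITION_CLASSES[clf.predict(X[:1])[0]])
```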
References
Showing 1-10 of 15 references
Towards robust gaze-based objective quality measures for text
- Computer Science · ETRA
- 2012
This paper finds that the number of regression targets, the reading-to-skimming ratio, reading speed, and reading count are the most discriminative features for distinguishing very comprehensible from barely comprehensible text passages.
A robust realtime reading-skimming classifier
- Computer Science · ETRA
- 2012
A method is proposed for constructing and training a classifier that robustly distinguishes reading from skimming patterns, providing a linear classification into the two classes "read" and "skimmed".
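In the spirit of the read-versus-skim classifier summarized above, a minimal linear-model sketch follows; the two features and the logistic-regression choice are assumptions for illustration, not the cited paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: skimming is modelled here with longer forward
# saccades and shorter fixations than careful reading (an assumption for the
# sketch, not measured values).
rng = np.random.default_rng(0)
read = np.column_stack([rng.normal(2.0, 0.5, 100), rng.normal(240, 30, 100)])
skim = np.column_stack([rng.normal(5.0, 1.0, 100), rng.normal(160, 30, 100)])
X = np.vstack([read, skim])          # columns: [saccade_deg, fixation_ms]
y = np.array([0] * 100 + [1] * 100)  # 0 = read, 1 = skim

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[2.2, 230.0], [6.0, 150.0]]))  # expected: [0 1] on this data
```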
The Wordometer -- Estimating the Number of Words Read Using Document Image Retrieval and Mobile Eye Tracking
- Computer Science · 2013 12th International Conference on Document Analysis and Recognition
- 2013
The Wordometer is introduced, a novel method to estimate the number of words a user reads using a mobile eye tracker and document image retrieval; the authors believe it can serve as a step counter for the information people read, making their "knowledge life" healthier.
Towards inferring language expertise using eye tracking
- Computer Science · CHI Extended Abstracts
- 2013
These efforts detect the English skill level of a user, infer which words are difficult for them to understand, and explain a method to spot such difficult words.
Multimodal recognition of reading activity in transit using body-worn sensors
- Computer Science · TAP
- 2012
This work demonstrates that the joint analysis of eye and body movements is beneficial for reading recognition and opens up discussion on the wider applicability of a multimodal recognition approach to other visual and physical activities.
Reading Activity Recognition Using an Off-the-Shelf EEG -- Detecting Reading Activities and Distinguishing Genres of Documents
- Computer Science · 2013 12th International Conference on Document Analysis and Recognition
- 2013
A new paradigm focusing on recognizing the activities and habits of users while they are reading is introduced, with evidence that reading and non-reading activities can be separated across 3 users and 6 classes, perfectly separating reading from non-reading.
Eye Movement Analysis for Activity Recognition Using Electrooculography
- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2011
The work demonstrates the promise of eye-based activity recognition (EAR) and opens up discussion on the wider applicability of EAR to other activities that are difficult, or even impossible, to detect using common sensing modalities.
Recognition of visual memory recall processes using eye movement analysis
- Computer Science, Psychology · UbiComp '11
- 2011
It is shown that eye movement analysis is a promising approach to infer the cognitive context of a person and the key challenges for the real-world implementation of eye-based cognition-aware systems are discussed.
Gaze-Based Filtering of Relevant Document Segments
- Computer Science
- 2009
A reading and skimming detection method and several gaze-based measures for determining relevant passages while reading documents are described, and an eye tracking user study is reported to examine the performance of the gaze-based measures in identifying relevant read document passages.
Automatic text detection and tracking in digital video
- Computer Science · IEEE Trans. Image Process.
- 2000
This work presents algorithms for detecting and tracking text in digital video, implementing a scale-space feature extractor that feeds an artificial neural processor to detect text blocks.