Roel Vertegaal. Communications of the ACM, pages 30–33. Published 1 March 2003. Computer Science.
If there were a Moore's Law for user interfaces, it would state that the number of computers per user doubles every two years. In the past four decades, we have moved from many users sharing a single mainframe computer through command-line interfaces, to a single user with a personal computer using a graphical user interface (GUI). Today, increasing numbers of users are surrounded by multiple ubiquitous computing devices, such as BlackBerries, PDAs, and cell phones. As our devices connect to…
Exploration of Techniques for Rapid Activation of Glanceable Information in Head-Worn Augmented Reality
This research explored the design of interaction techniques with which users can activate virtual information sources in AR in the context of Glanceable AR, in which virtual information resides at the periphery of the user’s view.
Looking for Info: Evaluation of Gaze Based Information Retrieval in Augmented Reality
An empirical study compared a gaze-adaptive interface to an always-on interface in tasks that vary focus between reality and virtual content; most participants preferred the gaze-adaptive UI and found it less distracting.
Empirical Evaluation of Gaze-enhanced Menus in Virtual Reality
This work investigates how eye-gaze input can exploit attention shifts to enhance interaction with handheld menus, assessing three techniques for menu selection: dwell time, gaze button, and cursor, each representing a different multimodal balance between gaze and manual input.
A View on the Viewer: Gaze-Adaptive Captions for Videos
The results show that viewers with less captioning experience prefer the gaze-adaptive methods, which assist them in reading, and that the gaze distributions these methods produce are closer to natural viewing behavior than those of the traditional approach.
StARe: Gaze-Assisted Face-to-Face Communication in Augmented Reality
This research explores the use of eye tracking during Augmented Reality-supported conversations and proposes using gaze to let users gradually reveal information on demand, contributing to a better understanding of the intricate balance between informative AR and information overload.
Eye, Head and Torso Coordination During Gaze Shifts in Virtual Reality
A study of gaze shifts in virtual reality addresses this gap to inform design, arguing that gaze should be treated as multimodal input and that eye, head, and body movements act synergetically in interaction design.
Multimodal Driver Interaction with Gesture, Gaze and Speech
The proposed approach includes artificial-neural-network techniques for fusing the camera-based modalities (gaze, head, and gesture) and combines features extracted from speech with the fusion algorithm to determine the driver's intent.


Roel Vertegaal is a professor of human-computer interaction and director of the Human Media Lab at Queen's University.