Takumi Toyama

Wearable eye trackers open up a large number of opportunities to cater for the information needs of users in today's dynamic society. Users no longer have to sit in front of a traditional desk-mounted eye tracker to benefit from the direct feedback given by the eye tracker about users' interest. Instead, eye tracking can be used as a ubiquitous interface in …
We present a new system that assists people's reading activity by combining a wearable eye tracker, a see-through head-mounted display, and an image-based document retrieval engine. The image-based document retrieval engine is used to identify the reading document, whereas the eye tracker is used to detect which part of the document the reader is …
In the last few years, the advancement of head-mounted display technology and optics has opened up many new possibilities for the field of Augmented Reality. However, many commercial and prototype systems have a single display modality, a fixed field of view, or an inflexible form factor. In this paper, we introduce Modular Augmented Reality (ModulAR), a …
Recognition of user activities is a key issue for context-aware computing. We present a method for recognition of user daily activities using gaze motion features and image-based visual features. Gaze motion features dominate for inferring the user's egocentric context whereas image-based visual features dominate for recognition of the environments and the …
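The fusion of gaze motion features and image features described above can be sketched as a simple late-fusion classifier. The feature names, values, and nearest-centroid rule below are illustrative assumptions, not the method from the paper:

```python
import numpy as np

# Hypothetical fused feature vectors: [mean fixation duration (s),
# saccade rate (1/s)] concatenated with a 3-dim image descriptor.
# All labels and numbers here are made up for illustration.
train = {
    "reading":  np.array([0.40, 1.0, 0.2, 0.1, 0.7]),
    "watching": np.array([0.25, 3.0, 0.8, 0.5, 0.1]),
}

def classify(sample: np.ndarray) -> str:
    """Assign the activity whose training centroid is nearest in L2 distance."""
    return min(train, key=lambda label: np.linalg.norm(sample - train[label]))

print(classify(np.array([0.38, 1.2, 0.3, 0.2, 0.6])))  # prints "reading"
```

A real system would replace the toy centroids with a classifier trained on recorded gaze and egocentric video, but the fusion idea — concatenating the two feature groups before classification — is the same.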
Efficient text recognition has recently been a challenge for augmented reality systems. In this paper, we propose a system with the ability to provide translations to the user in real-time. We use eye gaze for more intuitive and efficient input for ubiquitous text reading and translation in head mounted displays (HMDs). The eyes can be used to indicate …
This paper describes a new prototypical application that is based on a head-mounted mobile eye tracker in combination with content-based image retrieval technology. The application, named "Museum Guide 2.0", acts as an unobtrusive personal guide for a visitor in a museum. When it detects that the user is watching a specific art object, it will provide …
Indoor navigation in emergency scenarios poses a challenge to evacuation and emergency support, especially for injured or physically encumbered individuals. Navigation systems must be lightweight, easy to use, and provide robust localization and accurate navigation instructions in adverse conditions. To address this challenge, we combine magnetic location …
We present a new augmented reality (AR) system for knowledge-intensive location-based expert work. The multimodal interaction system combines multiple on-body input and output devices: a speech-based dialogue system, a head-mounted augmented reality display (HMD), and a head-mounted eye tracker. The interaction devices have been selected to augment and …
Recognition of scene text using a hand-held camera is emerging as a hot topic of research. In this paper, we investigate the use of a head-mounted eye tracker for scene text recognition. An eye tracker detects the position of the user's gaze. Using the user's gaze information, we can provide the user with more information about their region/object of …
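One minimal way to use the gaze position for scene-text recognition, as described above, is to crop the camera frame around the gaze point before running a recognizer, so only the attended region is processed. The window size and coordinates below are illustrative assumptions:

```python
import numpy as np

def gaze_crop(frame: np.ndarray, gaze_xy: tuple, half: int = 50) -> np.ndarray:
    """Crop a window centered on the gaze point, clipped to the frame bounds.

    `gaze_xy` is (x, y) in pixel coordinates; `half` is half the window
    size in pixels. The returned region would then be passed to a text
    recognizer (OCR) in a full pipeline.
    """
    h, w = frame.shape[:2]
    x, y = gaze_xy
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    return frame[y0:y1, x0:x1]

frame = np.zeros((480, 640), dtype=np.uint8)   # stand-in for a camera frame
roi = gaze_crop(frame, (620, 10))              # gaze near the top-right corner
print(roi.shape)                                # prints (60, 70): clipped at edges
```

Restricting recognition to the gaze-centered region both reduces computation and disambiguates which of several visible text fragments the user actually wants recognized.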