Eduardo Velloso

Particularly in sports or physical rehabilitation, users have to perform body movements in a specific manner for the exercises to be most effective. It remains a challenge for experts to specify how such movements should be performed so that an automated system can analyse further performances of them. In a user study with 10 participants, we show that experts' …
During the last five years, research on Human Activity Recognition (HAR) has reported systems showing good overall recognition performance. As a consequence, HAR has been considered a potential technology for e-health systems. Here, we propose a machine-learning-based HAR classifier. We also provide a full experimental description that contains the HAR …
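The abstract does not spell out the classification pipeline, so the following is only a minimal sketch of a typical HAR classifier: sliding-window statistics over a tri-axial accelerometer stream, fed to a random forest. The window length, feature set, synthetic data, and choice of classifier are illustrative assumptions, not the system described above.

```python
# Minimal HAR sketch: sliding-window features over tri-axial accelerometer
# data, classified with a random forest. Window length, features and the
# classifier are illustrative assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(signal, labels, width=128, step=64):
    """Split an (n_samples, 3) accelerometer stream into overlapping
    windows and compute simple per-axis statistics for each window."""
    X, y = [], []
    for start in range(0, len(signal) - width, step):
        w = signal[start:start + width]
        feats = np.concatenate([w.mean(axis=0), w.std(axis=0),
                                w.min(axis=0), w.max(axis=0)])
        X.append(feats)
        # Label each window with its majority activity label.
        vals, counts = np.unique(labels[start:start + width], return_counts=True)
        y.append(vals[np.argmax(counts)])
    return np.array(X), np.array(y)

# Synthetic stand-in for a recorded session: 3-axis samples + activity ids.
rng = np.random.default_rng(0)
signal = rng.normal(size=(5000, 3))
labels = np.repeat([0, 1, 2, 3, 4], 1000)

X, y = window_features(signal, labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```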
We introduce Orbits, a novel gaze interaction technique that enables hands-free input on smart watches. The technique relies on moving controls to leverage the smooth pursuit movements of the eyes, and detects whether, and at which control, the user is looking. In Orbits, controls include targets that move in a circular trajectory on the watch face, …
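Pursuits-style techniques such as Orbits are commonly implemented by correlating recent gaze samples against each control's known trajectory. The sketch below illustrates that idea for circular orbits; the 30 Hz sampling rate, one-second window, and 0.8 correlation threshold are illustrative assumptions, not Orbits' actual parameters.

```python
# Pursuits-style selection sketch: correlate recent gaze samples against
# each control's known circular trajectory and pick the control whose
# motion the eyes follow best. Window and threshold are assumptions.
import numpy as np

def circular_target(t, cx, cy, radius, period):
    """Position of a control orbiting (cx, cy) at times t (seconds)."""
    phase = 2 * np.pi * t / period
    return np.column_stack([cx + radius * np.cos(phase),
                            cy + radius * np.sin(phase)])

def pursuit_match(gaze, targets, threshold=0.8):
    """Return the index of the target whose trajectory correlates best
    with the gaze trace on both axes, or None if none passes."""
    best, best_score = None, threshold
    for i, traj in enumerate(targets):
        rx = np.corrcoef(gaze[:, 0], traj[:, 0])[0, 1]
        ry = np.corrcoef(gaze[:, 1], traj[:, 1])[0, 1]
        score = min(rx, ry)          # both axes must agree
        if score > best_score:
            best, best_score = i, score
    return best

t = np.linspace(0, 1, 30)            # ~1 s window of 30 Hz gaze samples
targets = [circular_target(t, 0, 0, 60, period=p) for p in (1.0, 2.0)]
gaze = targets[1] + np.random.default_rng(1).normal(scale=5, size=(30, 2))
print(pursuit_match(gaze, targets))  # -> 1
```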
Research on activity recognition has traditionally focused on discriminating between different activities, i.e. predicting which activity was performed at a specific point in time. The quality of executing an activity, the how (well), has so far received little attention, even though it potentially provides useful information for a large variety of …
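One plausible way to operationalise the "how (well)" question, offered here purely as an illustration and not as the paper's method, is to score a recorded repetition by its dynamic time warping (DTW) distance to an expert template:

```python
# Illustrative quality score: DTW distance between a recorded repetition
# and an expert template, where larger distances suggest poorer form.
# The DTW formulation and the synthetic signals are assumptions.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.sin(np.linspace(0, 2 * np.pi, 50))    # expert repetition
good = np.sin(np.linspace(0, 2 * np.pi, 60))        # slower but clean form
poor = np.sin(np.linspace(0, 2 * np.pi, 50)) * 0.4  # shallow range of motion

for name, rep in [("good", good), ("poor", poor)]:
    print(name, round(dtw_distance(template, rep), 2))
```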
In this paper, we present PathSync, a novel, distal and multi-user mid-air gestural technique based on the principle of rhythmic path mimicry: by replicating the movement of a screen-represented pattern with their hand, users can interact with digital objects intuitively, quickly, and with a high level of accuracy. We present three studies that each …
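A hedged sketch of how rhythmic path mimicry might be detected: keep a rolling buffer of hand positions, correlate it against the pattern's recent positions on both axes, and trigger once the correlation stays high for a sustained run. The buffer size, threshold, and dwell count below are assumptions, not PathSync's published parameters.

```python
# Path-mimicry detection sketch: rolling correlation between hand samples
# and the on-screen pattern, with a sustained-match requirement before a
# selection fires. Window, threshold and dwell are assumed values.
from collections import deque
import numpy as np

class PathSyncDetector:
    def __init__(self, window=45, threshold=0.85, dwell=10):
        self.hand = deque(maxlen=window)
        self.pattern = deque(maxlen=window)
        self.threshold, self.dwell = threshold, dwell
        self.hits = 0

    def update(self, hand_xy, pattern_xy):
        """Feed one synchronized sample pair; return True when the hand
        has mimicked the pattern long enough to count as a selection."""
        self.hand.append(hand_xy)
        self.pattern.append(pattern_xy)
        if len(self.hand) < self.hand.maxlen:
            return False
        h, p = np.array(self.hand), np.array(self.pattern)
        r = min(np.corrcoef(h[:, 0], p[:, 0])[0, 1],
                np.corrcoef(h[:, 1], p[:, 1])[0, 1])
        self.hits = self.hits + 1 if r > self.threshold else 0
        return self.hits >= self.dwell

# A user tracing the pattern with a noisy hand triggers a selection.
det = PathSyncDetector()
t = np.linspace(0, 4 * np.pi, 120)
rng = np.random.default_rng(2)
for x, y in zip(np.cos(t), np.sin(t)):
    if det.update((x + rng.normal(scale=0.1), y + rng.normal(scale=0.1)),
                  (x, y)):
        print("pattern matched -> trigger command")
        break
```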
We characterise foot movements as input for seated users. First, we built unconstrained foot pointing performance models in a seated desktop setting using ISO 9241-9-compliant Fitts's Law tasks. Second, we evaluated the effect of the foot and of movement direction in one-dimensional tasks, finding no effect of the foot used but a significant effect of the direction in …
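For the pointing models mentioned above, the standard Shannon formulation of Fitts's Law is MT = a + b · log2(D/W + 1). The sketch below fits a and b by least squares; the (D, W, MT) values are synthetic placeholders, not data from the study.

```python
# Fitts's Law sketch: compute the Shannon index of difficulty
# ID = log2(D/W + 1) and fit MT = a + b * ID by least squares.
# The sample values are synthetic placeholders.
import numpy as np

D  = np.array([100, 200, 400, 100, 200, 400], dtype=float)  # distance (px)
W  = np.array([ 20,  20,  20,  40,  40,  40], dtype=float)  # width (px)
MT = np.array([0.62, 0.78, 0.95, 0.48, 0.63, 0.80])         # time (s)

ID = np.log2(D / W + 1)       # index of difficulty (bits)
b, a = np.polyfit(ID, MT, 1)  # slope, intercept
print(f"MT = {a:.3f} + {b:.3f} * ID; throughput ~ {np.mean(ID / MT):.2f} bits/s")
```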
Experiencing Virtual Reality in domestic and other uncontrolled settings is challenging due to the presence of physical objects and furniture that are not usually defined in the Virtual Environment. To address this challenge, we explore the concept of Substitutional Reality in the context of Virtual Reality: a class of Virtual Environments where every …
In this paper we present exploratory work on the use of foot movements to support fundamental 3D interaction tasks. Depth cameras such as the Microsoft Kinect are now able to track users' motion unobtrusively, making it possible to draw on the spatial context of gestures and movements to control 3D UIs. Whereas multitouch and mid-air hand gestures have …
We describe the design and implementation of Arcade+, a custom-built arcade machine augmented with modern input devices to deploy and evaluate novel multimodal interaction techniques for gaming. Arcade+ has gaze, feet, mid-air gesture, and multitouch sensing capabilities, as well as traditional arcade controls. Our modular design also allows for …
Eye tracking offers many opportunities for direct device control in smart environments, but issues such as the need for calibration and the Midas touch problem make it impractical. In this paper, we propose AmbiGaze, a smart environment that employs the animation of targets to provide users with direct control of devices by gaze alone, through smooth pursuit …