EASE (Estimation and Assessment of Substance Exposure) is a general model that may be used to predict workplace exposure to a wide range of substances hazardous to health. First developed in the early 1990s, it is now in its second Windows version. This paper provides a critical assessment of the utility and performance of the EASE model, and on the basis …
Humans easily recognize where another person is looking and often use this information for interspeaker coordination. We present a method based on three neural networks of the local linear map type which enables a computer to identify the head orientation of a user by learning from examples. One network is used for color segmentation, a second for …
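As a rough illustration of the local linear map idea behind those networks, the following NumPy sketch implements a minimal winner-take-all LLM regressor. The node count, learning rates and the toy pan-angle mapping are assumptions for illustration, not the paper's actual configuration.

```python
# Minimal local linear map (LLM) sketch: reference vectors in input space,
# output anchors, and per-node linear maps; prediction uses the winning node.
import numpy as np

class LocalLinearMap:
    def __init__(self, n_nodes, dim_in, dim_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(size=(n_nodes, dim_in))   # reference vectors
        self.w_out = np.zeros((n_nodes, dim_out))        # output anchors
        self.A = np.zeros((n_nodes, dim_out, dim_in))    # local linear maps

    def _winner(self, x):
        # index of the reference vector closest to the input
        return int(np.argmin(np.linalg.norm(self.w_in - x, axis=1)))

    def predict(self, x):
        k = self._winner(x)
        return self.w_out[k] + self.A[k] @ (x - self.w_in[k])

    def train_step(self, x, y, eps_in=0.05, eps_out=0.1, eps_A=0.1):
        k = self._winner(x)
        d = x - self.w_in[k]
        err = y - self.predict(x)
        self.w_in[k] += eps_in * d                        # move winner toward input
        self.w_out[k] += eps_out * err                    # reduce output error
        self.A[k] += eps_A * np.outer(err, d) / (d @ d + 1e-9)  # adapt local map

# Toy usage: map 2-D feature vectors to a 1-D "pan angle".
llm = LocalLinearMap(n_nodes=10, dim_in=2, dim_out=1)
for _ in range(2000):
    x = np.random.uniform(-1, 1, size=2)
    llm.train_step(x, np.array([0.5 * x[0] - 0.2 * x[1]]))
print(llm.predict(np.array([0.3, -0.4])))
```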
We present a vision system for human-machine interaction based on a small wearable camera mounted on glasses. The camera views the area in front of the user, especially the hands. To evaluate hand movements for pointing gestures and to recognise object references, an approach to integrating bottom-up generated feature maps and top-down propagated …
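One simple way such an integration could be realized is a weighted combination of normalized maps whose peak gives the referenced image location. The sketch below assumes this weighted-sum fusion, with hypothetical "skin" and "motion" bottom-up maps and a Gaussian top-down prior purely for illustration; it is not the system's actual fusion scheme.

```python
# Fuse bottom-up feature maps with a top-down expectation map by a weighted
# sum of normalized maps; the peak of the fused map is the attended location.
import numpy as np

def fuse_maps(bottom_up_maps, top_down_map, weights=None):
    maps = list(bottom_up_maps) + [top_down_map]
    if weights is None:
        weights = np.ones(len(maps)) / len(maps)
    fused = np.zeros_like(maps[0], dtype=float)
    for w, m in zip(weights, maps):
        m = m.astype(float)
        rng = m.max() - m.min()
        if rng > 0:                        # normalize each map to [0, 1]
            m = (m - m.min()) / rng
        fused += w * m
    peak = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, peak

# Toy usage: random stand-in maps plus a Gaussian prior around an expected target.
h, w = 48, 64
skin, motion = np.random.rand(h, w), np.random.rand(h, w)
yy, xx = np.mgrid[0:h, 0:w]
prior = np.exp(-(((yy - 20) ** 2 + (xx - 40) ** 2) / (2 * 8.0 ** 2)))
fused, peak = fuse_maps([skin, motion], prior, weights=[0.3, 0.3, 0.4])
print("attended location:", peak)
```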
A major goal for the realization of a new generation of intelligent robots is the capability of instructing work tasks by interactive demonstration. Making such a process efficient and convenient for the human user requires that both the robot and the user can establish and maintain a common focus of attention. We describe a hybrid architecture that …
The human eyes are always in action; they explore the environment every second we are awake. But what attracts our visual attention? In this paper we examine the eye movements of human subjects observing a breakfast scenario using an eye-tracking system. We also develop a hierarchical model consisting of three modules. This model is applied to the …
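As a loosely related illustration (not the three-module model itself, whose components are not detailed here), the sketch below shows how raw gaze samples from an eye tracker might be turned into a fixation density map against which such a model could be compared. The image size, sample counts and smoothing width are assumptions.

```python
# Accumulate eye-tracker gaze samples into a 2-D histogram and blur it with a
# Gaussian to approximate the spatial distribution of overt attention.
import numpy as np

def gaze_density_map(gaze_xy, image_shape, sigma=15.0):
    h, w = image_shape
    density = np.zeros((h, w), dtype=float)
    for x, y in gaze_xy:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:
            density[yi, xi] += 1.0
    # separable Gaussian blur with a truncated 1-D kernel
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    density = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, density)
    density = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, density)
    if density.max() > 0:
        density /= density.max()
    return density

# Toy usage: 500 gaze samples clustered around two objects in a 480x640 scene.
rng = np.random.default_rng(0)
samples = np.vstack([rng.normal([200, 150], 20, size=(250, 2)),
                     rng.normal([450, 300], 20, size=(250, 2))])
dmap = gaze_density_map(samples, (480, 640))
print(dmap.shape, dmap.max())
```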
Many human-machine interfaces based on face gestures are strongly user-dependent. We want to overcome this limitation by using common facial features like the eyes, nose and mouth for gaze recognition. In a first step, an adaptive color histogram segmentation method roughly determines the region of interest including the user's face. Within this region we then use …
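A common way to realize this kind of color-based face localization is histogram backprojection in a brightness-discounting color space. The sketch below assumes normalized r-g chromaticity, a fixed bin count and a simple threshold, which may differ from the adaptive method the abstract refers to.

```python
# Color histogram backprojection: score every pixel by how well its color
# matches a skin-color histogram, then threshold to get a candidate face mask.
import numpy as np

def rg_chromaticity(img_rgb):
    # normalized r-g chromaticity discounts overall brightness
    rgb = img_rgb.astype(float) + 1e-6
    return rgb[..., :2] / rgb.sum(axis=2, keepdims=True)

def skin_histogram(skin_pixels_rg, bins=32):
    # normalized 2-D histogram built from (r, g) samples of known skin pixels
    hist, _, _ = np.histogram2d(skin_pixels_rg[:, 0], skin_pixels_rg[:, 1],
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist / max(hist.max(), 1.0)

def backproject(img_rgb, hist, bins=32, threshold=0.1):
    rg = rg_chromaticity(img_rgb)
    idx = np.clip((rg * bins).astype(int), 0, bins - 1)
    scores = hist[idx[..., 0], idx[..., 1]]
    return scores > threshold

# Toy usage with random data standing in for a camera frame and skin samples.
frame = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
skin_samples = np.clip(np.random.normal([0.45, 0.32], 0.03, size=(500, 2)), 0, 1)
mask = backproject(frame, skin_histogram(skin_samples))
print("candidate face pixels:", int(mask.sum()))
```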