Laurent Bonnaud

In this paper we focus on the software design of a multimodal driving simulator that is based on both multimodal detection of the driver’s focus of attention and detection and prediction of the driver’s fatigue state. Capturing and interpreting the driver’s focus of attention and fatigue state is based on video data (e.g., facial expression, head movement, eye …
Surveillance systems depend greatly on the robustness and availability of the video streams. The cameras must deliver reliable streams from an angle corresponding to the correct viewpoint. In other words, the field of view and video quality must remain unchanged after the initial installation of a surveillance camera. The paper proposes an approach to …
This paper presents results from various classifiers in a system that can automatically recognize four different static human body postures in video sequences. The considered postures are standing, sitting, squatting, and lying. The three classifiers considered are a naïve one and two based on belief theory. The belief-theory-based classifiers use either …
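The snippet names belief theory but not the classifiers’ internals. As a purely illustrative sketch (the mass values, the two evidence sources, and their focal sets below are hypothetical, not taken from the paper), the standard fusion step such classifiers build on is Dempster’s rule of combination over the four postures:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict; masses cannot be combined")
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

# Hypothetical evidence from two silhouette features over the four postures.
S, SI, SQ, L = "standing", "sitting", "squatting", "lying"
frame = frozenset({S, SI, SQ, L})
m_height = {frozenset({S}): 0.6, frozenset({SI, SQ}): 0.3, frame: 0.1}
m_width = {frozenset({S, SI}): 0.5, frozenset({SQ}): 0.2, frame: 0.3}
fused = dempster_combine(m_height, m_width)
best = max(fused, key=fused.get)  # hypothesis set with the highest fused mass
```

The normalization by `1 - conflict` redistributes the mass lost to conflicting evidence, which is what distinguishes this fusion from a naïve product of scores.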
This paper deals with the problem of the automated classification of cued speech gestures. Cued speech is a specific gesture language (distinct from sign language) used for communication between deaf people and others. It uses only 8 different hand configurations. The aim of this work is to apply a simple classifier to 3 image data sets, in …
This paper presents a driver simulator, which takes into account information about the user’s state of mind (level of attention, fatigue state, stress state). The analysis of the user’s state of mind is based on video data and biological signals. Facial movements such as eye blinking, yawning, and head rotations are detected in the video data: they are used in …
This paper introduces a video object segmentation algorithm developed in the context of the European project Art.live, where constraints on segmentation quality and processing rate (at least 10 images/second) are imposed. In order to obtain a fine segmentation (no blocking effect, boundary precision, temporal stability without flickering), the …
The problem of multiple people detection in monocular video streams is addressed. The proposed method involves a human model based on skin color and foreground information. Robustness to local motion of the background and to global color changes is achieved by modeling images as fields of color distributions, and robustly estimating temporal background global …
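The abstract does not specify how the skin-color component is modeled. As a minimal sketch of the general idea, assuming a single Gaussian model in normalized rg chromaticity space (the mean, covariance, and threshold below are illustrative placeholders, not the paper’s values), per-pixel skin likelihood can be thresholded on Mahalanobis distance:

```python
import numpy as np

def skin_mask_rg(image, mean=(0.45, 0.31), cov_inv=None, thresh=4.0):
    """Rough skin detector: Mahalanobis distance in normalized rg space.

    `mean`, the covariance, and `thresh` are hypothetical placeholders.
    `image` is an H x W x 3 RGB array.
    """
    img = image.astype(np.float64)
    s = img.sum(axis=2) + 1e-6          # avoid division by zero on black pixels
    r = img[..., 0] / s                 # normalized red chromaticity
    g = img[..., 1] / s                 # normalized green chromaticity
    if cov_inv is None:
        cov_inv = np.linalg.inv(np.array([[0.002, 0.0], [0.0, 0.001]]))
    d = np.stack([r - mean[0], g - mean[1]], axis=-1)
    # Squared Mahalanobis distance per pixel
    m2 = np.einsum("...i,ij,...j->...", d, cov_inv, d)
    return m2 < thresh
```

Normalized chromaticity discards overall intensity, which gives some of the robustness to global color changes that the abstract mentions.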
This paper presents a new temporal interpolation algorithm based on segmentation of images into polygonal regions undergoing affine motion. The goal of this work is to improve upon the block-based interpolation used in MPEG (B-frames). In the first part, we briefly describe the region-based framework and the temporal linking algorithm that jointly provide the …
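The core idea of region-based temporal interpolation, as opposed to MPEG’s uniform block translation, is that each region’s pixels move along a per-region affine motion. As a sketch under the assumption that the affine parameters (A, t) of a region are already estimated (the function name and interface are hypothetical), an intermediate-frame position is a fractional step along that motion:

```python
import numpy as np

def affine_midpoint(points, A, t, alpha=0.5):
    """Displace region points a fraction `alpha` along an affine motion.

    points : N x 2 array of pixel coordinates in the current frame
    A, t   : 2x2 matrix and 2-vector of the region's estimated affine motion
    alpha  : temporal position of the interpolated frame in [0, 1]
    """
    full = points @ A.T + t                     # position in the next frame
    return (1.0 - alpha) * points + alpha * full  # linear blend toward it
```

A pure translation is the special case A = I, in which every point of the region shifts by `alpha * t`; affine A additionally captures rotation, zoom, and shear within the region.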