In this paper, we introduce a novel laser-based wide-area heads-up windshield display which is capable of actively interfacing with a human as part of a driver assistance system. The dynamic active display (DAD) is a unique prototype interface that presents safety-critical visual icons to the driver in a manner that minimizes the deviation of his or her …
We present a novel method for learning and tracking the pose of an articulated body by observing only its volumetric reconstruction. We propose a probabilistic technique that utilizes a multi-component Gaussian mixture model to describe the spatial distribution of voxels in a voxel image. Each component describes a segment or rigid body, and the collection …
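The abstract does not give implementation details, but the core idea, one Gaussian component per body segment fit to the occupied-voxel positions, can be sketched as below; the synthetic voxel data, the choice of two components, and the use of scikit-learn are assumptions for illustration only.

```python
# Minimal sketch (not the paper's implementation): model a voxel cloud with
# a multi-component Gaussian mixture, one full-covariance component per
# assumed rigid body segment.
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in voxel reconstruction: an N x 3 array of occupied-voxel centers.
rng = np.random.default_rng(0)
torso = rng.normal(loc=[0.0, 0.0, 1.0], scale=[0.15, 0.10, 0.30], size=(500, 3))
arm = rng.normal(loc=[0.35, 0.0, 1.2], scale=[0.20, 0.05, 0.05], size=(200, 3))
voxels = np.vstack([torso, arm])

# Fit the mixture; each component's mean approximates a segment centroid,
# and its covariance encodes the segment's orientation and extent.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(voxels)

for k in range(gmm.n_components):
    print(f"segment {k}: mean={gmm.means_[k].round(2)}, "
          f"{np.sum(labels == k)} voxels")
```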
Human-centric, pervasive computing environments, with integrated sensing, processing, networking, and displays, provide an appropriate framework for developing effective driver-assistance systems. Also essential when developing such systems are systematic efforts to understand and characterize driver behavior. In an attempt to make such a predictive …
Dynamic analysis of vehicle occupant posture is a key requirement in designing "smart airbag" systems. Vision-based technology could enable the use of precise information about the occupant's size, posture, and, in particular, position in making airbag deployment decisions. Novel sensory systems and algorithms need to be developed for capture, analysis, and …
A multidisciplinary research effort at UCSD focuses on the design, development, and evaluation of novel computational frameworks for vehicle-based safety systems. The dynamic active display presents visual alerts to the driver based on the surrounding environment, vehicle dynamics, and the driver's state, as well as a driver-intent analysis and situational …
This paper presents a novel approach to recognizing driver activities using a multi-perspective (i.e., four camera views), multi-modal (i.e., thermal infrared and color) video-based system for robust and real-time tracking of important body parts. The multi-perspective characteristics of the system provide redundant trajectories of the body parts, while the …
We present a novel real-time computer-vision system that robustly discriminates which of the front-row seat occupants is accessing the infotainment controls. Knowing who the user is (driver, passenger, or no one) can alleviate driver distraction and maximize the passenger infotainment experience. The system captures visible and near-infrared …
This paper presents a multi-perspective (i.e., four camera views), multi-modal (i.e., thermal infrared and color) video-based system for robust and real-time 3D tracking of important body parts. The multi-perspective characteristics of the system provide 3D trajectories of the body parts, while the multi-modal characteristics of the system provide robustness …
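As a rough illustration of how multiple calibrated views yield a 3D trajectory, the sketch below triangulates one tracked body part from two of the views; the projection matrices and 2D tracks are placeholders, and the paper's actual fusion of four views and two modalities is not reproduced here.

```python
# Minimal two-view linear triangulation of one tracked body part over a
# few frames, using OpenCV. Calibration and 2D tracks are placeholders.
import numpy as np
import cv2

# Assumed 3x4 projection matrices for two of the views (K @ [R | t]).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float32)
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])]).astype(np.float32)

# Per-frame 2D positions of the tracked part in each view (2 x T arrays).
pts1 = np.array([[320.0, 322.0, 325.0], [240.0, 238.0, 236.0]], dtype=np.float32)
pts2 = np.array([[300.0, 303.0, 307.0], [240.0, 239.0, 237.0]], dtype=np.float32)

# Linear triangulation returns homogeneous 4 x T points; normalize to 3D.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
trajectory = (X_h[:3] / X_h[3]).T   # T x 3 trajectory in world coordinates
print(trajectory)
```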
An important goal in automotive user interface research is to predict a user's reactions and behaviors in a driving environment. The behavior of both drivers and passengers can be studied by analyzing eye gaze, head, hand, and foot movements, upper-body posture, etc. In this paper, we focus on estimating head pose, which has been shown to be a good predictor …
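The snippet below shows one common single-camera way to obtain head pose, fitting a generic 3D facial-landmark model with a PnP solver; it is a stand-in for illustration rather than the method used in this work, and the landmark positions and camera intrinsics are invented placeholders.

```python
# Illustrative head-pose estimation via cv2.solvePnP on generic landmarks.
import numpy as np
import cv2

# Generic 3D facial-landmark model (mm): nose tip, chin, left/right eye
# outer corners, left/right mouth corners.
model_points = np.array([
    [0.0, 0.0, 0.0], [0.0, -330.0, -65.0],
    [-225.0, 170.0, -135.0], [225.0, 170.0, -135.0],
    [-150.0, -150.0, -125.0], [150.0, -150.0, -125.0]])

# Corresponding 2D landmarks from a face detector (placeholder values).
image_points = np.array([
    [359.0, 391.0], [399.0, 561.0], [337.0, 297.0],
    [513.0, 301.0], [345.0, 465.0], [453.0, 469.0]])

# Assumed pinhole intrinsics for a 640x480 frame, no lens distortion.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None)

# Rotation vector -> matrix -> approximate yaw/pitch (ZYX convention; the
# exact angle labeling depends on the camera coordinate frame).
R, _ = cv2.Rodrigues(rvec)
yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
pitch = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))
print(f"yaw={yaw:.1f} deg, pitch={pitch:.1f} deg")
```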
With a panoramic view of the scene, a single omnidirectional camera can monitor the 360-degree surround of the vehicle or monitor the interior and exterior of the vehicle at the same time. We investigate problems associated with integrating driver assistance functionalities that have been designed for rectilinear cameras with a single omnidirectional camera …
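One way to reuse algorithms designed for rectilinear cameras is to render virtual perspective views from the omnidirectional image. The sketch below does this under the simplifying assumption of an equirectangular panorama; a real catadioptric omnidirectional camera would need its own unwarping model, and the field of view, yaw angle, and image sizes are illustrative.

```python
# Render a virtual rectilinear (pinhole) view from a panoramic image so
# that perspective-camera algorithms can be applied to it.
import numpy as np
import cv2

def perspective_from_panorama(pano, fov_deg=60.0, yaw_deg=0.0, size=480):
    h_p, w_p = pano.shape[:2]
    f = (size / 2.0) / np.tan(np.radians(fov_deg) / 2.0)

    # Rays through each pixel of the virtual pinhole camera.
    u, v = np.meshgrid(np.arange(size) - size / 2.0,
                       np.arange(size) - size / 2.0)
    x, y, z = u, v, np.full_like(u, f)

    # Rotate the viewing direction by the requested yaw (vertical axis).
    yaw = np.radians(yaw_deg)
    xr = x * np.cos(yaw) + z * np.sin(yaw)
    zr = -x * np.sin(yaw) + z * np.cos(yaw)

    # Convert rays to longitude/latitude, then to panorama pixel coords.
    lon = np.arctan2(xr, zr)
    lat = np.arctan2(y, np.hypot(xr, zr))
    map_x = ((lon / np.pi + 1.0) / 2.0 * w_p).astype(np.float32)
    map_y = ((lat / (np.pi / 2.0) + 1.0) / 2.0 * h_p).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)

# Example: a 60-degree virtual view looking 90 degrees to the right.
pano = np.zeros((512, 1024, 3), dtype=np.uint8)   # placeholder panorama
view = perspective_from_panorama(pano, fov_deg=60.0, yaw_deg=90.0)
```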