Spatial Intention Recognition Using Optimal Margin Classifiers

Introduction

The high costs of human spaceflight operations favor large investments in optimizing astronauts' use of time during extravehicular activity. These have included extensive expenditures in training, tool development, and spacecraft design for serviceability. However, astronauts' space suits themselves still hinder more than they help, a problem that is the focus of several current research programs. Potential improvements must contend with the tight integration between suits and astronaut activities, which poses many mechanical and computational challenges. One major area of work aims to alleviate the difficulty of conducting precise or prolonged movements within a pressurized garment. Powered prosthetic assistance may provide a solution to this problem, but it introduces operational challenges of its own.

Standard digital or verbal user command interfaces may prove incompatible with such devices, since both are limited by low bandwidth and nonintuitive control structures. Tactile control using, for example, hand or finger gestures seems far more suitable for directing mechanical effectors, providing high speed and intuitive spatial relationships between command signals and desired actions. Flexibility and robustness in such controllers will likely require personalized command recognition tailored to individual astronauts. The need for speed and natural facility will make this capability even more indispensable than it is in, say, speech recognition. Command recognition systems should therefore adjust their interpretation rules dynamically as training data accumulates, improving their precision and tracking long-term trends as astronauts develop their working behaviors over a career's worth of extravehicular activity.

In this project, we propose a relatively simple gesture-based spatial command recognition system as an analog to more advanced systems suitable for augmenting extravehicular activities with robotic assistance. We aim initially at discrete pattern recognition, with a possible extension to continuous parameter spaces that may ultimately find favor in many spatial applications. Specifically, we propose a software agent that identifies spatially motivated commands, drawn from a finite set and indicated by short two-dimensional gestures, within the continuous movement stream of a pointing device such as a computer mouse. The agent constructs optimized interpretation rules from training data collected from a single human user over a period of time, and adjusts those rules dynamically during further use. The system may be extended to allow command spaces parameterized by continuous variables. It may also allow users to refine the agent's interpretations post facto by providing optional explicit clarification after initial training is completed.
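
As a concrete illustration of the discrete recognition step, the sketch below (in Python, using numpy and scikit-learn) resamples a raw mouse trajectory into a fixed-length, translation- and scale-normalized feature vector and trains a linear optimal margin classifier over those features. The feature scheme, the resampling length N_POINTS, and the helper name resample_gesture are illustrative assumptions, not details taken from the paper.

import numpy as np
from sklearn.svm import SVC

N_POINTS = 32  # fixed resampling length (an assumed design choice)

def resample_gesture(traj):
    # Normalize a (T, 2) mouse trajectory: resample to N_POINTS evenly
    # spaced samples along the path, then remove translation and scale.
    traj = np.asarray(traj, dtype=float)
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s /= max(s[-1], 1e-9)                       # arc-length position in [0, 1]
    t = np.linspace(0.0, 1.0, N_POINTS)
    pts = np.column_stack([np.interp(t, s, traj[:, 0]),
                           np.interp(t, s, traj[:, 1])])
    pts -= pts.mean(axis=0)                     # translation invariance
    pts /= max(np.abs(pts).max(), 1e-9)         # scale invariance
    return pts.ravel()                          # feature vector, length 2 * N_POINTS

def train_classifier(gestures, labels):
    # gestures: list of (T_i, 2) trajectories; labels: one command per gesture
    X = np.stack([resample_gesture(g) for g in gestures])
    clf = SVC(kernel="linear", C=1.0)           # maximal-margin separator
    clf.fit(X, labels)
    return clf

A new gesture captured during use would then be interpreted with clf.predict([resample_gesture(new_traj)]), yielding the label of the best-matching command.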

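The dynamic-adjustment idea could be prototyped along the following lines: each confirmed (gesture, command) pair is folded into the model as it arrives. The paper leaves the update mechanism open; the sketch below substitutes an incrementally trained hinge-loss linear classifier fit by stochastic gradient descent, which approximates the linear optimal margin objective while permitting online updates. The AdaptiveRecognizer class and its methods are hypothetical, and it reuses the resample_gesture helper from the previous sketch.

from sklearn.linear_model import SGDClassifier

class AdaptiveRecognizer:
    # Hypothetical online recognizer: hinge loss under SGD approximates a
    # linear maximal-margin classifier while allowing incremental updates.

    def __init__(self, commands):
        self.commands = list(commands)          # full set of command labels
        self.clf = SGDClassifier(loss="hinge", alpha=1e-4)
        self._initialized = False

    def update(self, traj, command):
        # Fold one confirmed (gesture, command) example into the model.
        x = resample_gesture(traj).reshape(1, -1)
        if not self._initialized:
            # All possible labels must be declared on the first update.
            self.clf.partial_fit(x, [command], classes=self.commands)
            self._initialized = True
        else:
            self.clf.partial_fit(x, [command])

    def predict(self, traj):
        # Interpret an incoming gesture as one of the known commands.
        return self.clf.predict(resample_gesture(traj).reshape(1, -1))[0]
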
Cite this paper

@inproceedings{Coffee2005SpatialIR,
  title  = {Spatial Intention Recognition Using Optimal Margin Classifiers},
  author = {Thomas Coffee and Shuonan Dong and Shen Qu},
  year   = {2005}
}