• Publications
View-based and modular eigenspaces for face recognition
TLDR
A modular eigenspace description technique incorporates salient features such as the eyes, nose, and mouth in an eigenfeature layer, yielding higher recognition rates and a more robust framework for face recognition.
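The eigenspace idea summarized above can be sketched in a few lines: project image patches onto a low-dimensional basis of principal components and score candidates by reconstruction error. This is a minimal illustrative sketch, not the paper's implementation; the grid of function names, the SVD-based PCA, and the `k=4` default are all assumptions for illustration.

```python
import numpy as np

def eigenspace(patches, k=4):
    """Build a k-dimensional eigenspace from flattened image patches (one patch per row).

    For the modular variant, this would be called once per facial feature
    (eyes, nose, mouth) on crops of that feature.
    """
    mean = patches.mean(axis=0)
    # SVD of the centered data yields the principal components
    # without explicitly forming the covariance matrix.
    _, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
    return mean, vt[:k]

def distance(patch, mean, basis):
    """Reconstruction error of `patch` in the eigenspace.

    A small error means the patch is well explained by the
    learned feature/face subspace.
    """
    centered = patch - mean
    coeffs = basis @ centered
    return np.linalg.norm(centered - basis.T @ coeffs)
```

In use, one eigenspace per feature is trained offline and a probe image is scored feature-by-feature, with the per-feature distances combined into an overall match score.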
Using GPS to learn significant locations and predict movement across multiple users
TLDR
This work presents a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales and incorporates these locations into a Markov model that can be consulted for use with a variety of applications in both single-user and collaborative scenarios.
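The pipeline described in this TLDR (cluster raw GPS fixes into significant locations, then feed the location sequence into a Markov model for prediction) can be sketched as follows. This is a simplified stand-in, not the paper's method: grid-snapping replaces the paper's multi-scale clustering, and all function names and the `cell` size are assumptions for illustration.

```python
from collections import defaultdict

def cluster_points(points, cell=0.01):
    """Snap (lat, lon) GPS fixes to a coarse grid; each occupied cell
    stands in for one 'significant location'."""
    return [(round(lat / cell), round(lon / cell)) for lat, lon in points]

def transition_model(location_seq):
    """First-order Markov model: P(next location | current location),
    estimated from observed transitions between distinct locations."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(location_seq, location_seq[1:]):
        if a != b:  # ignore self-loops while the user stays put
            counts[a][b] += 1
    return {loc: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for loc, nxts in counts.items()}

def predict(model, current):
    """Most likely next location, or None if `current` was never left."""
    nxts = model.get(current)
    return max(nxts, key=nxts.get) if nxts else None
```

Higher-order models or per-time-of-day transition tables drop in naturally by changing the key used in `transition_model`.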
Energy scavenging for mobile and wireless electronics
TLDR
This article presents a whirlwind survey through energy harvesting, spanning historic and current developments, as low-power electronics, wireless standards, and miniaturization conspire to populate the world with sensor networks and mobile devices.
Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video
We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first observes the signer from a desk-mounted camera; the second uses a camera mounted on the user's wearable computer.
The Aware Home: A Living Laboratory for Ubiquitous Computing Research
TLDR
The Aware Home project is introduced and some of the technology- and human-centered research objectives in creating the Aware Home are outlined, to create a living laboratory for research in ubiquitous computing for everyday activities.
Real-time American Sign Language recognition from video using hidden Markov models
TLDR
A real-time HMM-based system for recognizing sentence level American Sign Language (ASL) which attains a word accuracy of 99.2% without explicitly modeling the fingers.
Visual Recognition of American Sign Language Using Hidden Markov Models.
TLDR
Using hidden Markov models (HMMs), an unobtrusive single-view camera system is developed that can recognize hand gestures, namely a subset of American Sign Language (ASL), achieving high recognition rates for full-sentence ASL using only visual cues.
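At the core of the HMM-based recognizers summarized in the entries above is Viterbi decoding: finding the most likely hidden-state sequence for an observed feature stream. The sketch below is a textbook discrete-observation Viterbi, not the papers' continuous-density implementation; the function name and dictionary-based parameterization are assumptions for illustration.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence.

    start_p[s]     : P(first state is s)
    trans_p[p][s]  : P(s follows p)
    emit_p[s][o]   : P(observing o in state s)
    (Log-probabilities would be used in practice to avoid underflow.)
    """
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # Trace the best path backwards from the most likely final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

In a sign-language recognizer, the observations would be per-frame hand-shape/position features and each sign would be modeled by its own small HMM, with word-level decoding chaining the sign models under a grammar.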
Human-Powered Wearable Computing
  • T. Starner
  • Computer Science
    IBM Syst. J.
  • 1 September 1996
TLDR
This paper explores the possibility of harnessing the energy expended during the user's everyday actions to generate power for his or her computer, thus eliminating the impediment of batteries.
The Gesture Pendant: A Self-illuminating, Wearable, Infrared Computer Vision System for Home Automation Control and Medical Monitoring
TLDR
A wearable device for controlling home automation systems via hand gestures, usable by those with loss of vision, motor skills, or mobility; combining other sources of context with the pendant can reduce the number and complexity of gestures while maintaining functionality.
Learning Significant Locations and Predicting User Movement with GPS
TLDR
This work presents a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales and incorporates these locations into a Markov model that can be consulted for use with a variety of applications in both single-user and collaborative scenarios.
...