We present an activity-recognition system for assisted living applications and smart homes. While existing systems tend to rely on expensive computation of comparatively large-dimension data sets, ours leverages information from a small number of fundamentally different sensor …
The ability to localize and identify multiple people is paramount to the inference of high-level activities for informed decision-making. In this paper, we describe the PEM-ID system, which uniquely identifies people tagged with accelerometer nodes in the video output of preinstalled infrastructure cameras. For this, we introduce a new distance measure …
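The snippet breaks off before the distance measure is defined. As a purely hypothetical illustration of how an accelerometer trace might be scored against a camera-derived motion trace, the sketch below compares acceleration magnitudes via normalized correlation; the function name, signal choices, and metric are my assumptions, not the PEM-ID definition.

```python
import numpy as np

def motion_distance(accel_mag, track_xy, dt):
    """Hypothetical distance between a wearable's acceleration-magnitude trace
    and a camera-derived position track (T x 2), both assumed resampled to the
    same rate and time-aligned. Smaller values mean better agreement."""
    # Differentiate the track twice to approximate its acceleration.
    track_accel = np.gradient(np.gradient(track_xy, dt, axis=0), dt, axis=0)
    track_mag = np.linalg.norm(track_accel, axis=1)

    # Z-score both signals so scale and offset differences don't dominate.
    a = (accel_mag - accel_mag.mean()) / (accel_mag.std() + 1e-9)
    b = (track_mag - track_mag.mean()) / (track_mag.std() + 1e-9)

    # One minus normalized correlation: 0 for identical motion, up to 2.
    return 1.0 - float(np.dot(a, b) / len(a))
```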
  • Gershon Dublon, Laurel S Pardue, Brian Mayton, Noah Swartz, Nicholas Joliat, Patrick Hurst +1 other
  • 2011
We present DoppelLab, an immersive sensor data browser built on a 3-d game engine. DoppelLab unifies independent sensor networks and data sources within the spatial framework of a building. Animated visualizations and sonifications serve as representations of real-time data within the virtual space.
We propose a system to identify people in a sensor network. The system fuses motion information measured from wearable accelerometer nodes with motion traces of each person detected by a camera node. This allows people to be uniquely identified with the IDs of the accelerometer nodes that they wear, while their positions are measured using the cameras. The …
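The abstract does not say how camera tracks are matched to accelerometer IDs; one common way to turn pairwise motion distances (such as the sketch above) into a consistent labeling is a global optimal assignment. The use of SciPy's `linear_sum_assignment` below is an assumption for illustration, not necessarily the paper's method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_ids(cost):
    """cost[i, j]: motion distance between camera track i and accelerometer
    node j. Returns {track_index: node_index} minimizing the total distance."""
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows.tolist(), cols.tolist()))

# Example: three tracked people, three wearable nodes.
cost = np.array([[0.2, 0.9, 0.8],
                 [0.7, 0.1, 0.9],
                 [0.8, 0.8, 0.3]])
print(assign_ids(cost))  # {0: 0, 1: 1, 2: 2}
```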
In this paper, we present ListenTree, an audio-haptic display embedded in the natural environment. A visitor to our installation notices a faint sound appearing to emerge from a tree, and might feel a slight vibration under their feet as they approach. By resting their head against the tree, they are able to hear sound through bone conduction. To create …
We present TRUSS, or Tracking Risk with Ubiquitous Smart Sensing, a novel system that infers and renders safety context on construction sites by fusing data from wearable devices, distributed sensing infrastructure, and video. Wearables stream real-time levels of dangerous gases, dust, noise, light quality, altitude, and motion to base stations that …
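As a rough sketch of what one wearable-to-base-station message in a system like this could look like (the field names, units, and JSON-over-UDP transport are all assumptions for illustration, not TRUSS's actual format):

```python
import json
import socket
import time

# Hypothetical reading from one wearable; field names are illustrative only.
reading = {
    "node_id": "wearable-07",
    "timestamp": time.time(),
    "co_ppm": 4.2,        # carbon monoxide level
    "dust_ugm3": 61.0,    # particulate concentration
    "noise_db": 88.5,
    "lux": 320.0,
    "altitude_m": 12.4,
    "accel_rms_g": 0.31,
}

# Fire-and-forget UDP datagram to a base station (address is made up).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(json.dumps(reading).encode("utf-8"), ("192.168.1.50", 5005))
sock.close()
```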
In this paper we present a vision for scalable indoor and outdoor auditory augmented reality (AAR), as well as HearThere, a wearable device and infrastructure demonstrating the feasibility of that vision. HearThere preserves the spatial alignment between virtual audio sources and the user's environment, using head tracking and bone conduction headphones to …
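Keeping virtual sources world-anchored amounts to re-expressing each source's position in head coordinates whenever the head tracker updates. The minimal yaw-only sketch below illustrates that idea; the function and coordinate conventions are assumed, not taken from HearThere.

```python
import math

def head_relative_azimuth(source_xy, head_xy, head_yaw_rad):
    """Angle of a world-anchored source relative to where the head points.
    0 = straight ahead, positive = to the listener's left (CCW)."""
    dx = source_xy[0] - head_xy[0]
    dy = source_xy[1] - head_xy[1]
    world_bearing = math.atan2(dy, dx)
    # Subtract head yaw so the source stays fixed in the world as the head turns.
    rel = world_bearing - head_yaw_rad
    return math.atan2(math.sin(rel), math.cos(rel))  # wrap to [-pi, pi]

# A source 2 m ahead stays at 0 rad until the head turns 90 degrees left,
# after which it sits at -pi/2 (now to the listener's right).
print(head_relative_azimuth((2.0, 0.0), (0.0, 0.0), 0.0))
print(head_relative_azimuth((2.0, 0.0), (0.0, 0.0), math.pi / 2))
```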
We present Patchwerk, a networked synthesizer module with tightly coupled web browser and tangible interfaces. Patchwerk connects to a pre-existing modular synthesizer using the emerging cross-platform HTML5 WebSocket standard to enable low-latency, high-bandwidth, concurrent control of analog signals by multiple users. Online users control physical …
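A minimal sketch of the browser-to-server leg of such a setup, assuming a JSON message format and the third-party `websockets` package; the URI, message schema, and parameter name are illustrative, not Patchwerk's actual protocol.

```python
import asyncio
import json

import websockets  # third-party package: pip install websockets

async def send_control(uri, param, value):
    """Push one control value (e.g. an oscillator pitch CV) to the server,
    which would map it onto an analog output driving the modular synth."""
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps({"param": param, "value": value}))
        print(await ws.recv())  # assumes the server acknowledges each update

# Hypothetical endpoint and parameter name.
asyncio.run(send_control("ws://localhost:8765", "osc1_pitch", 0.42))
```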
The tongue is known to have an extremely dense sensing resolution, as well as an extraordinary degree of neuroplasticity, the ability to adapt to and internalize new input. Research has shown that electro-tactile tongue displays paired with cameras can be used as vision prosthetics for the blind or visually impaired; users quickly learn to read and navigate …
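At its simplest, the camera-to-tongue mapping in displays of this kind reduces to downsampling each frame onto the electrode grid and scaling brightness to stimulation intensity. The sketch below shows that reduction; the grid size and intensity range are assumed, not taken from any particular device.

```python
import numpy as np

def frame_to_electrodes(gray_frame, grid=(20, 20), levels=256):
    """Downsample a grayscale camera frame (2-D array, 0-255) to a small
    electrode grid, returning per-electrode stimulation levels 0..levels-1."""
    h, w = gray_frame.shape
    gh, gw = grid
    # Average each block of pixels that maps onto one electrode.
    trimmed = gray_frame[: h - h % gh, : w - w % gw]
    blocks = trimmed.reshape(gh, trimmed.shape[0] // gh, gw, trimmed.shape[1] // gw)
    means = blocks.mean(axis=(1, 3))
    return np.clip(means / 255.0 * (levels - 1), 0, levels - 1).astype(np.uint8)

# Example: a synthetic 240x320 frame reduced to a 20x20 electrode pattern.
frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
print(frame_to_electrodes(frame).shape)  # (20, 20)
```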
Responding to rapid growth in sensor network deployments that outpaces research efforts to understand or relate the new data streams, this thesis presents a collection of interfaces to sensor network data that encourage open-ended browsing while emphasizing saliency of representation. These interfaces interpret, visualize, and communicate context from …