Gershon Dublon

An increasingly common requirement of computer systems is to extract information regarding the people present in an environment. In this article, we provide a comprehensive, multi-disciplinary survey of the existing literature, focusing mainly on the extraction of five commonly needed spatiotemporal properties, namely presence, count, location, track, and (More)
The ability to localize and identify multiple people is paramount to the inference of high-level activities for informed decision-making. In this paper, we describe the PEM-ID system, which uniquely identifies people tagged with accelerometer nodes in the video output of preinstalled infrastructure cameras. For this, we introduce a new distance measure (More)
We present an activity-recognition system for assisted living applications and smart homes. While existing systems tend to rely on expensive computation over comparatively large-dimension data sets, ours leverages information from a small number of fundamentally different sensor measurements that provide context information pertaining to the person's location, (More)
This work presents a method for 3D printing hair-like structures on both flat and curved surfaces. It allows a user to design and fabricate hair geometries that are smaller than 100 microns. We built a software platform to let users quickly define the hair angle, thickness, density, and height. The ability to fabricate customized hair-like structures not (More)
The tongue is known to have an extremely dense sensing resolution, as well as an extraordinary degree of neuroplasticity, the ability to adapt to and internalize new input. Research has shown that electro-tactile tongue displays paired with cameras can be used as vision prosthetics for the blind or visually impaired; users quickly learn to read and navigate (More)
We propose a system to identify people in a sensor network. The system fuses motion information measured from wearable accelerometer nodes with motion traces of each person detected by a camera node. This allows people to be uniquely identified with the IDs of the accelerometer nodes that they wear, while their positions are measured using the cameras. The (More)
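The fusion step described in this abstract can be illustrated with a minimal sketch: each wearable's acceleration signal is compared against the motion signal derived from each camera track, and the best-scoring pairing assigns IDs to tracks. The correlation score, the greedy assignment, and the function name `match_ids` below are all illustrative assumptions, not the paper's actual distance measure.

```python
import numpy as np

def match_ids(accel_signals, track_signals):
    """Greedily assign each wearable accelerometer signal to the camera
    track whose motion signal correlates with it most strongly.
    Hypothetical sketch; the published system uses its own distance measure.
    """
    def ncc(a, b):
        # Normalized cross-correlation at zero lag.
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.mean(a * b))

    # Score every (wearable, track) pair.
    scores = np.array([[ncc(a, t) for t in track_signals]
                       for a in accel_signals])

    assignment = {}
    used = set()
    # Resolve the most confident wearables first, one track per wearable.
    for i in np.argsort(-scores.max(axis=1)):
        masked = [s if k not in used else -np.inf
                  for k, s in enumerate(scores[i])]
        j = int(np.argmax(masked))
        assignment[int(i)] = j
        used.add(j)
    return assignment
```

A Hungarian (optimal) assignment over the same score matrix would be a natural alternative to the greedy loop when the number of people grows.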
In this paper, we present ListenTree, an audio-haptic display embedded in the natural environment. A visitor to our installation notices a faint sound appearing to emerge from a tree, and might feel a slight vibration under their feet as they approach. By resting their head against the tree, they are able to hear sound through bone conduction. To create (More)
In this paper we present a vision for scalable indoor and outdoor auditory augmented reality (AAR), as well as HearThere, a wearable device and infrastructure demonstrating the feasibility of that vision. HearThere preserves the spatial alignment between virtual audio sources and the user's environment, using head tracking and bone conduction headphones to (More)
We present TRUSS, or Tracking Risk with Ubiquitous Smart Sensing, a novel system that infers and renders safety context on construction sites by fusing data from wearable devices, distributed sensing infrastructure, and video. Wearables stream real-time levels of dangerous gases, dust, noise, light quality, altitude, and motion to base stations that (More)