OmniTouch is a wearable depth-sensing and projection system that enables interactive multitouch applications on everyday surfaces. Beyond the shoulder-worn system, there is no instrumentation of the user or environment. Foremost, the system allows the wearer to use their hands, arms and legs as graphical, interactive surfaces. Users can also transiently appropriate surfaces from the environment. …
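Depth-driven touch sensing of this kind is often approximated by comparing a tracked fingertip's depth against the surface directly beneath it. The Python sketch below illustrates that idea only; the function names and the threshold are assumptions, not OmniTouch's actual pipeline.

```python
import numpy as np

TOUCH_THRESHOLD_MM = 10.0  # fingertip within ~10 mm of the surface counts as a touch

def surface_depth_around(depth_mm: np.ndarray, x: int, y: int, r: int = 8) -> float:
    """Estimate the background surface depth near (x, y) from a depth image
    in millimeters, using the median of a small neighborhood. The median is
    robust to the few finger pixels inside the patch."""
    h, w = depth_mm.shape
    patch = depth_mm[max(0, y - r):min(h, y + r), max(0, x - r):min(w, x + r)]
    return float(np.median(patch))

def is_touching(depth_mm: np.ndarray, finger_x: int, finger_y: int) -> bool:
    """True when the fingertip depth is within TOUCH_THRESHOLD_MM of the
    surrounding surface, i.e., the finger has 'landed' rather than hovering."""
    finger_d = float(depth_mm[finger_y, finger_x])      # finger is closer to camera
    surface_d = surface_depth_around(depth_mm, finger_x, finger_y)
    return (surface_d - finger_d) < TOUCH_THRESHOLD_MM
```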
Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. …
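A minimal sketch of what such a calibration enables, assuming the projector is modeled as an inverse camera with intrinsics K and world pose [R | t] obtained from an offline calibration step (the matrices and values below are illustrative, not LightSpace's):

```python
import numpy as np

def world_to_projector_pixel(p_world, K, R, t):
    """Map a 3D world point (meters) to the projector pixel that illuminates it."""
    p_cam = R @ np.asarray(p_world, dtype=float) + t   # world -> projector frame
    u, v, z = K @ p_cam                                # perspective projection
    return u / z, v / z                                # normalize by depth

# Example: a hypothetical 1280x800 projector aligned with the world origin.
K = np.array([[1100.0,    0.0, 640.0],
              [   0.0, 1100.0, 400.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
print(world_to_projector_pixel([0.2, 0.1, 2.0], K, R, t))  # -> (750.0, 455.0)
```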
We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. …
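One way to picture this division of labor is as an input router that dispatches by modality. The sketch below uses a hypothetical event model and handler names, not the actual Surface prototype's API:

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    device: str    # "pen" or "touch"
    kind: str      # "down" or "up"
    x: float = 0.0
    y: float = 0.0

class PenTouchRouter:
    """Route by modality: the pen writes, touch manipulates,
    and pen strokes made while a finger is down invoke combined tools."""
    def __init__(self):
        self.touch_down = False

    def handle(self, e: InputEvent):
        if e.device == "touch":
            self.touch_down = (e.kind == "down")
            if self.touch_down:
                self.manipulate(e)           # touch manipulates
        elif e.kind == "down" and self.touch_down:
            self.combined_tool(e)            # pen + touch yields new tools
        elif e.kind == "down":
            self.write_ink(e)                # the pen writes

    def write_ink(self, e):      print(f"ink stroke at ({e.x}, {e.y})")
    def manipulate(self, e):     print(f"move/rotate object at ({e.x}, {e.y})")
    def combined_tool(self, e):  print(f"combined tool at ({e.x}, {e.y})")

r = PenTouchRouter()
r.handle(InputEvent("touch", "down", 1, 2))  # manipulate
r.handle(InputEvent("pen", "down", 3, 4))    # combined tool
r.handle(InputEvent("touch", "up"))
r.handle(InputEvent("pen", "down", 5, 6))    # ink
```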
Sphere is a multi-user, multi-touch-sensitive spherical display in which an infrared camera used for touch sensing shares the same optical path with the projector used for the display. This novel configuration permits: (1) the enclosure of both the projection and the sensing mechanism in the base of the device, and (2) easy 360-degree access for multiple users. …
We present ShadowGuides, a system for in-situ learning of multi-touch and whole-hand gestures on interactive surfaces. ShadowGuides provides on-demand assistance to the user by combining visualizations of the user's current hand posture as interpreted by the system (feedback) and available postures and completion paths necessary to finish the gesture (feedforward). …
IllumiRoom is a proof-of-concept system that augments the area surrounding a television with projected visualizations to enhance traditional gaming experiences. We investigate how projected visualizations in the periphery can negate, include, or augment the existing physical environment and complement the content displayed on the television screen. …
We introduce a set of statistical geometric tools designed to identify the objects being manipulated through speech and gesture in a multimodal augmented reality system. …
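As an illustration of the general flavor only (not the paper's actual statistics), one could rank candidate objects by how often they fall inside a cone around the user's pointing ray over recent frames; all names and the 10-degree half-angle below are hypothetical:

```python
import numpy as np

def inside_cone(obj_pos, ray_origin, ray_dir, half_angle_rad):
    """True when obj_pos lies within the cone around the unit ray direction."""
    v = np.asarray(obj_pos, dtype=float) - np.asarray(ray_origin, dtype=float)
    d = np.linalg.norm(v)
    if d == 0:
        return True
    return float(np.dot(v / d, ray_dir)) >= np.cos(half_angle_rad)

def rank_objects(objects, ray_history, half_angle_rad=np.radians(10)):
    """objects: {name: 3D position}; ray_history: list of (origin, unit_dir).
    Returns objects sorted by the fraction of frames spent inside the cone."""
    scores = {
        name: np.mean([inside_cone(pos, o, d, half_angle_rad)
                       for o, d in ray_history])
        for name, pos in objects.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

objs = {"lamp": (1.0, 0.0, 2.0), "book": (0.0, 0.0, 2.0)}
rays = [((0, 0, 0), np.array([0.0, 0.0, 1.0]))] * 5
print(rank_objects(objs, rays))  # "book" (on-axis) ranks first
```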
Multiple-monitor computer configurations significantly increase the distances that users must traverse with the mouse when interacting with existing applications, resulting in increased time and effort. We introduce the Multi-Monitor Mouse (M3) technique, which virtually simulates having one mouse pointer per monitor when using a single physical mouse device. …
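A minimal sketch of the per-monitor pointer idea, assuming a hypothetical windowing API that lets the caller warp the OS pointer: keep one saved pointer position per monitor, and on a switch (e.g., a hotkey) restore the last position used on the target monitor.

```python
class MultiMonitorMouse:
    def __init__(self, num_monitors: int):
        # One remembered pointer position per monitor (normalized coordinates),
        # initially centered on each screen.
        self.saved = {m: (0.5, 0.5) for m in range(num_monitors)}
        self.current = 0

    def switch_to(self, monitor: int, pointer_xy):
        """Save the pointer's position on the current monitor and return the
        last position used on the target monitor; the caller warps the single
        physical OS pointer to that position."""
        self.saved[self.current] = pointer_xy
        self.current = monitor
        return self.saved[monitor]

# Usage: on a hotkey press, warp the pointer to the returned coordinates.
m3 = MultiMonitorMouse(num_monitors=3)
warp_to = m3.switch_to(1, pointer_xy=(0.8, 0.2))
print(f"warp pointer to monitor 1 at {warp_to}")
```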
Multi-touch interfaces allow users to translate, rotate, and scale digital objects in a single interaction. However, this freedom represents a problem when users intend to perform only a subset of manipulations. A user trying to scale an object in a print layout program, for example, might find that the object was also slightly translated and rotated, …
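One common way to enforce such separability (offered here as an assumed approach, not necessarily the paper's technique) is to decompose the two-touch motion into translation, rotation, and scale components, then neutralize the components the user has not enabled:

```python
import math

def two_touch_transform(p1, p2, q1, q2, allow=("translate", "rotate", "scale")):
    """p1, p2: previous positions of two touches; q1, q2: current positions.
    Returns (dx, dy, dtheta, dscale) with disabled components neutralized."""
    pcx, pcy = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2   # previous centroid
    qcx, qcy = (q1[0] + q2[0]) / 2, (q1[1] + q2[1]) / 2   # current centroid
    dx, dy = qcx - pcx, qcy - pcy                         # translation

    ang_p = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    ang_q = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    dtheta = ang_q - ang_p                                # rotation

    dist_p = math.dist(p1, p2)
    dist_q = math.dist(q1, q2)
    dscale = dist_q / dist_p if dist_p else 1.0           # scale

    if "translate" not in allow: dx, dy = 0.0, 0.0
    if "rotate"    not in allow: dtheta = 0.0
    if "scale"     not in allow: dscale = 1.0
    return dx, dy, dtheta, dscale

# A scale-only mode discards the incidental translation and rotation:
print(two_touch_transform((0, 0), (1, 0), (0.1, 0), (1.3, 0.1), allow=("scale",)))
```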