Luc E. Julia

In this paper, we discuss how multiple input modalities may be combined to produce more natural user interfaces. To illustrate this technique, we present a prototype map-based application for a travel planning domain. The application is distinguished by a synergistic combination of handwriting, gesture, and speech modalities; access to existing data sources…
The design and development of the Open Agent Architecture (OAA) system has focused on providing access to agent-based applications through intelligent, cooperative, distributed, and multimodal agent-based user interfaces. The current multimodal interface supports a mix of spoken language, handwriting, and gesture, and is adaptable to the user's…
Indoor mobile robots are becoming reliable enough in navigation tasks to consider working with teams of robots. Using SRI International's Open Agent Architecture (OAA) and SAPHIRA robot-control system, we configured three physical robots and a set of software agents on the internet to plan and act in coordination. Users communicate with the robots using a…
In 1994, inspired by a Wizard of Oz (WOZ) simulation experiment, we developed a working prototype of a system that enables users to interact with a map display through synergistic combinations of pen and voice. To address many of the issues raised by multimodal fusion, our implementation employed a distributed multi-agent framework to coordinate parallel…
We discuss ongoing work investigating how humans interact with multimodal systems, focusing on how successful reference to objects and events is accomplished. We describe an implemented multimodal travel guide application being employed in a set of Wizard of Oz experiments from which data about user interactions are gathered. We offer a preliminary analysis…