Luc E. Julia

The design and development of the Open Agent Architecture (OAA) system has focused on providing access to agent-based applications through an intelligent, cooperative, distributed, and multimodal agent-based user interface. The current multimodal interface supports a mix of spoken language, handwriting and gesture, and is adaptable to the user's …
In this paper, we discuss how multiple input modalities may be combined to produce more natural user interfaces. To illustrate this technique, we present a prototype map-based application for a travel planning domain. The application is distinguished by a synergistic combination of handwriting, gesture and speech modalities; access to existing data sources …
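The kind of synergistic pen-and-speech combination described in this abstract is often implemented by resolving a deictic word in the utterance ("here") against the gesture closest in time. The sketch below illustrates that idea only; the event types, the `fuse` function, and the fixed time window are hypothetical and not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GestureEvent:
    x: float          # map coordinates of a pen tap
    y: float
    timestamp: float  # seconds since session start

@dataclass
class SpeechEvent:
    text: str         # recognized utterance
    timestamp: float

def fuse(speech: SpeechEvent, gestures: list[GestureEvent],
         window: float = 2.0) -> Optional[dict]:
    """Resolve a deictic reference ("here") against the pen gesture
    closest in time to the utterance, within a fixed time window."""
    if "here" not in speech.text:
        return None
    candidates = [g for g in gestures
                  if abs(g.timestamp - speech.timestamp) <= window]
    if not candidates:
        return None
    g = min(candidates, key=lambda g: abs(g.timestamp - speech.timestamp))
    return {"command": speech.text, "location": (g.x, g.y)}

# A tap on the map followed shortly by "show hotels near here"
# resolves "here" to the tapped coordinates.
result = fuse(SpeechEvent("show hotels near here", 1.5),
              [GestureEvent(3.0, 4.0, 1.0)])
```

A real system would of course handle many more reference types (circled regions, crossed-out objects) and weigh recognition confidence from each modality, not just temporal proximity.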
… enough in navigation tasks to consider working with teams of robots. Using SRI International's Open Agent Architecture (OAA) and SAPHIRA robot-control system, we configured three physical robots and a set of software agents on the Internet to plan and act in coordination. Users communicate with the robots using a variety of multimodal input: pen, voice, and …
Full-motion video has inherent advantages over still imagery for characterizing events and movement. Military and intelligence analysts currently view live video imagery from airborne and ground-based video platforms, but few tools exist for efficient exploitation of the video and its accompanying metadata. In pursuit of this goal, SRI has developed MVIEWS, …
This paper describes a prototype application which combines speaker identification technology and an agent architecture to provide user-definable monitors for incoming voicemail messages. Through a Web-distributable Java user interface, the user may enter requests by using spoken or typed natural language. Multiple distributed agents process the requests, …
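A user-definable monitor of the sort this abstract describes can be pictured as a set of rules, each pairing a predicate over incoming messages with a notification action. The sketch below is a minimal illustration under that assumption; the `MonitorAgent` class and its methods are invented for this example and are not the paper's actual agent API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Voicemail:
    speaker: str      # label assigned by a speaker-identification agent
    transcript: str   # text of the message

class MonitorAgent:
    """Minimal sketch of a user-definable voicemail monitor: each rule
    pairs a predicate over incoming messages with an action to run."""
    def __init__(self) -> None:
        self.rules: list[tuple[Callable[[Voicemail], bool],
                               Callable[[Voicemail], None]]] = []

    def add_rule(self, predicate, action) -> None:
        self.rules.append((predicate, action))

    def on_message(self, msg: Voicemail) -> None:
        # Fire every rule whose predicate matches the incoming message.
        for predicate, action in self.rules:
            if predicate(msg):
                action(msg)

# Example: "notify me when Alice calls"
alerts: list[str] = []
agent = MonitorAgent()
agent.add_rule(lambda m: m.speaker == "alice",
               lambda m: alerts.append(f"New voicemail from {m.speaker}"))
agent.on_message(Voicemail("alice", "call me back"))
```

In the paper's setting, the predicate would come from a natural-language request parsed by other agents, and the speaker label from an identification agent, rather than being hand-written lambdas.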
The space around us contains information of different types: local, contextual and global. By enhancing in an unobtrusive way a person's interaction with the space they live in, we aim to supplement this information both in a proactive way according to a user's interests and tasks, and through "natural" requests. In the CANOES project, we are …
In this paper we present SRI's vision of the human-machine interface for a car environment. This interface leverages our work in human-computer interaction, speech, speaker and gesture recognition, natural language understanding, and intelligent agent architectures. We propose a natural interface that allows the driver to interact with the navigation system, …
In 1994, inspired by a Wizard of Oz (WOZ) simulation experiment, we developed a working prototype of a system that enables users to interact with a map display through synergistic combinations of pen and voice. To address many of the issues raised by multimodal fusion, our implementation employed a distributed multi-agent framework to coordinate parallel …