Corpus ID: 17600612

Applying the Wizard-of-Oz Technique to Multimodal Human-Robot Dialogue

Matthew Marge, Claire Bonial, Brendan Byrne, Taylor Cassidy, A. William Evans, Susan G. Hill, Clare R. Voss
Our overall program objective is to provide more natural ways for soldiers to interact and communicate with robots, much like how soldiers communicate with other soldiers today. We describe how the Wizard-of-Oz (WOz) method can be applied to multimodal human-robot dialogue in a collaborative exploration task. While the WOz method can help design robot behaviors, traditional approaches place the burden of decisions on a single wizard. In this work, we consider two wizards to stand in for robot… 


Assessing Agreement in Human-Robot Dialogue Strategies: A Tale of Two Wizards
It is concluded that, without control sessions, the Wizard-of-Oz method would have been unlikely to achieve both the natural diversity of expression that comes with multiple wizards and a better protocol for modeling an automated system.
Laying Down the Yellow Brick Road: Development of a Wizard-of-Oz Interface for Collecting Human-Robot Dialogue
This work presents the adaptation and refinement of a graphical user interface designed to facilitate a Wizard-of-Oz (WoZ) approach to collecting human-robot dialogue data, showing that the fixed set of utterances and templates therein provides for a natural pace of dialogue with good coverage of the navigation domain.
A Classification-Based Approach to Automating Human-Robot Dialogue
A dialogue system based on statistical classification is used to automate human-robot dialogue in a collaborative navigation domain; response accuracy is found to be generally high, even with very limited training data.
Balancing Efficiency and Coverage in Human-Robot Dialogue Collection
This work shows that the Wizard-of-Oz approach yields useful training data for navigation-based HRI tasks, supporting more efficient targeted data collection and improved natural language understanding from GUI-collected data.
Exploring Variation of Natural Human Commands to a Robot in a Collaborative Navigation Task
A Wizard-of-Oz study simulates a robot's limited understanding and collects dialogues in which human participants build a progressively better mental model of that understanding; participants initially preferred to include metric information in motion commands, but this preference decreased over time, suggesting changes in their perception of the robot.
Human-Robot Dialogue and Collaboration in Search and Navigation
The corpus described here provides insight into the translation and interpretation a natural language instruction undergoes starting from verbal human intent, to understanding and processing, and ultimately, to robot execution.
Augmenting Abstract Meaning Representation for Human-Robot Dialogue
The design scheme presented here, though task-specific, is extendable for broad coverage of speech acts using AMR in future task-independent work.
A Two-Level Interpretation of Modality in Human-Robot Dialogue
A two-level annotation scheme for modality is presented that captures both content and intent, integrating a logic-based, semantic representation and a task-oriented, pragmatic representation that maps to the robot’s capabilities.
Dialogue-AMR: Abstract Meaning Representation for Dialogue
A schema is described that enriches Abstract Meaning Representation (AMR) to provide a semantic representation for facilitating Natural Language Understanding (NLU) in dialogue systems; the enhanced AMR represents not only the content of an utterance but also the illocutionary force behind it, as well as tense and aspect.
Graph-to-Graph Meaning Representation Transformations for Human-Robot Dialogue
A two-step NLU approach is established in which automatically-obtained AMR graphs of the input language are converted into in-domain meaning representation graphs augmented with tense, aspect, and speech act information, thereby bridging the gap from unconstrained natural language input to a fixed set of robot actions.


Finding the FOO: a pilot study for a multimodal interface
  • D. Perzanowski, Derek P. Brock, M. Skubic
  • SMC'03: 2003 IEEE International Conference on Systems, Man and Cybernetics
  • 2003
A Wizard-of-Oz pilot study with five participants, each of whom collaborated with a robot on a search task in a separate room using a subset of the multimodal interface, which supports speech and gestural inputs.
Applying the Wizard-of-Oz framework to cooperative service discovery and configuration
  • A. Green, H. Hüttenrauch, K. S. Eklundh
  • RO-MAN 2004: 13th IEEE International Workshop on Robot and Human Interactive Communication
  • 2004
This work describes how the Wizard-of-Oz framework can be applied to a service robotics scenario involving collaborative, multimodal dialogue for service discovery and configuration.
Comparing Heads-up, Hands-free Operation of Ground Robots to Teleoperation
When operators used heads-up, hands-free operation, which lets an operator control a UGV through operator-following behaviors and a gesture interface, they completed missions faster, recalled their surroundings better, and experienced lower cognitive load than when they teleoperated the robot.
Wizard of Oz studies in HRI
This work systematically reviews how researchers conducted Wizard of Oz experiments published in the primary HRI publication venues from 2001–2011 and proposes new reporting guidelines to aid future research.
Turn-Taking in Commander-Robot Navigator Dialog (Video Abstract)
Human-Robot Interactions in Future Military Operations
Military HRI Research Conducted Using a Scale MOUT Facility
  • In Human-Robot Interactions in Future Military Operations, pp. 419–431, 2010