— We present a multimodal system for creating, modifying, and commanding groups of robots from a population. Extending our previous work on selecting an individual robot from a population by face engagement, we show that we can dynamically create groups of a desired number of robots by speaking the number we desire, e.g. "You three", and looking at the…
— We describe a system whereby multiple humans and mobile robots interact robustly using a combination of sensing and signalling modalities. Extending our previous work on selecting an individual robot from a population by face engagement, we show that reaching toward a robot (a specialization of pointing) can be used to designate a particular robot for…
— We present a multi-modal multi-robot interaction whereby a user can identify an individual robot or a group of robots using haptic stimuli, and name them using a voice command (e.g. "<i>You two are green</i>"). Subsequent commands can be addressed to the same robot(s) by name (e.g. "<i>Green! Take off!</i>"). We demonstrate this as part of a real-world integrated…
— We present an integrated human-robot interaction system that enables a user to select and command a team of two Unmanned Aerial Vehicles (UAVs) using voice, touch, face engagement, and hand gestures. The system integrates multiple human multi-robot interaction interfaces as well as a navigation and mapping algorithm in a coherent, semi-realistic scenario.…