Shokoofeh Pourmehr

We present a multimodal system for creating, modifying and commanding groups of robots from a population. Extending our previous work on selecting an individual robot from a population by face engagement, we show that we can dynamically create groups of a desired number of robots by speaking the number we desire, e.g. "You three", and looking at the …
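A minimal sketch of how such count-plus-gaze grouping could be realized, assuming 2D positions for the user and robots and an estimated gaze heading; the function name and geometry below are illustrative, not taken from the paper:

import math

def select_group_by_gaze(user_xy, gaze_heading, robots, k):
    # robots: dict mapping robot_id -> (x, y). Returns the k IDs whose
    # bearing from the user lies closest to the gaze heading (radians).
    def angular_offset(pos):
        bearing = math.atan2(pos[1] - user_xy[1], pos[0] - user_xy[0])
        diff = abs(bearing - gaze_heading) % (2 * math.pi)
        return min(diff, 2 * math.pi - diff)  # wrap to [0, pi]
    ranked = sorted(robots, key=lambda rid: angular_offset(robots[rid]))
    return ranked[:k]

# "You three", looking roughly east (heading 0.0):
print(select_group_by_gaze((0, 0), 0.0,
                           {1: (5, 1), 2: (4, -1), 3: (6, 0), 4: (-3, 2)}, k=3))

Here the spoken number only fixes the group size; which robots join the group is decided entirely by where the user is looking.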
We describe a system whereby multiple humans and mobile robots interact robustly using a combination of sensing and signalling modalities. Extending our previous work on selecting an individual robot from a population by face engagement, we show that reaching toward a robot, a specialization of pointing, can be used to designate a particular robot for …
We present a multi-modal multi-robot interaction whereby a user can identify an individual or a group of robots using haptic stimuli, and name them using a voice command (e.g. "You two are green"). Subsequent commands can be addressed to the same robot(s) by name (e.g. "Green! Take off!"). We demonstrate this as part of a real-world integrated …
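One way to picture the name-then-command pattern is a small registry that binds a spoken label to a set of robot IDs and later routes commands by label; the class and the transport callable below are a hypothetical sketch, not the paper's implementation:

class GroupRegistry:
    # Maps spoken labels (e.g. "green") to sets of robot IDs.
    def __init__(self, send_command):
        self._groups = {}          # label -> set of robot IDs
        self._send = send_command  # callable(robot_id, action)

    def name_group(self, label, robot_ids):
        # e.g. after selecting robots 3 and 7 by touch:
        # "You two are green" -> name_group("green", [3, 7])
        self._groups[label] = set(robot_ids)

    def command(self, label, action):
        # e.g. "Green! Take off!" -> command("green", "take_off")
        for robot_id in sorted(self._groups.get(label, ())):
            self._send(robot_id, action)

# Toy usage with a stand-in transport:
registry = GroupRegistry(lambda rid, act: print(f"robot {rid}: {act}"))
registry.name_group("green", [3, 7])
registry.command("green", "take_off")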
We present an integrated human-robot interaction system that enables a user to select and command a team of two Unmanned Aerial Vehicles (UAVs) using voice, touch, face engagement and hand gestures. This system integrates multiple human multi-robot interaction interfaces as well as a navigation and mapping algorithm in a coherent semi-realistic scenario. …
We report the actions of untrained users when instructed to "make the robot come to you". The robot is a generic wheeled mobile robot located 8 m away and is driven by the experimenter without the knowledge of the participant. The results show a variety of calls and gestures made to the robot that changed over time. We observed two distinct behaviour …
We present a simple probabilistic framework for multimodal sensor fusion that allows a mobile robot to reliably locate and approach the most promising interaction partner among a group of people, in an uncontrolled environment. Our demonstration integrates three complementary sensor modalities, each of which detects features of nearby people. The output is …
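As a rough illustration of the fusion idea, assuming each modality is wrapped as a per-candidate likelihood and that modalities are conditionally independent given the candidate (a naive-Bayes-style combination; the detector names below are invented):

import numpy as np

def fuse_and_select(candidates, detectors):
    # candidates: list of person hypotheses (e.g. positions or IDs)
    # detectors:  callables mapping a candidate to a likelihood in [0, 1]
    scores = np.ones(len(candidates))
    for detect in detectors:
        scores *= np.array([detect(c) for c in candidates])
    total = scores.sum()
    if total > 0:
        scores /= total            # normalise to posterior-like weights
    return candidates[int(np.argmax(scores))], scores

# Toy run with three invented modalities scoring two people:
people = ["person_A", "person_B"]
face  = lambda p: 0.9 if p == "person_A" else 0.2   # facing the robot?
sound = lambda p: 0.6 if p == "person_A" else 0.5   # speaking nearby?
legs  = lambda p: 0.8                               # laser leg detector
best, weights = fuse_and_select(people, [face, sound, legs])
print(best, weights)

Multiplying likelihoods means a person must score reasonably on every modality to win, which is what makes the combination robust to any single noisy detector.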