Shokoofeh Pourmehr

We present a multimodal system for creating, modifying and commanding groups of robots from a population. Extending our previous work on selecting an individual robot from a population by face engagement, we show that we can dynamically create a group of a desired number of robots by speaking that number, e.g. "You three", and looking at the robots …
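As a rough illustration of the interaction described above, the sketch below selects the spoken number of robots the user is most face-engaged with. It is a minimal sketch, not the system's implementation: the word list, engagement scores and function names are assumptions for the example.

```python
# Illustrative sketch only: pick the n robots the user is most face-engaged with,
# where n comes from a spoken command such as "You three". Engagement scores are
# assumed to come from an upstream face-detection component (hypothetical here).

WORD_TO_COUNT = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def parse_requested_count(utterance: str):
    """Extract the desired group size from a command like 'You three'."""
    for token in utterance.lower().split():
        if token in WORD_TO_COUNT:
            return WORD_TO_COUNT[token]
    return None

def select_group(engagement: dict, utterance: str) -> list:
    """Return the IDs of the n robots with the highest face-engagement scores."""
    n = parse_requested_count(utterance)
    if n is None:
        return []
    ranked = sorted(engagement, key=engagement.get, reverse=True)
    return ranked[:n]

# Example: four robots with engagement scores; the user says "You three".
print(select_group({"r1": 0.9, "r2": 0.7, "r3": 0.4, "r4": 0.1}, "You three"))
# -> ['r1', 'r2', 'r3']
```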
We describe a system whereby multiple humans and mobile robots interact robustly using a combination of sensing and signalling modalities. Extending our previous work on selecting an individual robot from a population by face engagement, we show that reaching toward a robot, a specialization of pointing, can be used to designate a particular robot for …
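One simple way to resolve a reach to a robot is to compare the reach direction with the bearing to each robot, as in the hedged sketch below. The frame, angular threshold and coordinates are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch only: resolve a reaching gesture to the robot it designates by
# comparing the shoulder->hand ray with the bearing to each robot (2-D, common frame).

import math

def designated_robot(shoulder, hand, robots, max_angle_deg=15.0):
    """Return the robot ID whose bearing best matches the reach ray, or None."""
    reach = (hand[0] - shoulder[0], hand[1] - shoulder[1])
    best_id, best_angle = None, max_angle_deg
    for robot_id, (rx, ry) in robots.items():
        to_robot = (rx - shoulder[0], ry - shoulder[1])
        angle = math.degrees(
            math.atan2(to_robot[1], to_robot[0]) - math.atan2(reach[1], reach[0])
        )
        angle = abs((angle + 180) % 360 - 180)  # wrap the difference to [0, 180]
        if angle < best_angle:
            best_id, best_angle = robot_id, angle
    return best_id

# Example: the reach ray points roughly at robot "b".
print(designated_robot((0, 0), (1, 0.1), {"a": (2, 2), "b": (3, 0.2)}))  # -> 'b'
```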
We present a multi-modal, multi-robot interaction whereby a user can identify an individual robot or a group of robots using haptic stimuli, and name them using a voice command (e.g. "You two are green"). Subsequent commands can be addressed to the same robot(s) by name (e.g. "Green! Take off!"). We demonstrate this as part of a real-world integrated …
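The naming step amounts to binding a spoken label to the currently selected robots and dispatching later commands through that label. The sketch below shows one such registry; the class, identifiers and command strings are hypothetical placeholders, not the system's actual grammar.

```python
# Illustrative sketch only: bind a spoken name to a selected group of robots and
# address the group by that name later. Selection (haptic stimuli) happens elsewhere.

class GroupRegistry:
    def __init__(self):
        self._groups = {}

    def name_group(self, name: str, robot_ids: list) -> None:
        """'You two are green' -> bind the currently selected robots to 'green'."""
        self._groups[name.lower()] = list(robot_ids)

    def dispatch(self, name: str, command: str) -> list:
        """'Green! Take off!' -> (robot_id, command) pairs for every group member."""
        return [(rid, command) for rid in self._groups.get(name.lower(), [])]

registry = GroupRegistry()
registry.name_group("green", ["uav_1", "uav_2"])  # selection done elsewhere
print(registry.dispatch("green", "take off"))
# -> [('uav_1', 'take off'), ('uav_2', 'take off')]
```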
We present an integrated human-robot interaction system that enables a user to select and command a team of two Unmanned Aerial Vehicles (UAVs) using voice, touch, face engagement and hand gestures. This system integrates multiple human multi-robot interaction interfaces as well as a navigation and mapping algorithm in a coherent, semi-realistic scenario. …
We report the actions of untrained users when instructed to "make the robot come to you". The robot is a generic wheeled mobile robot located 8 m away and is driven by the experimenter without the participant's knowledge. The results show a variety of calls and gestures made to the robot that changed over time. We observed two distinct behaviour …
We present a probabilistic multimodal system for a robot to detect and approach the most promising interaction partner in a crowd of uninstrumented people outdoors. To achieve robust operation, the system integrates three multimodal percepts of humans and regulates the robot's behaviour to approach the location with the highest probability of an engaged human. A …
We present a novel multimodal system for creating and commanding groups of robots from a population. Extending our previous work on dynamically creating groups of robots using face engagement and voice commands, we show that we can identify an individual robot or a group of robots using haptic stimuli, and name them using a voice command (e.g. "You two, join …
We present a simple probabilistic framework for multimodal sensor fusion that allows a mobile robot to reliably locate and approach the most promising interaction partner among a group of people in an uncontrolled environment. Our demonstration integrates three complementary sensor modalities, each of which detects features of nearby people. The output is …
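To make the fusion idea concrete, the sketch below combines three per-modality likelihood maps over a coarse grid of candidate locations and picks the most promising cell to approach. It is a minimal sketch under an independence assumption; the modality names, grid and random example data are assumptions, not the framework's actual pipeline.

```python
# Illustrative sketch only: fuse three sensing modalities over a grid of candidate
# locations (element-wise product, i.e. naive-Bayes-style fusion under independence)
# and return the cell with the highest fused probability as the approach goal.

import numpy as np

def fuse_and_pick(per_modality_maps):
    """Multiply per-modality likelihood maps, normalise, and return the argmax cell."""
    fused = np.ones_like(per_modality_maps[0])
    for likelihood in per_modality_maps:
        fused *= likelihood              # independence assumption across modalities
    fused /= fused.sum()                 # normalise to a probability map
    return np.unravel_index(np.argmax(fused), fused.shape)

# Example: three 4x4 likelihood maps (e.g. leg detector, face detector, sound source).
rng = np.random.default_rng(0)
maps = [rng.uniform(0.01, 1.0, size=(4, 4)) for _ in range(3)]
goal_cell = fuse_and_pick(maps)
print("drive toward grid cell:", goal_cell)
```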