Two-handed gesture recognition and fusion with speech to command a robot

@article{Burger2012TwohandedGR,
  title={Two-handed gesture recognition and fusion with speech to command a robot},
  author={B. Burger and I. Ferran{\'e} and F. Lerasle and G. Infantes},
  journal={Autonomous Robots},
  year={2012},
  volume={32},
  pages={129--147}
}
Assistance is currently a pivotal research area in robotics, with huge societal potential. Since assistant robots directly interact with people, finding natural and easy-to-use user interfaces is of fundamental importance. This paper describes a flexible multimodal interface based on speech and gesture modalities to control our mobile robot named Jido. The vision system uses a stereo head mounted on a pan-tilt unit and a bank of collaborative particle filters devoted to the upper human…
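To make the tracking machinery mentioned in the abstract concrete, below is a minimal bootstrap particle filter for tracking a 2-D point (e.g. a hand in image coordinates). This is an illustrative sketch only: the random-walk motion model, Gaussian observation model, and parameter values are assumptions, not the collaborative filter bank the authors describe.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, motion_std=0.05, obs_std=0.1):
    """One predict-update-resample cycle of a bootstrap particle filter.

    particles: (N, 2) array of hypothesized 2-D positions (e.g. a hand in image space).
    observation: observed 2-D position from a detector.
    """
    # Predict: diffuse particles with a random-walk motion model (an assumption here).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by its likelihood under a Gaussian observation model.
    sq_err = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * sq_err / obs_std**2)
    weights /= weights.sum()
    # Resample: draw particles proportionally to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Usage: track a point drifting along a line through noisy observations.
N = 500
particles = rng.uniform(0.0, 1.0, (N, 2))
weights = np.full(N, 1.0 / N)
for t in range(20):
    truth = np.array([0.05 * t, 0.5])
    obs = truth + rng.normal(0.0, 0.1, 2)
    particles, weights = particle_filter_step(particles, weights, obs)
estimate = particles.mean(axis=0)  # posterior mean as the track estimate
```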
Controlling a Mobile Robot with Natural Commands based on Voice and Gesture
TLDR
A real-time system for the control of a small mobile robot using combined audio (speech) and video (gesture) commands is provided, along with a comprehensive analysis of its performance in an indoor, reverberant environment.
Gesture-based human-robot interaction for human assistance in manufacturing
TLDR
A gesture-based HRI framework is proposed in which a robot assists a human co-worker by delivering tools and parts and holding objects for an assembly operation, together with a parameterization robotic task manager (PRTM) in which the co-worker selects/validates robot options using gestures.
How can human communicate with robot by hand gesture?
  • Thi-Thanh-Hai Tran
  • Computer Science
  • 2013 International Conference on Computing, Management and Telecommunications (ComManTel)
  • 2013
TLDR
This paper proposes using both techniques for hand gesture recognition, which significantly reduces computational time compared with the traditional use of a cascaded AdaBoost classifier and shape analysis techniques.
A proposed gesture set for the control of industrial collaborative robots
TLDR
This work proposes, implements and evaluates a set of common ground rules for automatic recognition of human-robot interaction gestures, and presents the design constraints between different groups of requirements as well as a technical solution for automatic recognition using imaging hardware.
Interacting with robots via speech and gestures, an integrated architecture
TLDR
The main purpose of this work is to develop a general framework for multimodal human-robot communication that allows users to interact with robots using speech and gestures integrated into unique commands.
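As one illustration of how speech and gestures might be integrated into unique commands, a common late-fusion strategy is temporal alignment: pair each recognized speech command with the gesture closest in time within a short window. The sketch below assumes this strategy; the Event type, the labels, and the 1.5 s window are illustrative, not the architecture described in the paper.

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # "speech" or "gesture"
    label: str      # e.g. "go there", "point_left"
    t: float        # timestamp in seconds

def fuse(speech: Event, gestures: list[Event], window: float = 1.5):
    """Pair a speech command with the gesture closest in time within `window` seconds."""
    candidates = [g for g in gestures if abs(g.t - speech.t) <= window]
    if not candidates:
        return (speech.label, None)   # speech-only command
    best = min(candidates, key=lambda g: abs(g.t - speech.t))
    return (speech.label, best.label)

# e.g. "go there" spoken at t=10.2 s pairs with the pointing gesture at t=9.8 s
cmd = fuse(Event("speech", "go there", 10.2),
           [Event("gesture", "point_left", 9.8), Event("gesture", "wave", 5.0)])
```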
Advanced Gesture and Pose Recognition Algorithms Using Computational Intelligence and Microsoft KINECT Sensor
TLDR
This work suggests an approach to developing a natural human-robot interface for the control of a four-wheel differentially steered mobile robot, and showed satisfactory results in terms of small classification error, simple human operator training and user comfort.
Gesture-based robot control with variable autonomy from the JPL BioSleeve
TLDR
This paper presents a new gesture-based human interface for natural robot control that enables, for example, supervisory point-to-goal commands, a virtual joystick for guarded teleoperation, and high-degree-of-freedom mimicked manipulation, all from a single device.
Pointing Estimation for Human-Robot Interaction Using Hand Pose, Verbal Cues, and Confidence Heuristics
TLDR
The ability of a robot to determine pointing direction using data collected from a Microsoft Kinect camera is demonstrated, and the pose of the index finger can act as a fallback when full-body pose information is not available.
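A typical way to estimate a pointing target from skeleton data such as Kinect joints is to cast a ray from the hand through the fingertip and intersect it with a known plane such as the floor or a tabletop. The sketch below assumes this ray-plane formulation; the joint names, camera frame, and plane parameters are illustrative, not the paper's exact heuristics.

```python
import numpy as np

def pointing_target(hand, fingertip, plane_point, plane_normal):
    """Intersect the ray from `hand` through `fingertip` with a plane.

    All arguments are 3-D points/vectors in the camera frame (e.g. Kinect
    skeleton joints). Returns the intersection point, or None if the ray
    is parallel to the plane or points away from it.
    """
    direction = fingertip - hand
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-6:
        return None  # ray parallel to the plane
    t = np.dot(plane_normal, plane_point - hand) / denom
    if t < 0:
        return None  # plane is behind the pointing hand
    return hand + t * direction

# Example: pointing from shoulder height toward the floor (y = 0 plane, y up).
hand = np.array([0.0, 1.2, 0.5])
fingertip = np.array([0.1, 1.1, 0.8])
target = pointing_target(hand, fingertip,
                         np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```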
User-defined gestures for controlling primitive motions of an end effector
TLDR
The hands were the parts of the body used most often for gesture articulation, even when the participants were holding tools and objects with both hands, and the participants expected better recognition performance for gestures that were easy to think of and perform.
Fusing Hand Postures and Speech Recognition for Tasks Performed by an Integrated Leg–Arm Hexapod Robot
Hand postures and speech are convenient means of communication for humans and can be used in human–robot interaction. Based on structural and functional characteristics of our integrated leg-arm…

References

Showing 1-10 of 69 references
A Gesture Based Interface for Human-Robot Interaction
TLDR
A gesture interface for the control of a mobile robot equipped with a manipulator uses a camera to track a person and, combined with the Viterbi algorithm, recognizes gestures defined through arm motion.
Visual recognition of pointing gestures for human-robot interaction
TLDR
This system aims at applications in the field of human-robot interaction, where it is important to do run-on recognition in real time to allow for robot egomotion and not to rely on manual initialization.
Multimodal human-robot interaction in an assistive technology context
In this paper, we present a prototype robotic system that captures, processes and fuses speech, vision and laser-depth data to more accurately interpret and perform simple tasks in a domestic…
Recognizing complex, parameterized gestures from monocular image sequences
TLDR
A system that is able to spot and recognize complex, parameterized gestures from monocular image sequences, using a few expressive features extracted from a compact representation as input to hidden Markov models (HMMs).
Camera-based gesture recognition for robot control
  • A. Corradini, H. Groß
  • Computer Science
  • Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium
  • 2000
TLDR
Four architectures for gesture-based interaction between a human being and an autonomous mobile robot, using the aforementioned techniques or a hybrid combination of them, are presented.
HMM-Based Gesture Recognition for Robot Control
TLDR
A hidden Markov model (HMM) that takes a continuous stream as input and can automatically segment and recognize human gestures is used; experiments verified the feasibility and validity of the proposed system.
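For context, the standard decoding step behind HMM-based gesture recognizers is the Viterbi algorithm, which finds the most likely hidden state sequence for an observation stream. Below is a minimal log-space implementation for discrete observations; the model sizes and probabilities in the usage example are illustrative, not taken from the paper.

```python
import numpy as np

def viterbi(log_A, log_B, log_pi, obs):
    """Most likely hidden state sequence for a discrete observation sequence `obs`.

    log_A:  (S, S) log transition matrix, log_A[i, j] = log P(j | i)
    log_B:  (S, V) log emission matrix, log_B[s, o] = log P(o | s)
    log_pi: (S,)   log initial state distribution
    """
    S, T = log_A.shape[0], len(obs)
    delta = np.empty((T, S))        # best log-probability of any path ending in each state
    psi = np.zeros((T, S), int)     # backpointers to the best predecessor state
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # (S, S): from-state x to-state
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    # Backtrack from the best final state.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Tiny usage example with 2 states and 3 observation symbols (values illustrative).
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
states = viterbi(log_A, log_B, log_pi, [0, 1, 2])
```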
Natural human-robot interaction using speech, head pose and gestures
TLDR
The authors' systems for spontaneous speech recognition, multimodal dialogue processing and visual perception of a user, which include the recognition of pointing gestures as well as the recognition of a person's head orientation, are presented.
Mutual assistance between speech and vision for human-robot interface
TLDR
This paper presents a user interface for a service robot that can bring objects requested by the user, and demonstrates promising results through experiments.
Vision-based hand pose estimation: A review
TLDR
A literature review of the second research direction, which aims to capture the real 3D motion of the hand, a very challenging problem in the context of HCI.
Real-Time Vision Based Gesture Recognition for Human-Robot Interaction
TLDR
This paper uses a simple queue-matching method for recognition and applies it in an animation system, which can select subjects effectively and recognize gestures in a multi-person environment.