Hand Gesture Recognition for Real Time Human Machine Interaction System

@article{Sonwalkar2015HandGR,
  title={Hand Gesture Recognition for Real Time Human Machine Interaction System},
  author={Poonam Sonwalkar and Tanuja Sakhare and Ashwini R. Patil and Sonal A. Kale and Nutan Maharashtra and Bhavana S. Pansare},
  journal={International Journal of Engineering Trends and Technology},
  year={2015},
  volume={19},
  pages={262--264}
}
A real-time human-machine interaction system using hand gesture recognition to handle mouse events, a media player, and an image viewer. Users have to repeat the same mouse and keyboard actions, wasting time. Gestures have long been considered an interaction technique that can potentially deliver more natural interaction. A fast gesture recognition scheme is proposed as an interface for the human-machine interaction (HMI) of systems. The system presents some low-complexity algorithms and gestures to…
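The abstract names three targets for recognized gestures: mouse events, a media player, and an image viewer. A minimal sketch of the dispatch step, assuming gesture labels are already produced by the recognizer (the gesture names and action bindings below are illustrative, not the paper's actual set):

```python
# Hypothetical sketch: dispatching recognized gesture labels to HMI actions.
# Gesture names and bound actions are made up for illustration.

def make_dispatcher(bindings):
    """Return a function mapping a gesture label to its action's result."""
    def dispatch(gesture):
        action = bindings.get(gesture)
        if action is None:
            return "ignored"   # unrecognized gesture: do nothing
        return action()
    return dispatch

# Example bindings for the three targets the abstract mentions:
# mouse events, a media player, and an image viewer.
bindings = {
    "open_palm":  lambda: "mouse: left click",
    "fist":       lambda: "media player: pause",
    "swipe_left": lambda: "image viewer: next image",
}

dispatch = make_dispatcher(bindings)
print(dispatch("fist"))        # media player: pause
print(dispatch("thumbs_up"))   # ignored
```

A table-driven dispatcher like this keeps the recognizer decoupled from the applications it controls, so new gestures or targets only add entries, not code paths.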

Figures from this paper

Hand recognition using depth cameras
TLDR
This paper is a survey of the literature on hand position and gesture recognition using depth cameras; it is noticeable that the reviewed papers focus on the recognition of one-handed gestures and their classification within a finite set of gestures.
Hand Recognition Using Depth Cameras
TLDR
A survey of the literature on hand position and gesture recognition with the use of depth cameras and the lack of a standardized set of tests and the diversity of hardware leaves unclear the extent to which these would prove effective with low-cost hardware.
Review on recent Computer Vision Methods for Human Action Recognition
TLDR
This work focuses on recent advances in machine-learning-assisted action recognition, and aims to address human activity recognition by combining multiple approaches and utilizing a new RNN structure for activities.
Human-Machine Interfaces for Robotic System Control
The paper reveals the design and development, from concept to experimental model, of the hardware and software of a rehabilitation robotic system, as well as the development of the…
A View-invariant Skeleton Map with 3DCNN for Action Recognition
TLDR
This paper encode the spatial-temporal information of skeleton joint points sequences into a view-invariant skeleton map (VISM), and employ a 3D convolutional neural network (3DCNN) to exploit features from VISM for 3D action recognition.
HMI based multi-role Mechatronic Pick and Place Structure
TLDR
This paper proposes a complete, low-cost pick-and-place system able to reconfigure the entire mechatronic system through an HMI interface developed by the authors, with minimal intervention to adapt the z axis to the specific application.
Energy-Guided Temporal Segmentation Network for Multimodal Human Action Recognition
TLDR
A novel energy-guided temporal segmentation method is proposed here, and a multimodal fusion strategy is employed with the proposed segmentation method to construct an energy-guided temporal segmentation network (EGTSN).
Learning to recognise 3D human action from a new skeleton-based representation using deep convolutional neural networks
TLDR
A new skeleton-based representation for 3D action recognition in videos that outperforms previous state-of-the-art approaches while requiring less computation for training and prediction is introduced.
...
...

References

SHOWING 1-10 OF 20 REFERENCES
A Research Study of Hand Gesture Recognition Technologies and Applications for Human Vehicle Interaction
This paper describes the primary and secondary driving task together with Human Machine Interface (HMI) trends and issues which are driving automotive user interface designers to consider hand…
Color-based hands tracking system for sign language recognition
  • K. Imagawa, Shan Lu, S. Igi
  • Computer Science
    Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition
  • 1998
TLDR
A real-time system which tracks the uncovered/unmarked hands of a person performing sign language using a Kalman filter; results indicate that the system is capable of tracking hands even while they are overlapping the face.
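The cited tracker relies on a Kalman filter's predict/update cycle. A minimal scalar sketch of that cycle, assuming a random-walk motion model and made-up noise settings (the actual system tracks 2-D hand positions with richer dynamics):

```python
# Minimal scalar Kalman filter sketch (random-walk motion model).
# Noise settings q (process) and r (measurement) are illustrative.

def kalman_track(measurements, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Filter a sequence of noisy 1-D position measurements."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: state unchanged, uncertainty grows by process noise q.
        p += q
        # Update: blend prediction and measurement by the Kalman gain.
        k = p / (p + r)
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

# Noisy readings of a hand held near position 10.0:
est = kalman_track([9.8, 10.3, 9.9, 10.1, 10.2])
```

The gain `k` shrinks as the state estimate becomes more certain, which is what lets such a filter coast through brief occlusions (e.g. a hand crossing the face) on its prediction alone.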
Simultaneous Tracking and Action Recognition using the PCA-HOG Descriptor
  • Wei-Lwun Lu, J. Little
  • Computer Science
    The 3rd Canadian Conference on Computer and Robot Vision (CRV'06)
  • 2006
TLDR
This paper proposes to represent the athletes by the PCA-HOG descriptor, which is computed by first transforming the athletes into grids of the Histogram of Oriented Gradients (HOG) descriptor and then projecting it onto a linear subspace by Principal Component Analysis (PCA).
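The second stage of that pipeline, the PCA projection, can be sketched as follows, assuming the HOG descriptors are already extracted (the random vectors below are stand-ins for real descriptors, and the dimensions are illustrative):

```python
import numpy as np

# Hedged sketch of the PCA step of a PCA-HOG pipeline: learn a linear
# subspace from descriptor vectors and project new ones into it.
# HOG extraction itself is omitted; inputs are random stand-ins.

def fit_pca(X, n_components):
    """Return (mean, components) learned from the rows of X via SVD."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]          # principal axes, one per row

def project(x, mean, components):
    """Map one descriptor into the learned low-dimensional subspace."""
    return components @ (x - mean)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 36))               # 50 fake 36-D HOG descriptors
mean, comps = fit_pca(X, n_components=8)
z = project(X[0], mean, comps)              # 8-D compressed descriptor
```

Projecting high-dimensional HOG grids onto a few principal axes is what makes per-frame matching cheap enough for simultaneous tracking and recognition.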
Human action detection via boosted local motion histograms
TLDR
This paper presents a novel learning method for human action detection in video sequences and shows how the proposed method enables learning efficient action detectors, and validates them on publicly available datasets.
Tracking and Recognizing Actions at a Distance
TLDR
This paper presents a template-based algorithm to track and recognize athletes' actions in an integrated system using only visual information, and proposes to represent the athletes by grids of the Histogram of Oriented Gradients (HOG) descriptor.
Learning human actions via information maximization
  • Jingen Liu, M. Shah
  • Computer Science
    2008 IEEE Conference on Computer Vision and Pattern Recognition
  • 2008
TLDR
This paper presents a novel approach for automatically learning a compact and yet discriminative appearance-based human action model, and is the first to try the bag of video-words related approach on the multiview dataset.
Actions As Objects : A Novel Action Representation
TLDR
This paper proposes to model an action based on both the shape and the motion of the object performing the action, and generates STV by solving the point correspondence problem between consecutive frames using a two-step graph theoretical approach.
A 3-dimensional sift descriptor and its application to action recognition
TLDR
This paper uses a bag of words approach to represent videos, and presents a method to discover relationships between spatio-temporal words in order to better describe the video data.
A general approach to connected-component labeling for arbitrary image representations
TLDR
An improved and general approach to connected-component labeling of images is presented, and it is shown that when the algorithm is specialized to a pixel array scanned in raster order, the total processing time is linear in the number of pixels.
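The classic pixel-array specialization that the cited paper generalizes is the two-pass algorithm with an equivalence (union-find) structure. A minimal sketch for 4-connectivity on a binary grid (a simplified illustration, not the paper's general algorithm):

```python
# Sketch of two-pass connected-component labeling with union-find
# (4-connectivity) on a binary grid scanned in raster order.

def label_components(grid):
    """Label 4-connected foreground (1) pixels; background stays 0."""
    h, w = len(grid), len(grid[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    labels = [[0] * w for _ in range(h)]
    nxt = 1
    # Pass 1: assign provisional labels, record label equivalences.
    for y in range(h):
        for x in range(w):
            if not grid[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up == 0 and left == 0:
                parent[nxt] = nxt
                labels[y][x] = nxt
                nxt += 1
            else:
                labels[y][x] = up or left
                if up and left and up != left:
                    union(up, left)
    # Pass 2: replace provisional labels with compact representatives.
    remap = {}
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                root = find(labels[y][x])
                remap.setdefault(root, len(remap) + 1)
                labels[y][x] = remap[root]
    return labels

grid = [[1, 1, 0, 1],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
out = label_components(grid)   # two components
```

Each pixel is visited a constant number of times, which is why the raster-order specialization runs in time linear in the number of pixels, as the TLDR above notes.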
...
...