Corpus ID: 237431256

Fine-grained Hand Gesture Recognition in Multi-viewpoint Hand Hygiene

Huy Q. Vo, Tuong K. L. Do, Vi C. Pham, Duy Nguyen, An T. Duong, Quang D. Tran
This paper contributes a new high-quality dataset for hand gesture recognition in hand hygiene systems, named "MFH". Current datasets generally do not focus on (i) fine-grained actions and (ii) data mismatch between different viewpoints, both of which arise in realistic settings. To address these issues, the MFH dataset is proposed, containing a total of 731,147 samples captured by different camera views in 6 non-overlapping locations. Additionally, each sample belongs to…

HandSense: smart multimodal hand gesture recognition based on deep neural networks
HandSense is proposed, a new system for multi-modal HGR that combines RGB and depth cameras to improve fine-grained action descriptors while preserving the ability to perform general action recognition.
Hand hygiene monitoring based on segmentation of interacting hands with convolutional networks
An optical assistance system for monitoring the hygienic hand disinfection procedure, based on machine learning, together with a novel approach to automatically labeling recorded data using colored gloves and a color video stream registered to the depth stream.
FS-HGR: Few-Shot Learning for Hand Gesture Recognition via Electromyography
The main objective of this work is to design a modern DNN-based gesture detection model that relies on minimal training data while providing high accuracy, motivated by recent advances in deep neural networks and their widespread application in human-machine interfaces.
A Prototype-Based Generalized Zero-Shot Learning Framework for Hand Gesture Recognition
An end-to-end prototype-based GZSL framework for hand gesture recognition, consisting of a prototype-based detector that learns gesture representations and determines whether an input sample belongs to a seen or unseen category, and a zero-shot label predictor that takes the features of unseen classes as input and outputs predictions.
Automated Quality Assessment of Hand Washing Using Deep Learning
Preliminary results show that, using pre-trained neural network models such as MobileNetV2 and Xception for the task, it is possible to achieve >64% accuracy in recognizing the different washing movements.
Synthetic Video Generation for Robust Hand Gesture Recognition in Augmented Reality Applications
The goal of this work is to introduce a framework capable of generating photo-realistic videos with labelled hand bounding boxes and fingertips, which can help in designing, training, and benchmarking hand-gesture recognition models for AR/VR applications.
LE-HGR: A Lightweight and Efficient RGB-Based Online Gesture Recognition Network for Embedded AR Devices
A lightweight and computationally efficient HGR framework, LE-HGR, is proposed to enable real-time gesture recognition on embedded devices with low computing power; it achieves state-of-the-art accuracy at significantly reduced computational cost.
A vision-based system for automatic hand washing quality assessment
A novel approach that uses a computer vision system to measure the user's hand motions and verify that hand-washing guidelines are followed, achieving accuracy close to that of human experts.
Automatic detection of hand hygiene using computer vision technology
Computer vision monitoring has the potential to provide a more complete appraisal of hand hygiene activity in hospitals than the current gold standard, given its ability to continuously cover a unit in space and time.
A Neural Network Based on SPD Manifold Learning for Skeleton-Based Hand Gesture Recognition
A new neural network based on SPD manifold learning for skeleton-based hand gesture recognition, validated on two challenging datasets and achieving state-of-the-art accuracy on both.