
@article{Ghazaei2017DeepLA,
  title={Deep learning-based artificial vision for grasp classification in myoelectric hands},
  author={Ghazal Ghazaei and Ali Alameer and Patrick Degenaar and Graham Morgan and Kianoush Nazarpour},
  journal={Journal of Neural Engineering},
  year={2017},
  volume={14}
}
Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual… 

Computer Vision-Based Grasp Pattern Recognition With Application to Myoelectric Control of Dexterous Hand Prosthesis

TLDR
A novel computer vision-based classification method that sorts objects into different grasp patterns is proposed; it can be applied in the autonomous control of a multi-fingered prosthetic hand, helping users rapidly complete “reach-and-pick-up” tasks on various daily objects with low demand on myoelectric control.

Improving Robotic Hand Prosthesis Control With Eye Tracking and Computer Vision: A Multimodal Approach Based on the Visuomotor Behavior of Grasping

TLDR
It is suggested that the robustness of hand prosthesis control based on grasp-type recognition can be significantly improved with the inclusion of visual information extracted by leveraging natural eye-hand coordination behavior and without placing additional cognitive burden on the user.

Grasp Type Estimation for Myoelectric Prostheses using Point Cloud Feature Learning

TLDR
This work augments prosthetic hands with an off-the-shelf depth sensor so the prosthesis can see the object's depth, record a single-view (2.5-D) snapshot, and estimate an appropriate grasp type, using PointNet, a deep network architecture that operates on 3D point clouds.

An Electro-Oculogram Based Vision System for Grasp Assistive Devices—A Proof of Concept Study

TLDR
Integrating the proposed system with the brain-controlled grasp-assistive device and increasing the number of grasps can offer ALS patients more natural manoeuvring during grasping.

Deep Learning Based Object Shape Identification from EOG Controlled Vision System

TLDR
A single webcam placed in a cap visor, synchronised to move in the same direction as the user's eye-gaze angle, recognised object shapes using a convolutional neural network; accuracy was around 93.0% on real-time objects and object views not included in the training set.

From hand-perspective visual information to grasp type probabilities: deep learning via ranking labels

TLDR
This work builds a novel probabilistic classifier based on the Plackett-Luce model to predict the probability distribution over grasps, and exploits the statistical model over label rankings to solve the permutation-domain problem via maximum likelihood estimation.

A low-cost Raspberry PI-based vision system for upper-limb prosthetics

  • R. Roy, K. Nazarpour
  • 2020 27th IEEE International Conference on Electronics, Circuits and Systems (ICECS)
  • 2020
TLDR
A vision system built with a Raspberry Pi and the Pi camera demonstrates the efficiency of deep learning models in object identification without any GPU, and was tested in a cluttered environment recognising four different grasp types.

Video-based Prediction of Hand-grasp Preshaping with Application to Prosthesis Control

TLDR
This paper investigates the use of a portable, forearm-mounted, video-based technique for the prediction of hand-grasp preshaping for arbitrary objects, and selects a model that shows promising results for realistic, intuitive, real-world use.

Vision-Based Assistance for Myoelectric Hand Control

TLDR
The proposed method determines the target object by estimating the positional relationship between the artificial hand and the objects, as well as the motion of the hand, and can accurately estimate the target object in accordance with the user’s intention.

Visual Cues to Improve Myoelectric Control of Upper Limb Prostheses

TLDR
The method presented automatically detects stable gaze fixations and uses the visual characteristics of the fixated objects to improve the performance of a multimodal grasp classifier; it identifies online the onset of a prehension and the corresponding gaze fixations and combines them with traditional surface electromyography in the classification stage.
...

References

Showing 1–10 of 72 references

An exploratory study on the use of convolutional neural networks for object grasp classification

TLDR
The preliminary, yet promising, results suggest that the additional machine vision system can give prosthetic hands the ability to detect objects and propose an appropriate grasp to the user.

Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

TLDR
The original outcome of this research is a novel controller empowered by vision and reasoning, capable of high-level analysis and autonomous decision making, selecting the grasp type and size.

Abstract and Proportional Myoelectric Control for Multi-Fingered Hand Prostheses

TLDR
It is concluded that findings on myoelectric control principles, studied in abstract, virtual tasks can be transferred to real-life prosthetic applications.

Electromyography data for non-invasive naturally-controlled robotic hand prostheses

TLDR
This work aims to close this gap by allowing worldwide research groups to develop and test movement recognition and force control algorithms on a benchmark scientific database, with the final goal of developing non-invasive, naturally controlled, robotic hand prostheses.

Surface EMG in advanced hand prosthetics

TLDR
It is shown that machine learning, together with a simple downsampling algorithm, can be effectively used to control on-line, in real time, finger position as well as finger force of a highly dexterous robotic hand.

Stereovision and augmented reality for closed-loop control of grasping in hand prostheses.

TLDR
A controller based on stereovision automatically selects grasp type and size, while augmented reality (AR) provides artificial proprioceptive feedback; the interface is effective and applicable, with small alterations, to many advanced prosthetic and orthotic/therapeutic rehabilitation devices.

Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis.

TLDR
A novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand is developed.

Classification of Finger Movements for the Dexterous Hand Prosthesis Control With Surface Electromyography

TLDR
Assessment of the use of multichannel surface electromyography (sEMG) to classify individual and combined finger movements for dexterous prosthetic control shows that finger and thumb movements can be decoded with high accuracy at latencies as short as 200 ms.

Transradial prosthesis: artificial vision for control of prehension.

TLDR
The presented system is only one component of the hand controller, related strictly to the prehension phase of grasping; it was shown to be very robust with respect to estimation errors, with correct control commands generated in most of the tested cases.

Real-time classification of multi-modal sensory data for prosthetic hand control

TLDR
The results suggest that using extra modalities along with sEMG may be more beneficial than adding further sEMG sensors, even though fewer than half as many sensors were used in the former case.
...