Gesture Recognition in Robotic Surgery: A Review

@article{vanAmsterdam2021GestureRI,
  title={Gesture Recognition in Robotic Surgery: A Review},
  author={Beatrice van Amsterdam and Matthew J. Clarkson and Danail Stoyanov},
  journal={IEEE Transactions on Biomedical Engineering},
  year={2021},
  volume={68},
  pages={2021-2035}
}
Objective: Surgical activity recognition is a fundamental step in computer-assisted interventions. This paper reviews the state of the art in methods for automatic recognition of fine-grained gestures in robotic surgery, focusing on recent data-driven approaches, and outlines open questions and future research directions. Methods: An article search was performed on 5 bibliographic databases with the following search terms: robotic, robot-assisted, JIGSAWS, surgery, surgical, gesture, fine…

Citations

Neural Network Based Lidar Gesture Recognition for Realtime Robot Teleoperation
Presents a novel low-complexity lidar gesture recognition system for mobile robot control that is robust to gesture variation and uses data augmentation and automated labeling techniques, requiring minimal data collection and avoiding the need for manual labeling.
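The system's actual augmentation pipeline is not described in this summary; as a rough illustration of the kind of augmentation commonly applied to lidar point clouds before training, a minimal NumPy sketch (the function name and parameters are hypothetical, not from the paper):

import numpy as np

def augment_point_cloud(points, jitter_std=0.01, rng=None):
    """Apply a random yaw rotation plus Gaussian jitter to an (N, 3)
    lidar point cloud, so the recognizer sees varied gesture poses."""
    rng = rng if rng is not None else np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)        # random rotation about the vertical axis
    c, s = np.cos(theta), np.sin(theta)
    rotation = np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])
    jitter = rng.normal(0.0, jitter_std, size=points.shape)
    return points @ rotation.T + jitter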
Rethinking Autonomous Surgery: Focusing on Enhancement over Autonomy.
Provides an overview of the engineering requirements for automating control systems, the technical challenges in automated robotic surgery, and sensing and modeling techniques that capture real-time human behaviors for integration into the robotic control loop for enhanced shared or collaborative control.
SAGES consensus recommendations on an annotation framework for surgical video
While additional work remains to achieve accepted standards for video annotation in surgery, the consensus recommendations on a general framework for annotation presented here lay the foundation for standardization.
A Multiple-Instance Learning Approach for the Assessment of Gallbladder Vascularity from Laparoscopic Images
Proposes a multiple-instance learning (MIL) technique for assessing gallbladder (GB) wall vascularity via computer-vision analysis of images from laparoscopic cholecystectomy (LC) operations, which avoids the time-consuming task of manually labelling the instances.
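The paper's model is not detailed in this summary; a minimal, generic max-pooling MIL classifier in PyTorch (all layer sizes hypothetical) illustrates the core idea that a bag of image patches is scored by its most telling instance, so no per-instance labels are needed:

import torch
import torch.nn as nn

class MaxPoolMIL(nn.Module):
    """Score each instance in a bag independently, then take the max
    as the bag-level prediction (a bag is positive if any instance is)."""
    def __init__(self, feature_dim=128):
        super().__init__()
        self.instance_scorer = nn.Sequential(
            nn.Linear(feature_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, bag):                  # bag: (num_instances, feature_dim)
        scores = self.instance_scorer(bag)   # one score per instance
        return torch.sigmoid(scores.max())   # bag-level probability

# Example: one bag of 20 patch features extracted from a single operation
bag_probability = MaxPoolMIL(feature_dim=128)(torch.randn(20, 128))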
A Digital Twin Approach for Contextual Assistance for Surgeons During Surgical Robotics Training
Proposes a Shared Control Parametrization Engine that retrieves procedural context information from a Digital Twin, augmenting the surgeon's performance through haptic assistance and accelerating novice surgeons' proficient use of the robotic system.
Simulation and Beyond – Principles of Effective Obstetric Training

References

Showing 1–10 of 96 references
Automatic Gesture Recognition in Robot-assisted Surgery with Reinforcement Learning and Tree Search
Presents a framework based on reinforcement learning and tree search for joint surgical gesture segmentation and classification that consistently outperforms existing methods on the suturing task of the JIGSAWS dataset in terms of accuracy, edit score, and F1 score.
A Dataset and Benchmarks for Segmentation and Recognition of Gestures in Robotic Surgery
Reports the first systematic and uniform evaluation of surgical activity recognition techniques on a public benchmark database created to support comparative research benchmarking.
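As context for how such benchmark data is typically consumed, a hedged loader sketch for a single trial, assuming the commonly described JIGSAWS layout of 76-column kinematics files and transcript lines of the form 'start_frame end_frame gesture_id'; both conventions should be verified against the dataset's own documentation:

import numpy as np

def load_jigsaws_trial(kinematics_path, transcript_path):
    """Load one trial: per-frame kinematics plus a gesture label per frame.

    Assumes each kinematics row holds 76 columns (4 manipulators x 19
    variables) and each transcript line reads
    '<start_frame> <end_frame> <gesture_id>', e.g. '80 339 G1'."""
    kinematics = np.loadtxt(kinematics_path)              # (num_frames, 76)
    labels = np.array(["unlabeled"] * len(kinematics), dtype=object)
    with open(transcript_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 3:
                continue                                  # skip blank lines
            start, end, gesture = parts
            labels[int(start) - 1:int(end)] = gesture     # frames assumed 1-indexed
    return kinematics, labels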
Using 3D Convolutional Neural Networks to Learn Spatiotemporal Features for Automatic Surgical Gesture Recognition in Video
Proposes using a 3D Convolutional Neural Network to learn spatiotemporal features from consecutive video frames, achieving high frame-wise surgical gesture recognition accuracies and outperforming comparable models that either extract only spatial features or model spatial and low-level temporal information separately.
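To make the idea concrete, a toy PyTorch sketch of a 3D CNN over short video clips; the layer sizes and depth are illustrative only, far smaller than a model one would actually train:

import torch
import torch.nn as nn

class Tiny3DConvNet(nn.Module):
    """Toy 3D CNN: stacked Conv3d layers pool over time and space,
    then a linear head predicts a gesture class for the clip."""
    def __init__(self, num_gestures=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),   # (B, 3, T, H, W) -> (B, 16, T, H, W)
            nn.ReLU(),
            nn.MaxPool3d(2),                              # halve time and space
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # global spatiotemporal pooling
        )
        self.head = nn.Linear(32, num_gestures)

    def forward(self, clip):                              # clip: (B, 3, T, H, W)
        return self.head(self.features(clip).flatten(1))

# A 16-frame RGB snippet at 64x64 resolution -> per-clip gesture logits
logits = Tiny3DConvNet()(torch.randn(1, 3, 16, 64, 64))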
Towards automatic skill evaluation: Detection and segmentation of robot-assisted surgical motions
Preliminary results suggest that gesture-specific features can be extracted from a labeled sequence of surgical gestures to provide highly accurate surgical skill evaluation.
Surgical Gesture Classification from Video Data
Shows that in a typical surgical training setup video data can be equally discriminative, and proposes and evaluates three approaches to surgical gesture classification from video.
Symmetric Dilated Convolution for Surgical Gesture Recognition
Proposes a novel temporal convolutional architecture that automatically detects and segments surgical gestures with their corresponding boundaries using only RGB videos; a symmetric dilation structure bridged by a self-attention module encodes and decodes the long-term temporal patterns and establishes the frame-to-frame relationship accordingly.
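This is not the paper's architecture, but a minimal PyTorch sketch of the dilated temporal convolution idea it builds on, where stacked dilations grow the receptive field while preserving the frame count (channel counts and the dilation schedule are hypothetical):

import torch
import torch.nn as nn

class DilatedTemporalBlock(nn.Module):
    """One dilated 1-D convolution block over per-frame features:
    growing dilation widens the temporal receptive field without pooling."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                    # x: (B, channels, T)
        return x + self.relu(self.conv(x))   # residual keeps frame count fixed

# Stack blocks with dilations 1, 2, 4, 8 over a sequence of frame features,
# then classify every frame into one of `num_gestures` classes.
channels, num_gestures = 64, 10
encoder = nn.Sequential(*[DilatedTemporalBlock(channels, d) for d in (1, 2, 4, 8)])
head = nn.Conv1d(channels, num_gestures, kernel_size=1)
frame_logits = head(encoder(torch.randn(1, channels, 200)))  # (1, 10, 200)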
Soft Boundary Approach for Unsupervised Gesture Segmentation in Robotic-Assisted Surgery
Develops a new segmentation algorithm, soft-boundary unsupervised gesture segmentation (Soft-UGS), that segments the temporal sequence of surgical gestures and models gradual transitions between them using fuzzy membership scores.
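Soft-UGS itself is not reproduced here; the toy sketch below only illustrates the notion of fuzzy membership scores, deriving soft cluster assignments for each kinematic frame from k-means distances (the cluster count and temperature are arbitrary choices):

import numpy as np
from sklearn.cluster import KMeans

def soft_memberships(frames, n_clusters=5, temperature=1.0):
    """Cluster per-frame kinematic features, then turn distances to each
    cluster centre into soft membership scores via a softmax, so frames
    near a boundary belong partially to both neighbouring gestures."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(frames)
    distances = km.transform(frames)                     # (T, n_clusters)
    logits = -distances / temperature
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    return weights / weights.sum(axis=1, keepdims=True)  # rows sum to 1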
Surgical Gesture Segmentation and Recognition
Proposes a framework for joint segmentation and recognition of surgical gestures from kinematic and video data using a combined Markov/semi-Markov conditional random field (MsM-CRF) model, showing that it improves over a Markov or semi-Markov CRF when using video data alone and gives results comparable to state-of-the-art methods on kinematic data alone.
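The segment-level (semi-Markov) potentials are beyond a short sketch, but the linear-chain backbone such models decode can be illustrated with plain Viterbi over frame and transition scores:

import numpy as np

def viterbi_decode(frame_scores, transition_scores):
    """Most likely label sequence for a linear-chain model:
    frame_scores[t, y] scores label y at frame t, and
    transition_scores[y, y'] scores moving from label y to y'."""
    T, num_labels = frame_scores.shape
    best = np.zeros((T, num_labels))            # best score ending in each label
    back = np.zeros((T, num_labels), dtype=int) # backpointers for recovery
    best[0] = frame_scores[0]
    for t in range(1, T):
        # rows: previous label, columns: current label
        scores = best[t - 1][:, None] + transition_scores + frame_scores[t]
        back[t] = scores.argmax(axis=0)
        best[t] = scores.max(axis=0)
    path = [int(best[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]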
Unsupervised Trajectory Segmentation for Surgical Gesture Recognition in Robotic Training
Proposes a new unsupervised algorithm that automatically segments kinematic data from robotic training sessions and accurately recognizes the gestures involved in the surgical training task.
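This is not the paper's algorithm; as a toy illustration of unsupervised kinematic segmentation, a velocity-threshold heuristic that cuts a tool trajectory at near-stationary transitions (the threshold and frame rate are placeholders):

import numpy as np

def segment_by_velocity(positions, dt=1.0 / 30.0, speed_threshold=0.01):
    """Crude unsupervised segmentation of an (T, 3) tool trajectory: cut
    the sequence wherever tool speed crosses a threshold, on the heuristic
    that gestures are separated by near-stationary transitions."""
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    moving = speed > speed_threshold
    boundaries = np.flatnonzero(moving[1:] != moving[:-1]) + 1
    return np.split(np.arange(len(positions)), boundaries)  # lists of frame indices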
Deep learning-based computer vision to recognize and classify suturing gestures in robot-assisted surgery
Demonstrates computer vision's ability to recognize features that not only identify the action of suturing but also distinguish between different classes of suturing gestures, showing the potential of deep learning computer vision for future automation of surgical skill assessment.