One-shot domain adaptation in video-based assessment of surgical skills

Erim Yanik, Steven Schwaitzberg, Gene Yang, Xavier Intes, Suvranu De
Deep learning (DL) has enabled automatic and objective assessment of surgical skills. However, DL models are data-hungry and restricted to their training domain, which prevents them from transferring to new tasks where data is limited. Domain adaptation is therefore crucial for implementing DL in real life. Here, we propose a meta-learning model, A-VBANet, that delivers domain-agnostic surgical skill classification via one-shot learning. We develop the A-VBANet on five laparoscopic and robotic…
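The abstract describes one-shot adaptation via meta-learning. As a minimal, hypothetical sketch (not the A-VBANet architecture), a MAML-style inner loop takes meta-learned weights and performs a single gradient step on one labeled example from the new surgical domain before classifying queries; the function names, weights, and features below are illustrative stand-ins:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adapt_one_shot(w, x_support, y_support, lr=0.5):
    """One inner-loop gradient step on a single labeled support example."""
    pred = sigmoid(w @ x_support)
    grad = (pred - y_support) * x_support  # gradient of the logistic loss
    return w - lr * grad

# Stand-in for weights produced by meta-training on the source domains.
w_meta = np.array([0.1, -0.2, 0.05])

# Single labeled video feature vector from the unseen surgical task.
x_support = np.array([1.0, 2.0, -1.0])
y_support = 1.0  # labeled as expert-level skill

w_adapted = adapt_one_shot(w_meta, x_support, y_support)

# Classify a query clip from the new domain after one-shot adaptation.
x_query = np.array([0.9, 1.8, -1.1])
p_expert = sigmoid(w_adapted @ x_query)
```

In full meta-learning, `w_meta` would itself be optimized so that this one-step adaptation generalizes across tasks; here only the adaptation step is shown.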


Video-based surgical skill assessment using 3D convolutional neural networks

The results demonstrate the feasibility of deep learning-based assessment of technical skill from surgical video, and the 3D ConvNet is able to learn meaningful patterns directly from the data, alleviating the need for manual feature engineering.

One to Many: Adaptive Instrument Segmentation via Meta Learning and Dynamic Online Adaptation in Robotic Surgical Video

MDAL, a meta-learning-based dynamic online adaptive learning scheme, uses a two-stage framework to rapidly adapt model parameters on the first frame and on partial subsequent frames while predicting results; it outperforms other state-of-the-art methods on two datasets, including a real-world RAS dataset.

Video-based formative and summative assessment of surgical tasks using deep learning

A deep learning (DL) model is proposed that, based on video feeds, can automatically and objectively provide both a high-stakes summative assessment of surgical skill execution and a low-stakes formative assessment to guide surgical skill acquisition.

Tool Detection and Operative Skill Assessment in Surgical Videos Using Region-Based Convolutional Neural Networks

This work introduces an approach to automatically assess surgeon performance by tracking and analyzing tool movements in surgical videos using region-based convolutional neural networks, and is the first not only to detect the presence of surgical tools but also to spatially localize them in real-world laparoscopic surgical videos.

SATR-DL: Improving Surgical Skill Assessment And Task Recognition In Robot-Assisted Surgery With Deep Neural Networks

  • Ziheng Wang, A. M. Fey
  • Computer Science
    2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
  • 2018
This work presents an efficient analytic framework with a parallel deep learning architecture, SATR-DL, to assess trainee expertise and recognize surgical training activity, and highlights the potential of SATR-DL to improve efficient data-driven assessment in intelligent robotic surgery.

Deep learning with convolutional neural network for objective skill evaluation in robot-assisted surgery

  • Ziheng Wang, A. M. Fey
  • Computer Science
    International Journal of Computer Assisted Radiology and Surgery
  • 2018
An analytical deep learning framework for skill assessment in surgical training that can successfully decode skill information from raw motion profiles via end-to-end learning and can reliably interpret skills within a 1–3 second window.

Evaluation of Deep Learning Models for Identifying Surgical Actions and Measuring Performance.

The proposed models and the accompanying results illustrate that deep learning can identify associations in surgical video clips, and they represent the first steps toward creating a feedback mechanism that would allow surgeons to learn from their experiences and refine their skills.

Evaluating robotic-assisted surgery training videos with multi-task convolutional neural networks

Evaluation of GEARS sub-categories with artificial neural networks is possible for novice and intermediate surgeons, but additional research is needed to understand if expert surgeons can be evaluated with a similar automated system.

Deep Neural Skill Assessment and Transfer: Application to Robotic Surgery Training

A novel deep-learning-based skill transfer scheme for robotic surgery training is proposed, consisting of a deep convolutional model, SkillNet, and a skill transfer algorithm; it can be used as a high-performance filter that makes minor corrections to the input trajectory and improves the skill level of the trainee's trajectory in practice.