• Corpus ID: 232428255

Long-Term Temporally Consistent Unpaired Video Translation from Simulated Surgical 3D Data

Dominik Rivoir, Micha Pfeiffer, Reuben Docea, Fiona R. Kolbinger, Carina Riediger, Jürgen Weitz, Stefanie Speidel
Research in unpaired video translation has mainly focused on short-term temporal consistency by conditioning on neighboring frames. However, for transfer from simulated to photorealistic sequences, available information on the underlying geometry offers the potential to achieve global consistency across views. We propose a novel approach that combines unpaired image translation with neural rendering to transfer simulated to photorealistic surgical abdominal scenes. By introducing global…
Virtual Reality for Synergistic Surgical Training and Data Generation
  • A. Munawar, Zhaoshuo Li, +7 authors M. Unberath
  • Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization
  • 2021
Presents a cost-effective and synergistic framework, named Asynchronous Multibody Framework Plus (AMBF+), which generates data for downstream algorithm development while users practice their surgical skills, and shows how the generated data can be used for validating and training downstream computer vision algorithms.
Surgical Data Science - from Concepts toward Clinical Translation
Lena Maier-Hein, Matthias Eisenmann, Duygu Sarikaya, Keno März, Toby Collins, Anand Malpani, Johannes Fallert, Hubertus Feussner, Stamatia Giannarou, Pietro Mascagni, …


Preserving Semantic and Temporal Consistency for Unpaired Video-to-Video Translation
This paper proposes a new framework composed of carefully designed generators and discriminators, coupled with two core objective functions, a content-preserving loss and a temporal consistency loss, and demonstrates the superior performance of the proposed method against previous approaches.
World-Consistent Video-to-Video Synthesis
A novel vid2vid framework is introduced that efficiently and effectively utilizes all past generated frames during rendering, and a novel neural network architecture is proposed to take advantage of the information stored in the guidance images.
Improving Surgical Training Phantoms by Hyperrealism: Deep Unpaired Image-to-Image Translation from Real Surgeries
The overall approach is expected to change the future design of surgical training simulators, since the generated sequences clearly demonstrate the feasibility of enabling a considerably more realistic training experience for minimally invasive procedures.
Mocycle-GAN: Unpaired Video-to-Video Translation
A new Motion-guided Cycle GAN, dubbed Mocycle-GAN, that novelly integrates motion estimation into an unpaired video translator and capitalizes on three types of constraints: an adversarial constraint discriminating between synthetic and real frames, cycle consistency encouraging an inverse translation on both frames and motion, and motion translation validating the transfer of motion between consecutive frames.
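The cycle-consistency constraint mentioned above follows the standard CycleGAN recipe: translating A→B→A should recover the input. As a minimal, hypothetical illustration (not Mocycle-GAN's implementation), the term can be sketched in numpy with toy affine "generators" standing in for the learned networks:

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 cycle-consistency: translating A -> B -> A should recover the input."""
    return np.mean(np.abs(g_ba(g_ab(x)) - x))

# Toy affine "generators" (assumptions for illustration, not learned networks).
g_ab = lambda x: 2.0 * x + 1.0          # maps domain A to domain B
g_ba = lambda y: (y - 1.0) / 2.0        # exact inverse, so the loss vanishes

x = np.linspace(-1.0, 1.0, 5)
loss = cycle_consistency_loss(x, g_ab, g_ba)
```

In a real translator the two generators are neural networks and this term is added to the adversarial loss; here the second map is the exact inverse of the first, so the loss is zero by construction.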
Generating large labeled data sets for laparoscopic image processing tasks using unpaired image-to-image translation
This work extends an image-to-image translation method to generate a diverse multitude of realistic-looking synthetic images based on images from a simple laparoscopy simulation, and shows that this data set can be used to train models for the task of liver segmentation in laparoscopic images.
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
A new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs) is presented, which significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.
Towards realistic laparoscopic image generation using image-domain translation
Experimental results show that the proposed method for Minimally Invasive Surgery (MIS) image synthesis is actually able to translate MIS segmentations to realistic MIS images, which can in turn be used to augment existing data sets and help overcome the lack of useful images.
Endo-Sim2Real: Consistency learning-based domain adaptation for instrument segmentation
This work proposes a consistency-based framework for joint learning from simulated and real (unlabeled) endoscopic data to bridge the sim-to-real generalization gap, and shows that the proposed Endo-Sim2Real method for instrument segmentation improves segmentation in terms of both quality and quantity.
Geometric Image Synthesis
This work proposes a trainable, geometry-aware image generation method that leverages various types of scene information, including geometry and segmentation, to create realistic-looking natural images that match the desired scene structure.
Contrastive Learning for Unpaired Image-to-Image Translation
The framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time, and can be extended to the training setting where each "domain" is only a single image.
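For context, the patchwise contrastive objective behind this line of work is a variant of the InfoNCE loss: a query patch embedding is pulled toward its corresponding ("positive") patch and pushed away from other ("negative") patches. A minimal numpy sketch follows; the temperature value and toy vectors are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def info_nce(query, positive, negatives, tau=0.07):
    """InfoNCE: cross-entropy over cosine similarities, with the positive
    patch treated as the correct class among the negatives."""
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    q, p, n = unit(query), unit(positive), unit(negatives)
    logits = np.concatenate([[q @ p], n @ q]) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # positive is class 0

# Toy embeddings: the query matches its positive exactly, so the loss is tiny.
q = np.array([1.0, 0.0])
negatives = np.array([[0.0, 1.0], [-1.0, 0.0]])
loss = info_nce(q, q, negatives)
```

In the patchwise setting, the query and positive come from corresponding spatial locations of the input and translated images, while negatives are sampled from other locations of the same image.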