Corpus ID: 235790740

Unity Perception: Generate Synthetic Data for Computer Vision

Authors: S. Borkman, Adam Crespi, S. Dhakad, Sujoy Ganguly, Jonathan Hogins, Y. Jhang, Mohsen Kamalzadeh, Bowen Li, Steven Leal, Pete Parisi, Cesar Romero, Wesley Smith, Alex Thaman, Samuel Warren, Nupur Yadav
We introduce the Unity Perception package, which aims to simplify and accelerate the generation of synthetic datasets for computer vision tasks by offering an easy-to-use and highly customizable toolset. This open-source package extends the Unity Editor and engine components to generate perfectly annotated examples for several common computer vision tasks. Additionally, it offers an extensible Randomization framework that lets the user quickly construct and configure randomized simulation…
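To illustrate the idea behind such a randomization framework, here is a minimal Python sketch of per-frame parameter randomization for synthetic data generation. This is not the Unity Perception C# API; every class, function, and parameter name below is an assumption chosen for illustration.

```python
# Generic sketch of domain randomization for synthetic data generation.
# NOT the Unity Perception API; names here are illustrative assumptions.
import random

class Randomizer:
    """Draws a fresh value for one scene parameter on each iteration."""
    def __init__(self, name, low, high):
        self.name, self.low, self.high = name, low, high

    def sample(self):
        return random.uniform(self.low, self.high)

def generate_scenario(randomizers, iterations):
    """Yield one randomized parameter set per simulated frame."""
    for _ in range(iterations):
        yield {r.name: r.sample() for r in randomizers}

randomizers = [Randomizer("light_intensity", 0.5, 2.0),
               Randomizer("camera_height", 1.0, 3.0)]
frames = list(generate_scenario(randomizers, iterations=5))
print(len(frames))  # 5 randomized parameter sets
```

In the real package the sampled parameters would drive scene properties (lighting, object pose, camera placement) inside the engine before each annotated frame is captured.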
MINERVAS: Massive INterior EnviRonments VirtuAl Synthesis
MINERVAS, a Massive INterior EnviRonments VirtuAl Synthesis system, facilitates 3D scene modification and 2D image synthesis for various vision tasks; it empowers users to access commercial scene databases with millions of indoor scenes while protecting the copyright of core data assets.


The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes
This paper generates a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations, and conducts experiments with DCNNs showing that including SYNTHIA in the training stage significantly improves performance on the semantic segmentation task.
NViSII: A Scriptable Tool for Photorealistic Image Generation
This work demonstrates the use of data generated by path tracing for training an object detector and pose estimator, showing improved performance in sim-to-real transfer in situations that are difficult for traditional raster-based renderers.
An Annotation Saved is an Annotation Earned: Using Fully Synthetic Training for Object Instance Detection
This work proposes a novel method for creating purely synthetic training data for object detection: it takes a large dataset of 3D background models and densely renders them with full domain randomization, enabling the training of detectors that outperform models trained on real data on a challenging evaluation dataset.
AI2-THOR: An Interactive 3D Environment for Visual AI
AI2-THOR consists of near photo-realistic 3D indoor scenes, where AI agents can navigate in the scenes and interact with objects to perform tasks and facilitate building visually intelligent models.
SSD: Single Shot MultiBox Detector
The approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location, which makes SSD easy to train and straightforward to integrate into systems that require a detection component.
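The default-box scheme described above can be sketched in a few lines of Python: one set of boxes per feature-map cell, varied over aspect ratios at a fixed scale. The specific scale and aspect-ratio values below are illustrative assumptions, not the paper's configuration.

```python
# Sketch of SSD-style "default boxes" for one feature map.
# Scale and aspect ratios are illustrative assumptions.
import itertools
import math

def default_boxes(fmap_size, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """Return (cx, cy, w, h) boxes normalized to [0, 1], one set per cell."""
    boxes = []
    for i, j in itertools.product(range(fmap_size), repeat=2):
        # Box centers sit at the middle of each feature-map cell.
        cx = (j + 0.5) / fmap_size
        cy = (i + 0.5) / fmap_size
        for ar in aspect_ratios:
            # Width/height trade off around the scale via sqrt(aspect ratio).
            boxes.append((cx, cy, scale * math.sqrt(ar), scale / math.sqrt(ar)))
    return boxes

boxes = default_boxes(fmap_size=4, scale=0.2)
print(len(boxes))  # 4*4 cells * 3 aspect ratios = 48 boxes
```

In the full detector, one such grid is generated per feature map, with smaller scales on high-resolution maps and larger scales on coarse ones, and the network predicts class scores plus offsets for every default box.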
Microsoft COCO: Common Objects in Context
We present a new dataset with the goal of advancing the state of the art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding.
Domain Randomization and Generative Models for Robotic Grasping
This work presents a novel data generation pipeline for training a deep neural network to perform grasp planning; it applies domain randomization to object synthesis and achieves a >90% success rate on previously unseen realistic objects at test time in simulation, despite having been trained only on random objects.
iGibson, a Simulation Environment for Interactive Tasks in Large Realistic Scenes
It is shown that the full interactivity of the scenes enables agents to learn useful visual representations that accelerate the training of downstream manipulation tasks and that the human-iGibson interface and integrated motion planners facilitate efficient imitation learning of simple human demonstrated behaviors.
The Pascal Visual Object Classes (VOC) Challenge
This paper reviews the state of the art in evaluated methods for both classification and detection, examining whether the methods are statistically different, what they are learning from the images, and what they find easy or confusing.
YOLACT: Real-Time Instance Segmentation
We present a simple, fully-convolutional model for real-time instance segmentation that achieves 29.8 mAP on MS COCO at 33.5 fps evaluated on a single Titan Xp, which is significantly faster than any previous competitive approach.