• Corpus ID: 235185403

Detection and Segmentation of Custom Objects using High Distraction Photorealistic Synthetic Data

@article{Ron2020DetectionAS,
  title={Detection and Segmentation of Custom Objects using High Distraction Photorealistic Synthetic Data},
  author={Roey Ron and Gil Elbaz},
  journal={arXiv: Computer Vision and Pattern Recognition},
  year={2020}
}
  • Roey Ron, Gil Elbaz
  • Published 28 July 2020
  • Computer Science
  • arXiv: Computer Vision and Pattern Recognition
We present a straightforward and useful methodology for performing instance segmentation using synthetic data. We applied this methodology to a basic case and derived insights through quantitative analysis. We created a new public dataset, the Expo Markers Dataset, intended for detection and segmentation tasks. This dataset contains 5,000 synthetic photorealistic images with their corresponding pixel-perfect segmentation ground truth. The goal is to achieve high performance on manually-gathered and… 
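As a hypothetical illustration (not taken from the paper), datasets of this kind commonly ship their pixel-perfect ground truth as COCO-style JSON annotations. The sketch below assumes that convention; the field names follow the COCO format, while the file names, category name, and geometry are invented for the example:

```python
from collections import defaultdict

# A minimal in-memory COCO-style annotation structure (hypothetical
# entries; the real dataset's categories and geometry will differ).
coco = {
    "images": [
        {"id": 1, "file_name": "synthetic_0001.png", "width": 640, "height": 480},
    ],
    "categories": [
        {"id": 1, "name": "expo_marker"},
    ],
    "annotations": [
        # One instance: bounding box [x, y, w, h] plus a polygon mask
        # given as a flat list of x, y vertex coordinates.
        {"id": 10, "image_id": 1, "category_id": 1,
         "bbox": [100, 120, 40, 200],
         "segmentation": [[100, 120, 140, 120, 140, 320, 100, 320]],
         "area": 40 * 200},
    ],
}

def index_annotations(coco_dict):
    """Group instance annotations by image id for fast per-image lookup,
    the typical first step when feeding a detection/segmentation trainer."""
    by_image = defaultdict(list)
    for ann in coco_dict["annotations"]:
        by_image[ann["image_id"]].append(ann)
    return dict(by_image)

index = index_annotations(coco)
print(len(index[1]))  # number of instances annotated on image 1
```

In a real pipeline the `coco` dict would be loaded from the dataset's JSON file with `json.load`, and the per-image annotation lists would be rasterized into binary masks before training.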

