SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving

@article{yang2020surfelgan,
  title={SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving},
  author={Zhenpei Yang and Yuning Chai and Dragomir Anguelov and Yin Zhou and Pei Sun and Dumitru Erhan and Sean Rafferty and Henrik Kretzschmar},
  journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}
Autonomous driving system development depends critically on the ability to replay complex and diverse traffic scenarios in simulation. In such scenarios, the ability to accurately simulate vehicle sensors such as cameras, lidar, or radar is hugely helpful. However, current sensor simulators leverage gaming engines such as Unreal or Unity, requiring the manual creation of environments, objects, and material properties. Such approaches have limited scalability and fail to produce realistic…