SceneGen: Learning to Generate Realistic Traffic Scenes

@article{Tan2021SceneGenLT,
  title={SceneGen: Learning to Generate Realistic Traffic Scenes},
  author={Shuhan Tan and K. Wong and Shenlong Wang and Sivabalan Manivasagam and Mengye Ren and Raquel Urtasun},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={892-901}
}
  • Published 16 January 2021
We consider the problem of generating realistic traffic scenes automatically. Existing methods typically insert actors into the scene according to a set of hand-crafted heuristics and are limited in their ability to model the true complexity and diversity of real traffic scenes, thus inducing a content gap between synthesized and real traffic scenes. As a result, existing simulators lack the fidelity necessary to train and test self-driving vehicles. To address this limitation, we…
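The abstract is truncated above, but it sets up SceneGen's core idea: replace hand-crafted insertion heuristics with a learned autoregressive model that places actors one at a time, each conditioned on the map and the actors placed so far. The following sketch illustrates that autoregressive factorization, p(scene | map) = prod_i p(actor_i | map, actor_1..i-1), in plain Python; every class, field, and distribution in it is a stand-in invented for illustration, not SceneGen's actual interface.

# Illustrative sketch only: an autoregressive factorization over actors.
# All names and distributions here are hypothetical stand-ins; a real
# model would learn the conditionals from data.
from dataclasses import dataclass
from typing import List, Optional
import random

@dataclass
class Actor:
    cls: str        # e.g. "vehicle" or "pedestrian"
    x: float        # position in the map frame (m)
    y: float
    heading: float  # radians
    speed: float    # m/s

def sample_next_actor(map_features, placed: List[Actor]) -> Optional[Actor]:
    """Stand-in for a learned conditional p(actor_i | map, actors_<i).
    A trained model would condition on the map and the already-placed
    actors; here we draw from fixed ranges and stop with probability 0.2
    to keep the sketch self-contained."""
    if random.random() < 0.2:  # a learned model would emit a stop token
        return None
    return Actor(
        cls=random.choice(["vehicle", "pedestrian"]),
        x=random.uniform(-50.0, 50.0),
        y=random.uniform(-50.0, 50.0),
        heading=random.uniform(-3.14159, 3.14159),
        speed=random.uniform(0.0, 15.0),
    )

def generate_scene(map_features) -> List[Actor]:
    """Sample actors one at a time, each conditioned on its predecessors."""
    actors: List[Actor] = []
    while (nxt := sample_next_actor(map_features, actors)) is not None:
        actors.append(nxt)
    return actors

print(generate_scene(map_features=None))

Because each actor is sampled conditioned on everything placed before it, a model of this form can in principle capture interactions (e.g. cars queueing in a lane) that independent heuristic insertion cannot.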
Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation
TLDR
This work introduces both a principled way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator, and proposes ways to minimize the reality gap.
Semantically Controllable Scene Generation with Guidance of Explicit Knowledge
TLDR
A novel method is introduced that incorporates domain knowledge explicitly into the generation process to achieve semantically controllable scene generation, by imposing semantic rules on the properties of nodes and edges in a tree structure.
Self-Supervised Real-to-Sim Scene Generation
TLDR
Sim2SG is a self-supervised automatic scene generation technique for matching the distribution of real data; it does not require supervision from the real-world dataset, making it applicable in situations where such annotations are difficult to obtain.
Semantically Adversarial Driving Scenario Generation with Explicit Knowledge Integration
TLDR
A method is presented that incorporates domain knowledge explicitly into the generation process to achieve Semantically Adversarial Generation (SAG), together with a tree-structured variational auto-encoder that learns a hierarchical scene representation.
CausalAF: Causal Autoregressive Flow for Goal-Directed Safety-Critical Scenes Generation
TLDR
This paper integrates causality as a prior into the safety-critical scene generation process and proposes a flow-based generative framework – Causal Autoregressive Flow (CausalAF).
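The flow construction named in this TLDR can be pictured with a minimal affine autoregressive flow: dimension i is transformed as x_i = mu_i(x_<i) + exp(log_sigma_i(x_<i)) * z_i, so sampling follows a fixed ordering. The sketch below uses a toy fixed conditioner in place of a learned network and does not reproduce CausalAF's causal-graph ordering or masking; it is a generic illustration only.

# Minimal affine autoregressive flow sampler. The conditioner is a toy
# stand-in for a learned network; CausalAF's causal ordering and masks
# are not modeled here.
import numpy as np

def conditioner(prefix: np.ndarray):
    """Toy stand-in mapping x_<i to (mu_i, log_sigma_i)."""
    if prefix.size == 0:
        return 0.0, 0.0
    return float(prefix.mean()), float(0.1 * prefix.sum())

def sample(dim: int, rng: np.random.Generator) -> np.ndarray:
    """Ancestral sampling: x_i = mu_i(x_<i) + exp(log_sigma_i(x_<i)) * z_i."""
    z = rng.standard_normal(dim)
    x = np.zeros(dim)
    for i in range(dim):
        mu, log_sigma = conditioner(x[:i])
        x[i] = mu + np.exp(log_sigma) * z[i]
    return x

print(sample(4, np.random.default_rng(0)))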
A Survey on Safety-Critical Driving Scenario Generation - A Methodological Perspective
TLDR
This survey focuses on the algorithms of safety-critical scenario generation in autonomous driving and provides a comprehensive taxonomy of existing algorithms by dividing them into three categories: data-driven generation, adversarial generation, and knowledge-based generation.
BrandGAN: Unsupervised Structural Image Correction
TLDR
This work proposes a novel framework, called BrandGAN, that tackles image correction for hand-drawn images by leveraging StyleGAN’s projection and encoding-vector feature manipulation, and proposes a novel GAN indexing technique, called GANdex, capable of finding encodings of novel images derived from the original dataset that share visual similarities with the input image.
A Survey on Safety-Critical Scenario Generation for Autonomous Driving – A Methodological Perspective
TLDR
A comprehensive taxonomy of existing algorithms for safety-critical scenario generation is provided, dividing them into three categories (data-driven generation, adversarial generation, and knowledge-based generation) and extending the discussion to five main challenges of current works (fidelity, efficiency, diversity, transferability, and controllability) and to the research opportunities these challenges open up.
A Survey on Safety-critical Scenario Generation from Methodological Perspective
TLDR
This survey provides a comprehensive taxonomy of existing algorithms of safety-critical scenario generation by dividing them into three categories: data-driven generation, adversarial generation, and knowledge-based generation and discusses useful tools for scenario generation, including simulation platforms and packages.
Trust, but Verify: Cross-Modality Fusion for HD Map Change Detection
TLDR
Perhaps surprisingly, it is shown that learning-based formulations for solving the high-definition map change detection problem in the bird’s eye view and ego-view can generalize to real-world distributions.
...

References

SHOWING 1-10 OF 66 REFERENCES
Meta-Sim2: Unsupervised Learning of Scene Structure for Synthetic Data Generation
TLDR
Whereas Meta-Sim aimed at automatically tuning scene parameters against a target collection of real images in an unsupervised way, Meta-Sim2 also learns the scene structure; the model is trained with reinforcement learning, together with a feature-space divergence between the synthesized and target images that is key to successful training.
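The "feature-space divergence" mentioned above can be made concrete with a standard maximum mean discrepancy (MMD) between synthetic and real feature sets; whether this matches Meta-Sim2's exact divergence is an assumption, so treat the sketch as a generic example of distribution matching in a feature space.

# Generic feature-space divergence: biased estimate of squared MMD with
# an RBF kernel. The feature arrays below are random stand-ins for
# features extracted from synthesized and real images.
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    # a: (n, d), b: (m, d) -> (n, m) Gram matrix of RBF similarities.
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(syn: np.ndarray, real: np.ndarray, sigma: float = 1.0) -> float:
    """Squared MMD: E[k(s,s')] + E[k(r,r')] - 2 E[k(s,r)]."""
    return (rbf_kernel(syn, syn, sigma).mean()
            + rbf_kernel(real, real, sigma).mean()
            - 2.0 * rbf_kernel(syn, real, sigma).mean())

rng = np.random.default_rng(0)
syn = rng.normal(0.0, 1.0, size=(128, 16))   # stand-in synthetic features
real = rng.normal(0.5, 1.0, size=(128, 16))  # stand-in real features
print(mmd2(syn, real))

Minimizing such a divergence pushes the distribution of synthesized features toward the real ones without requiring paired labels.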
A generative model for 3D urban scene understanding from movable platforms
TLDR
A principled generative model of 3D urban scenes is proposed that takes into account dependencies between static and dynamic features, along with a reversible-jump MCMC scheme that is able to infer the geometric and topological properties of the scene layout and the semantic activities occurring in the scene.
LiDARsim: Realistic LiDAR Simulation by Leveraging the Real World
TLDR
This work develops a novel simulator that captures both the power of physics-based and learning-based simulation, and showcases LiDARsim’s usefulness for testing perception algorithms on long-tail events and for end-to-end closed-loop evaluation on safety-critical scenarios.
Meta-Sim: Learning to Generate Synthetic Datasets
TLDR
Meta-Sim is proposed, which learns a generative model of synthetic scenes and obtains images, together with their corresponding ground truth, via a graphics engine; it can greatly improve content-generation quality over a human-engineered probabilistic scene grammar.
Generation of Scenes in Intersections for the Validation of Highly Automated Driving Functions
TLDR
A statistical approach to generating traffic scenes at intersections is presented, using a generic model to represent the scenes; Bayesian networks are used to fit the model to a publicly accessible dataset and to infer traffic scenes from it.
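To make the Bayesian-network idea concrete, the sketch below performs ancestral sampling (parents before children) from a toy network over a few intersection-scene variables; the variables and conditional probabilities are invented for illustration and are not taken from the paper, which fits its network to a real dataset.

# Toy Bayesian network over intersection-scene variables, sampled by
# ancestral sampling. Structure and probabilities are invented.
import random

def sample_scene() -> dict:
    # Root variable: type of intersection.
    kind = random.choices(["T-junction", "four-way"], weights=[0.4, 0.6])[0]
    # Child: vehicle count, conditioned on the intersection type.
    n_vehicles = random.choices(
        [0, 1, 2, 3],
        weights=[0.1, 0.3, 0.4, 0.2] if kind == "four-way"
        else [0.2, 0.4, 0.3, 0.1],
    )[0]
    # Child: a turning maneuver can only occur if vehicles are present.
    turning = n_vehicles > 0 and random.random() < 0.5
    return {"kind": kind, "n_vehicles": n_vehicles, "turning": turning}

print([sample_scene() for _ in range(3)])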
Structured Domain Randomization: Bridging the Reality Gap by Context-Aware Synthetic Data
TLDR
The power of SDR is demonstrated on the problem of 2D bounding-box car detection, achieving competitive results on real data after training only on synthetic data and outperforming both other approaches to generating synthetic data and real data collected in a different domain.
Understanding High-Level Semantics by Modeling Traffic Patterns
TLDR
A generative model of 3D urban scenes is proposed which is able to reason not only about the geometry and objects present in the scene, but also about high-level semantics in the form of traffic patterns.
Deep convolutional priors for indoor scene synthesis
TLDR
This work presents a convolutional neural network based approach for indoor scene synthesis that generates scenes that are preferred over the baselines and, in some cases, are as preferred as human-created scenes.
Augmented LiDAR Simulator for Autonomous Driving
TLDR
This letter proposes a novel LiDAR simulator that augments real point clouds with synthetic obstacles (e.g., vehicles, pedestrians, and other movable objects), and describes an obstacle-placement strategy that is critical for the performance gains.
ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst
TLDR
The ChauffeurNet model can handle complex situations in simulation; perturbations of the logged driving data provide an important signal for the additional training losses and lead to robustness of the learned model.
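The "perturbations" referenced in this TLDR are synthesized deviations from logged driving (e.g. nudging the ego trajectory off its recorded path) so that imitation learning also sees recovery cases. The sketch below shows one simple way to perturb a logged trajectory; the offset magnitude and the raised-cosine blending are assumptions for illustration, not ChauffeurNet's actual scheme.

# Illustrative trajectory perturbation for imitation learning: displace
# the midpoint of a logged (x, y) trajectory laterally and taper the
# offset to zero at both ends, yielding a variant the model must learn
# to recover from. All magnitudes are assumed.
import math
import random

def perturb(traj, max_offset=1.0):
    n = len(traj)
    offset = random.uniform(-max_offset, max_offset)
    out = []
    for i, (x, y) in enumerate(traj):
        # Raised-cosine weight: 0 at both endpoints, 1 at the midpoint.
        w = 0.5 * (1.0 - math.cos(2.0 * math.pi * i / (n - 1)))
        out.append((x, y + offset * w))
    return out

straight = [(float(i), 0.0) for i in range(10)]
print(perturb(straight))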
...