Publications
What-If Motion Prediction for Autonomous Driving
TLDR
This work proposes a recurrent graph-based attentional approach to motion prediction with interpretable geometric and social relationships. The model supports injecting counterfactual geometric goals and social contexts, which can be used in the planning loop to reason about unobserved causes or unlikely futures that are directly relevant to the AV's intended route.
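The counterfactual conditioning idea can be made concrete with a small sketch. The snippet below is not the paper's architecture; it is a minimal, hypothetical PyTorch decoder (all layer names and sizes are invented) showing how a geometric goal, observed or counterfactual, can be injected as a query into a social-attention-plus-recurrent rollout.

```python
import torch
import torch.nn as nn

class CounterfactualGoalDecoder(nn.Module):
    """Toy attention-based trajectory decoder conditioned on an (optionally
    counterfactual) goal point. Illustrative sketch only."""

    def __init__(self, hidden_dim: int = 64, horizon: int = 30):
        super().__init__()
        self.goal_encoder = nn.Linear(2, hidden_dim)   # encode a 2-D goal waypoint
        self.social_attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 2 * horizon)  # (x, y) for each future step
        self.horizon = horizon

    def forward(self, focal_state, neighbor_states, goal_xy):
        # focal_state: (B, D) encoded history of the focal agent
        # neighbor_states: (B, N, D) encoded histories of surrounding agents
        # goal_xy: (B, 2) geometric goal; swap in a counterfactual waypoint to ask "what if?"
        query = (focal_state + self.goal_encoder(goal_xy)).unsqueeze(1)  # (B, 1, D)
        social_ctx, _ = self.social_attn(query, neighbor_states, neighbor_states)
        _, h = self.rnn(social_ctx)                                      # (1, B, D)
        return self.out(h[-1]).view(-1, self.horizon, 2)                 # (B, horizon, 2)
```

Re-running the same observed history with a different goal_xy, for example a hypothetical waypoint on the AV's intended route, yields a "what-if" rollout for the same scene.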
Learning to Move with Affordance Maps
TLDR
This paper designs an agent that, through active self-supervised experience gathering, learns to predict a spatial affordance map indicating which parts of a scene are navigable. It further shows that the learned affordance maps can augment traditional approaches to both exploration and navigation, yielding significant performance improvements.
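One way such an affordance map can augment a traditional pipeline is as an extra cost term for a classical planner. The function below is a hypothetical fusion rule (the name, additive penalty, and parameters are assumptions, not the paper's exact formulation).

```python
import numpy as np

def fuse_affordance_with_costmap(affordance_prob: np.ndarray,
                                 geometric_costmap: np.ndarray,
                                 hazard_penalty: float = 100.0) -> np.ndarray:
    """Combine a learned navigability map with a classical occupancy-based cost map.

    affordance_prob: (H, W) predicted probability that each cell is safely navigable.
    geometric_costmap: (H, W) cost from a traditional mapper (e.g., an occupancy grid).
    Returns a fused cost map that a standard planner (A*, RRT*) can consume.
    """
    hazard_cost = hazard_penalty * (1.0 - affordance_prob)  # low affordance -> high cost
    return geometric_costmap + hazard_cost
```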
Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting
TLDR
This paper introduces Argoverse 2 (AV2), a collection of three datasets for perception and forecasting research in the self-driving domain. The datasets support self-supervised learning and the emerging task of point cloud forecasting.
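To illustrate what point cloud forecasting evaluation can look like, the sketch below computes a symmetric Chamfer distance between a predicted and an observed LiDAR sweep. This is a common metric for the task; whether the AV2 benchmark uses exactly this formulation is an assumption here.

```python
import numpy as np

def chamfer_distance(pred_points: np.ndarray, gt_points: np.ndarray) -> float:
    """Symmetric Chamfer distance between predicted and ground-truth point clouds.

    pred_points: (N, 3) predicted xyz points; gt_points: (M, 3) observed xyz points.
    Brute-force pairwise distances; use a KD-tree for full-resolution sweeps.
    """
    d2 = np.sum((pred_points[:, None, :] - gt_points[None, :, :]) ** 2, axis=-1)  # (N, M)
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```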
Learning Representations for Safe Autonomous Movement
TLDR
This work demonstrates the benefits of the proposed framework on the challenging Argoverse vehicle motion forecasting dataset, outperforming previous bird's-eye-view (BEV) representation-based methods and setting a new benchmark for prediction quality.
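For context, the BEV representations referenced above are typically top-down rasters of the scene. The snippet below is a generic rasterizer for illustration only, not the paper's input pipeline; the grid size and resolution are arbitrary.

```python
import numpy as np

def rasterize_trajectory_bev(xy: np.ndarray, grid_size: int = 200,
                             resolution: float = 0.5) -> np.ndarray:
    """Rasterize an agent trajectory into a bird's-eye-view occupancy grid.

    xy: (T, 2) trajectory in meters, ego-centered at the grid midpoint.
    resolution: meters per cell. Returns a (grid_size, grid_size) binary raster.
    """
    bev = np.zeros((grid_size, grid_size), dtype=np.float32)
    cols = (xy[:, 0] / resolution + grid_size / 2).astype(int)
    rows = (xy[:, 1] / resolution + grid_size / 2).astype(int)
    valid = (rows >= 0) & (rows < grid_size) & (cols >= 0) & (cols < grid_size)
    bev[rows[valid], cols[valid]] = 1.0
    return bev
```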