This paper reports on the problem of map-based visual localization in urban environments for autonomous vehicles. Self-driving cars have become a reality on roadways and are going to be a consumer product in the near future. One of the most significant roadblocks to autonomous vehicles is the prohibitive cost of the sensor suites necessary for …
This paper reports on a fast multiresolution scan matcher for localizing self-driving cars in urban environments. State-of-the-art approaches to vehicle localization rely on observing road surface reflectivity with a three-dimensional (3D) light detection and ranging (LIDAR) scanner to achieve centimeter-level accuracy. However, these …
Fig. 1: Real-time visualization of the proposed system for autonomously landing a UAV on a transiting ship. The UAV estimates its relative pose to the landing platform through observations of fiducial markers and an EKF. Our visualization displays the UAV trajectory, control commands, waypoints, fiducial marker detections, estimated poses of the UAV and …
This paper reports on visual obstacle detection from a monocular camera for autonomous vehicles. By leveraging a textured prior map, we propose a probabilistic formulation for finding the optimal image partition that separates obstacles from the ground plane. Our key insight is the use of a prior map that enables ground appearance models conditioned on prior …
2016 ACKNOWLEDGMENTS This has been quite a journey. With its ups and downs, countless deadlines, and just as many last minutes, I am so grateful for the support that I've had throughout it all. Without my advisors, family, and friends, this work would not have been possible. Thank you to all. First, I'd like to thank my advisor Prof. Ryan Eustice. Your watch …
Many autonomous systems require the ability to perceive and understand motion in a dynamic environment. We present a novel algorithm that estimates this motion from raw LIDAR data in real time without the need for segmentation or model-based tracking. The sensor data is first used to construct an occupancy grid. The foreground is then extracted via a …
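The first step described in this abstract, accumulating raw LIDAR returns into an occupancy grid, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the input format, grid resolution, and extent are assumptions, and the excerpt does not specify how the foreground is subsequently extracted.

```python
import numpy as np

def build_occupancy_grid(points, resolution=0.2, extent=20.0):
    """Accumulate 2D LIDAR hit points into a hit-count occupancy grid.

    points     : (N, 2) array of x, y coordinates in meters, sensor-centered
                 (hypothetical input format; preprocessing is not shown
                 in the excerpt).
    resolution : cell size in meters (assumed value).
    extent     : half-width of the grid in meters (assumed value).
    """
    n_cells = int(2 * extent / resolution)
    grid = np.zeros((n_cells, n_cells), dtype=np.int32)
    # Map metric coordinates to integer cell indices.
    idx = np.floor((points + extent) / resolution).astype(int)
    # Discard returns that fall outside the grid.
    in_bounds = np.all((idx >= 0) & (idx < n_cells), axis=1)
    for i, j in idx[in_bounds]:
        grid[i, j] += 1
    return grid

# Two nearby returns land in the same cell; a third lands elsewhere.
points = np.array([[1.0, 1.0], [1.05, 1.02], [-3.0, 4.0]])
grid = build_occupancy_grid(points)
```

In practice a cell would be declared occupied once its hit count (or a log-odds update derived from it) crosses a threshold, with free space traced along each beam.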