Corpus ID: 49293178

Conditional Affordance Learning for Driving in Urban Environments

Axel Sauer, Nikolay Savinov, Andreas Geiger
Most existing approaches to autonomous driving fall into one of two categories: modular pipelines, which build an extensive model of the environment, and imitation learning approaches, which map images directly to control outputs. [...] In addition, our approach is the first to handle traffic lights and speed signs using image-level labels only, as well as smooth car-following, resulting in a significant reduction of traffic accidents in simulation.
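The direct-perception idea summarized above can be sketched as a two-stage pipeline: a learned model predicts a handful of interpretable affordances from the camera image (conditioned on a high-level navigation command), and a hand-designed controller maps those affordances to throttle, brake, and steering. The function names, affordance set, and thresholds below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def predict_affordances(image, command):
    """Stand-in for the learned perception network.

    A real system would run a CNN over `image`, conditioned on the
    navigation `command`; here we return fixed values so the sketch
    is runnable. The affordance set is a hypothetical example.
    """
    return {
        "distance_to_lead_m": 12.0,   # gap to the vehicle ahead
        "center_offset_m": 0.3,       # lateral offset from lane center
        "red_light": False,           # is a relevant red light visible?
        "speed_limit_mps": 8.33,      # ~30 km/h
    }

def controller(affordances, current_speed_mps):
    """Rule-based control computed from the predicted affordances."""
    # Longitudinal: brake for red lights or a close lead vehicle,
    # otherwise accelerate up to the speed limit.
    if affordances["red_light"] or affordances["distance_to_lead_m"] < 5.0:
        throttle, brake = 0.0, 1.0
    elif current_speed_mps < affordances["speed_limit_mps"]:
        throttle, brake = 0.5, 0.0
    else:
        throttle, brake = 0.0, 0.0
    # Lateral: proportional steering back toward the lane center.
    steer = float(np.clip(-0.5 * affordances["center_offset_m"], -1.0, 1.0))
    return throttle, brake, steer

affs = predict_affordances(image=None, command="follow_lane")
throttle, brake, steer = controller(affs, current_speed_mps=5.0)
print(throttle, brake, steer)
```

The appeal of this intermediate representation is that the controller stays inspectable and tunable, while the learned component only has to regress a few scalars instead of full control outputs.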


Affordance-based Reinforcement Learning for Urban Driving

This work proposes a deep reinforcement learning framework that learns an optimal control policy from waypoints and low-dimensional visual representations known as affordances, and demonstrates that its agents, when trained from scratch, learn lane-following, driving around intersections, and stopping in front of other actors or traffic lights, even in dense traffic.

Conditional Vehicle Trajectories Prediction in CARLA Urban Environment

It is shown that complex urban situations can be handled with raw signal input and mid-level representations, and an original architecture inspired by social-pooling LSTMs is proposed that takes low- and mid-level data as input and produces trajectories as polynomials of time.

Yaw-Guided Imitation Learning for Autonomous Driving in Urban Environments

This paper proposes a yaw-guided imitation learning method that improves road-option performance in an end-to-end autonomous driving paradigm, both in the efficiency of exploiting training samples and in adaptability to changing environments, and reveals a causal relationship between decision-making and scene perception.

Multi-task Learning with Attention for End-to-end Autonomous Driving

A novel multi-task attention-aware network in the conditional imitation learning (CIL) framework is proposed, which improves not only the success rate on standard benchmarks but also the ability to react to traffic lights.

Affordance Learning In Direct Perception for Autonomous Driving

This work follows the direct-perception approach, training a deep neural network for affordance learning in autonomous driving from freely available Google Street View panoramas and OpenStreetMap road vector attributes, and indicates that this method could serve as a cheaper way to collect training data for autonomous driving.

Multi-Task Conditional Imitation Learning for Autonomous Navigation at Crowded Intersections

A multi-task conditional imitation learning framework is proposed that adapts both lateral and longitudinal control for safe and efficient autonomous navigation at crowded intersections requiring interaction with pedestrians.

Explaining Autonomous Driving by Learning End-to-End Visual Attention

This work trains an imitation-learning-based agent equipped with an attention model that reveals which parts of the image were deemed most important, and achieves superior performance on a standard benchmark in the CARLA driving simulator.

Navigation Command Matching for Vision-based Autonomous Driving

The proposed NCM model improves the generalizability of the agent, obtains good performance even in unseen scenarios, and outperforms previous state-of-the-art models on various tasks in terms of the percentage of successfully completed episodes.

CADRE: A Cascade Deep Reinforcement Learning Framework for Vision-based Autonomous Urban Driving

A novel CAscade Deep REinforcement learning framework, CADRE, is proposed to achieve model-free vision-based autonomous urban driving; experimental results demonstrate its effectiveness and its superiority over the state of the art by a wide margin.



End-to-End Driving Via Conditional Imitation Learning

This work evaluates different architectures for conditional imitation learning in vision-based driving and conducts experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area.

Driving Policy Transfer via Modularity and Abstraction

This work presents an approach, inspired by classic driving systems, for transferring driving policies from simulation to reality via modularity and abstraction, aiming to combine the benefits of modular architectures and end-to-end deep learning.

End to End Learning for Self-Driving Cars

A convolutional neural network is trained to map raw pixels from a single front-facing camera directly to steering commands and it is argued that this will eventually lead to better performance and smaller systems.

Deep learning algorithm for autonomous driving using GoogLeNet

The proposed deep-learning-based algorithm, GoogLeNet for Autonomous Driving (GLAD), uses only five affordance parameters to control the vehicle, compared to the 14 parameters used by prior efforts.

Off-Road Obstacle Avoidance through End-to-End Learning

A vision-based obstacle avoidance system for off-road mobile robots that is trained from end to end to map raw input images to steering angles and exhibits an excellent ability to detect obstacles and navigate around them in real time at speeds of 2 m/s.

CARLA: An Open Urban Driving Simulator

This work introduces CARLA, an open-source simulator for autonomous driving research, and uses it to study the performance of three approaches to autonomous driving: a classic modular pipeline, an end-to-end model trained via imitation learning, and an end-to-end model trained via reinforcement learning.

Autonomous driving in urban environments: Boss and the Urban Challenge

This paper describes Boss, the autonomous vehicle developed by Carnegie Mellon's Tartan Racing team that won the 2007 DARPA Urban Challenge, and its architecture for perception, planning, and control in urban environments.

ALVINN: An Autonomous Land Vehicle in a Neural Network

ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following that can effectively follow real roads under certain field conditions.

End-to-End Learning of Driving Models from Large-Scale Video Datasets

This work advocates learning a generic vehicle motion model from large-scale crowd-sourced video data, and develops an end-to-end trainable architecture that learns to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and the previous vehicle state.

3D Traffic Scene Understanding From Movable Platforms

A novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene is presented.