Feudal Steering: Hierarchical Learning for Steering Angle Prediction

  • Faith Johnson, K. Dana
  • Published 2020
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
We consider the challenge of automated steering angle prediction for self-driving cars using egocentric road images. In this work, we explore the use of feudal networks, used in hierarchical reinforcement learning (HRL), to devise a vehicle agent that predicts steering angles from first-person, dash-cam images of the Udacity driving dataset. Our method, Feudal Steering, is inspired by recent work in HRL and consists of a manager network and a worker network that operate on different temporal scales…
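The manager/worker split on different temporal scales can be sketched roughly as follows. This is an illustrative toy, not the paper's architecture: the class names, dimensions, random linear maps, and the fixed refresh horizon are all assumptions; in the paper both networks are learned and the manager's subgoal conditions the worker's steering prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

class Manager:
    """Coarse temporal scale: every `horizon` frames, emit a latent
    subgoal summarizing the desired driving behavior."""
    def __init__(self, feat_dim, goal_dim, horizon=8):
        self.W = rng.standard_normal((goal_dim, feat_dim)) * 0.1
        self.horizon = horizon
        self._goal = np.zeros(goal_dim)

    def step(self, features, t):
        if t % self.horizon == 0:  # refresh subgoal on the coarse scale
            self._goal = np.tanh(self.W @ features)
        return self._goal

class Worker:
    """Fine temporal scale: every frame, map image features plus the
    manager's current subgoal to a steering angle in [-1, 1]."""
    def __init__(self, feat_dim, goal_dim):
        self.W = rng.standard_normal((1, feat_dim + goal_dim)) * 0.1

    def step(self, features, goal):
        return float(np.tanh(self.W @ np.concatenate([features, goal])))

feat_dim, goal_dim = 16, 4
manager = Manager(feat_dim, goal_dim)
worker = Worker(feat_dim, goal_dim)

angles = []
for t in range(24):                        # 24 dash-cam frames
    feats = rng.standard_normal(feat_dim)  # stand-in for CNN image features
    goal = manager.step(feats, t)          # changes only every 8 frames
    angles.append(worker.step(feats, goal))  # changes every frame
```

The point of the hierarchy is that the manager's output is held fixed over a window of frames, so the worker's per-frame steering decisions are conditioned on a slower-moving behavioral intent.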
Steering Angle Prediction Techniques for Autonomous Ground Vehicles: A Review
This comprehensive review provides a clear picture of both approaches to steering angle prediction in the form of step-by-step procedures, and discusses open research problems to help researchers in this area discover new research horizons.
Vision-Guided Forecasting - Visual Context for Multi-Horizon Time Series Forecasting
This paper shows that a vehicle's state can be forecast to various horizons, while outperforming the current state-of-the-art results on the related task of driving state estimation.
This project explores how raw image data obtained from AV cameras can provide a model with more spatial information than can be learned from simple RGB images alone. This paper leverages the advances…


CIRL: Controllable Imitative Reinforcement Learning for Vision-based Self-driving
This work presents a general and principled Controllable Imitative Reinforcement Learning (CIRL) approach that enables the driving agent to achieve higher success rates from vision inputs alone in a high-fidelity car simulator, outperforming supervised imitation learning.
Deep Steering: Learning End-to-End Driving Model from Spatial and Temporal Visual Cues
This work focuses on a vision-based model that directly maps raw input images to steering angles using deep networks, and utilizes a visual back-propagation scheme for discovering and visualizing the image regions that crucially influence the final steering prediction.
Learning to Steer by Mimicking Features from Heterogeneous Auxiliary Networks
This paper considerably improves the accuracy and robustness of steering predictions through heterogeneous auxiliary network feature mimicking, a new and effective training method that provides much richer contextual signals beyond steering direction alone.
End to End Learning for Self-Driving Cars
A convolutional neural network is trained to map raw pixels from a single front-facing camera directly to steering commands; the authors argue that this end-to-end approach will eventually lead to better performance and smaller systems.
Latent Space Reinforcement Learning for Steering Angle Prediction
This work addresses the problem of learning driving policies for an autonomous agent in a high-fidelity simulator, using a modular deep reinforcement learning approach to predict the steering angle of the car from raw images.
Learning Navigation Subroutines from Egocentric Videos
The proposed method learns hierarchical abstractions, or subroutines, from egocentric video of experts performing tasks, by training a self-supervised inverse model on small amounts of random interaction data to pseudo-label the expert egocentric videos with agent actions.
FeUdal Networks for Hierarchical Reinforcement Learning
We introduce FeUdal Networks (FuNs): a novel architecture for hierarchical reinforcement learning. Our approach is inspired by the feudal reinforcement learning proposal of Dayan and Hinton, and…
Hierarchical Reinforcement Learning for Self-Driving Decision-Making without Reliance on Labeled Driving Data
This study presents a hierarchical reinforcement learning method for decision-making in self-driving cars that does not depend on a large amount of labelled driving data, and comprehensively considers both high-level manoeuvre selection and low-level motion control in the lateral and longitudinal directions.
Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention
  • Jinkyu Kim, J. Canny
  • Computer Science
  • 2017 IEEE International Conference on Computer Vision (ICCV)
  • 2017
This work uses a visual attention model to train a convolutional network end-to-end from images to steering angle, and shows that the network causally cues on a variety of features that are used by humans while driving.
Learning Navigation Subroutines by Watching Videos
This paper uses an inverse model trained on small amounts of interaction data to pseudo-label passive first-person videos with agent actions, and acquires visuo-motor subroutines from these videos by learning a latent intent-conditioned policy that predicts the inferred pseudo-actions from the corresponding image observations.