Corpus ID: 231879809

Driving Style Representation in Convolutional Recurrent Neural Network Model of Driver Identification

@article{Moosavi2021DrivingSR,
  title={Driving Style Representation in Convolutional Recurrent Neural Network Model of Driver Identification},
  author={Sobhan Moosavi and Pravar Dilip Mahajan and Srinivasan Parthasarathy and Colleen Saunders-Chukwu and Rajiv Ramnath},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.05843}
}
Identifying driving styles is the task of analyzing driver behavior in order to capture variations that discriminate one driver from another. This task has become a prerequisite for a variety of applications, including usage-based insurance, driver coaching, driver action prediction, and even the design of autonomous vehicles, because driving style encodes essential information needed by these applications. In this paper, we present a deep-neural-network…
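The excerpt does not include the paper's architecture details, but the convolutional-recurrent pattern named in the title can be sketched: a 1D convolution extracts local features from a window of driving telemetry, a recurrent layer summarizes the feature sequence, and a softmax head scores candidate drivers. The following NumPy forward pass is a minimal illustration only; all dimensions, channel names, and weights are hypothetical, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1D convolution with ReLU. x: (T, C_in), w: (K, C_in, C_out)."""
    T = x.shape[0]
    K, _, C_out = w.shape
    out = np.empty((T - K + 1, C_out))
    for t in range(T - K + 1):
        # Contract the kernel window over time and input channels.
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def rnn_last_state(x, w_x, w_h, b_h):
    """Vanilla tanh RNN; returns the final hidden state as the sequence summary."""
    h = np.zeros(w_h.shape[0])
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ w_x + h @ w_h + b_h)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical dimensions: a 128-step window of 3 telemetry channels
# (e.g. speed, acceleration, heading change), 10 candidate drivers.
T, C_in, K, C_conv, H, n_drivers = 128, 3, 5, 16, 32, 10

window = rng.standard_normal((T, C_in))          # one trajectory segment
w_conv = rng.standard_normal((K, C_in, C_conv)) * 0.1
b_conv = np.zeros(C_conv)
w_x = rng.standard_normal((C_conv, H)) * 0.1
w_h = rng.standard_normal((H, H)) * 0.1
b_h = np.zeros(H)
w_out = rng.standard_normal((H, n_drivers)) * 0.1

feats = conv1d(window, w_conv, b_conv)           # local driving-style features
h = rnn_last_state(feats, w_x, w_h, b_h)         # temporal summary of the window
probs = softmax(h @ w_out)                       # per-driver probabilities
```

In a trained system the weights would be learned end-to-end with a cross-entropy loss over known driver labels; here they are random, so only the shapes and data flow are meaningful.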


Unsupervised Driving Behavior Analysis using Representation Learning and Exploiting Group-based Training

This work performs a robust driving-pattern analysis, capturing variations in driving patterns by learning a compressed representation of time series with a multi-layer seq-2-seq autoencoder, applying hierarchical clustering, and recommending the choice of the best distance measure.

Contextual Driving Scene Perception from Anonymous Vehicle Bus Data for Automotive Applications

The Anonymous Driving Scene Perception (ADSP) Model is introduced, a novel deep neural network designed to classify anonymous Controller Area Network (CAN)-bus data into multiple driving-context domains; it demonstrates the feasibility of driving-scene classification from anonymous CAN-bus data without collecting sensitive data from users.

Reinforced Feature Extraction and Multi-Resolution Learning for Driver Mobility Fingerprint Identification

RM-Drive, a novel framework based on reinforced feature extraction and multi-resolution learning, first employs spatio-temporal inverse reinforcement learning (ST-IRL) to extract DMFs from historical trajectories, then generates trajectory embeddings by fusing the extracted DMFs with contextual factors using the multi-resolution trajectory embedding network (MTE-Net).

Driver Identification Methods in Electric Vehicles, a Review

It is concluded that on-board sensor data in the natural driving state is objective and accurate and could be the main data source for driver identification.

References

Showing 1-10 of 40 references

Autoencoder Regularized Network For Driving Style Representation Learning

Experiments show that ARNet learns a well-generalized driving style representation and significantly outperforms existing methods and alternative architectures, achieving the lowest estimation error and the highest identification accuracy compared with traditional supervised learning methods.

Driver Action Prediction Using Deep (Bidirectional) Recurrent Neural Network

The proposed driver action prediction system incorporates camera-based knowledge of the driving environment and of the driver themselves, in addition to traditional vehicle dynamics, and uses a deep bidirectional recurrent neural network to learn the correlation between sensory inputs and impending driver behavior, achieving accurate action prediction over a long prediction horizon.

You Are How You Drive: Peer and Temporal-Aware Representation Learning for Driving Behavior Analysis

A Peer and Temporal-Aware Representation Learning based framework (PTARL) is proposed for driving behavior analysis with GPS trajectory data; it develops a peer- and temporal-aware representation learning method to learn a sequence of time-varying yet relational vectorized representations from driving state transition graphs.

Beyond short snippets: Deep networks for video classification

This work proposes and evaluates several deep neural network architectures to combine image information across a video over longer time periods than previously attempted, and proposes two methods capable of handling full length videos.

Distinguishing Trajectories from Different Drivers using Incompletely Labeled Trajectories

A Trajectory-to-Image (T2I) encoding scheme that captures both geographic and driving-behavior features of trajectories in 3D images, and a multi-task deep learning model called T2INet for estimating the total number of drivers in the unlabeled trajectories.

Brain4Cars: Car That Knows Before You Do via Sensory-Fusion Deep Learning Architecture

A sensory-fusion deep learning architecture which jointly learns to anticipate and fuse multiple sensory streams and a novel training procedure which allows the network to predict the future given only a partial temporal context is proposed.

Modeling Spatial-Temporal Clues in a Hybrid Deep Learning Framework for Video Classification

This work proposes a hybrid deep learning framework for video classification, which is able to model static spatial information, short-term motion, as well as long-term temporal clues in the videos, and achieves very competitive performance on two popular and challenging benchmarks.

Drive2Vec: Multiscale State-Space Embedding of Vehicular Sensor Data

A deep learning-based method for embedding sensor data in a low-dimensional yet actionable form that outperforms other baselines by up to 90%, and it is demonstrated how these embeddings of sensor data can be used to solve a variety of real-world automotive applications.

A Hybrid Framework for Text Modeling with Convolutional RNN

In this paper, we introduce a generic inference hybrid framework, the Convolutional Recurrent Neural Network (conv-RNN), for semantic modeling of text, seamlessly integrating the merits of extracting…

Driver identification using vehicle acceleration and deceleration events from naturalistic driving of older drivers

A novel approach to driver identification based on classification using multiple in-vehicle sensor signals collected in naturalistic conditions with anonymized driving locations is provided.