Corpus ID: 236950457

Analysis of Driving Scenario Trajectories with Active Learning

@article{Jarl2021AnalysisOD,
  title={Analysis of Driving Scenario Trajectories with Active Learning},
  author={Sanna Jarl and Sadegh Rahrovani and Morteza Haghir Chehreghani},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.03217}
}
Annotating driving scenario trajectories based only on explicit rules (i.e., knowledge-based methods) can be subject to errors, such as false positive/negative classification of scenarios that lie on the border of two scenario classes, and missing unknown scenario classes as well as anomalies. On the other hand, having annotators verify the labels is not cost-efficient. For this purpose, active learning (AL) could potentially improve the annotation procedure by including an annotator/expert…
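
The annotation problem described here is typically cast as pool-based active learning, where a model proposes the most informative unlabeled trajectories and an annotator labels them. Below is a minimal sketch of such a loop with least-confidence uncertainty sampling; the synthetic data, scikit-learn classifier, budget, and number of rounds are illustrative assumptions, not the setup used in the paper.

```python
# Minimal pool-based active learning loop with least-confidence uncertainty sampling.
# Illustrative sketch only: the classifier, data, and budget are placeholders,
# not the setup used in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)

labeled = list(rng.choice(len(X), size=20, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in set(labeled)]

model = LogisticRegression(max_iter=1000)
for _ in range(10):                        # 10 annotation rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    # Least-confidence score: 1 - max class probability (higher = more uncertain).
    uncertainty = 1.0 - proba.max(axis=1)
    query = [pool[i] for i in np.argsort(-uncertainty)[:10]]  # ask the "annotator"
    labeled.extend(query)                  # oracle labels come from y in this sketch
    pool = [i for i in pool if i not in set(query)]

print("accuracy on remaining pool:", model.score(X[pool], y[pool]))
```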

Passive and Active Learning of Driver Behavior from Electric Vehicles
TLDR
This work investigates non-recurrent architectures, such as self-attention models and convolutional neural networks with joint recurrence plots, compares them with recurrent models, and demonstrates that some active sampling techniques can outperform random sampling and thereby reduce the annotation effort.
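
A (joint) recurrence plot, as mentioned in the summary above, is commonly built by thresholding pairwise distances within each signal channel and taking the elementwise product across channels. The sketch below shows this generic construction on synthetic signals; the threshold and data are placeholder assumptions, not values from the cited work.

```python
# Standard (joint) recurrence plot construction for a multivariate time series.
# Generic sketch with synthetic data; the threshold eps is an arbitrary assumption.
import numpy as np

def recurrence_plot(signal: np.ndarray, eps: float) -> np.ndarray:
    """R[i, j] = 1 if |x_i - x_j| < eps, else 0, for a 1-D signal."""
    dist = np.abs(signal[:, None] - signal[None, :])
    return (dist < eps).astype(np.uint8)

def joint_recurrence_plot(signals: np.ndarray, eps: float) -> np.ndarray:
    """Elementwise product of the per-channel recurrence plots (channels x time)."""
    plots = [recurrence_plot(ch, eps) for ch in signals]
    return np.prod(plots, axis=0)

t = np.linspace(0, 4 * np.pi, 200)
multichannel = np.stack([np.sin(t), np.cos(2 * t)])   # 2 channels x 200 time steps
jrp = joint_recurrence_plot(multichannel, eps=0.2)
print(jrp.shape)  # (200, 200) binary image that can be fed to a CNN
```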

References

SHOWING 1-10 OF 23 REFERENCES
A Generic Framework for Clustering Vehicle Motion Trajectories
TLDR
An effective non-parametric trajectory clustering framework consisting of five stages, aimed at aligning trajectories and quantifying their pairwise temporal dissimilarities, achieves promising results despite the complexity caused by trajectories of varying length.
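
A common way to realize the alignment and pairwise temporal dissimilarity idea summarized above is dynamic time warping (DTW) followed by clustering on the precomputed distance matrix. The sketch below is a generic illustration under that assumption, not the five-stage framework of the cited paper.

```python
# Generic sketch: DTW dissimilarities between variable-length trajectories,
# then agglomerative clustering on the precomputed distance matrix.
# Not the five-stage framework of the cited paper.
import numpy as np
from sklearn.cluster import AgglomerativeClustering  # metric="precomputed" needs sklearn >= 1.2

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic time warping distance for 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

# Toy variable-length "trajectories" (e.g., one signal channel per scenario).
trajectories = [np.sin(np.linspace(0, 2 * np.pi, n)) for n in (50, 60, 55)] + \
               [np.linspace(0, 1, n) for n in (40, 70)]

k = len(trajectories)
D = np.zeros((k, k))
for i in range(k):
    for j in range(i + 1, k):
        D[i, j] = D[j, i] = dtw(trajectories[i], trajectories[j])

labels = AgglomerativeClustering(n_clusters=2, metric="precomputed",
                                 linkage="average").fit_predict(D)
print(labels)
```
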
A Deep Learning Framework for Generation and Analysis of Driving Scenario Trajectories
TLDR
This work proposes a unified deep learning framework for the generation and analysis of driving scenario trajectories and validates its effectiveness in a principled way, adapting Recurrent Conditional Generative Adversarial Networks by conditioning on the length of the trajectories.
Consistency-Based Semi-Supervised Active Learning: Towards Minimizing Labeling Cost
TLDR
This work proposes a consistency-based sample selection metric that is coherent with the training objective, so that the selected samples are effective at improving model performance, together with a measure that is empirically correlated with the AL target loss and potentially useful for determining the proper starting point of learning-based AL methods.
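
One frequently used instantiation of a consistency measure is the variance of a model's predictions under random input perturbations. The sketch below illustrates this generic idea; it is not necessarily the exact metric proposed in the cited paper, and the Gaussian-noise augmentation and `predict_proba` interface are assumptions.

```python
# Generic sketch of a consistency-based selection score: variance of predicted
# class probabilities under random input perturbations. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def consistency_score(model, x, n_perturb=8, noise_std=0.05, seed=0):
    """Higher score = less consistent predictions under perturbation = better query."""
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(n_perturb):
        x_aug = x + rng.normal(0.0, noise_std, size=x.shape)   # stand-in augmentation
        probs.append(model.predict_proba(x_aug[None, :])[0])
    probs = np.stack(probs)                 # (n_perturb, n_classes)
    return float(probs.var(axis=0).sum())   # total predictive variance across classes

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(consistency_score(clf, X[0]))
```

In an active learning round, unlabeled samples would be ranked by this score and the highest-scoring ones sent to the annotator.
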
Active Learning for Convolutional Neural Networks: A Core-Set Approach
TLDR
This work defines active learning as a core-set selection problem, i.e., choosing a set of points such that a model learned over the selected subset is competitive for the remaining data points, and presents a theoretical result characterizing the performance of any selected subset using the geometry of the data points.
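
The core-set selection described above is usually approximated with the k-center greedy rule on feature embeddings: repeatedly pick the unlabeled point farthest from the already selected set. The sketch below shows this standard approximation on arbitrary feature vectors; the embedding source and set sizes are assumptions.

```python
# k-center greedy selection, the standard approximation used for core-set
# active learning. Generic sketch over arbitrary feature embeddings.
import numpy as np

def k_center_greedy(features: np.ndarray, labeled_idx: list, budget: int) -> list:
    # Distance from every point to its nearest already-selected point.
    min_dist = np.min(
        np.linalg.norm(features[:, None, :] - features[labeled_idx][None, :, :], axis=2),
        axis=1,
    )
    selected = []
    for _ in range(budget):
        idx = int(np.argmax(min_dist))          # farthest point from the selected set
        selected.append(idx)
        new_dist = np.linalg.norm(features - features[idx], axis=1)
        min_dist = np.minimum(min_dist, new_dist)  # update nearest-selected distances
    return selected

features = np.random.default_rng(0).normal(size=(500, 32))   # e.g., network embeddings
print(k_center_greedy(features, labeled_idx=[0, 1, 2], budget=10))
```
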
Active Learning Literature Survey
TLDR
This report provides a general introduction to active learning and a survey of the literature, including a discussion of the scenarios in which queries can be formulated, and an overview of the query strategy frameworks proposed in the literature to date.
Deep Bayesian Active Learning with Image Data
TLDR
This paper develops an active learning framework for high-dimensional data, a task that has so far been extremely challenging with very sparse existing literature, and demonstrates its active learning techniques on image data, obtaining a significant improvement over existing active learning approaches.
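
Deep Bayesian active learning of this kind commonly estimates uncertainty with Monte Carlo dropout. Below is a minimal PyTorch-style sketch of a predictive-entropy acquisition under that approximation; the network, data, and number of stochastic forward passes are placeholder assumptions.

```python
# Sketch of Monte Carlo dropout acquisition (predictive entropy) in PyTorch.
# Generic illustration; the network and data are placeholders, not the paper's setup.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self, n_in=20, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 64), nn.ReLU(), nn.Dropout(p=0.5),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predictive_entropy(model: nn.Module, x: torch.Tensor, n_samples: int = 20) -> torch.Tensor:
    """Entropy of the MC-dropout averaged predictive distribution (higher = query first)."""
    model.train()  # keep dropout active so each forward pass is a stochastic sample
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    return -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=-1)

model = SmallNet()
pool = torch.randn(100, 20)
scores = predictive_entropy(model, pool)
query = torch.topk(scores, k=10).indices   # indices to send to the annotator
print(query)
```
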
Model-Centric and Data-Centric Aspects of Active Learning for Neural Network Models
TLDR
This work investigates incremental and cumulative training modes, which specify how the currently labeled data are used for training, and analyzes in detail the behavior of query strategies and their corresponding informativeness measures in order to propose more efficient querying and active learning paradigms.
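
The two training modes mentioned above can be contrasted schematically: cumulative training refits on all labels collected so far, while incremental training continues from the current model using only the newly labeled batch. The sketch below is a generic illustration of that distinction under these assumptions, not the cited paper's experimental code; `train_epochs` is a stand-in helper.

```python
# Schematic contrast between cumulative and incremental training in an AL round.
# Generic sketch, not the cited paper's code.
import torch
import torch.nn as nn

def train_epochs(model, X, y, epochs=5, lr=1e-3):
    """Stand-in training helper (plain cross-entropy, full-batch)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return model

def new_model():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))

X_old, y_old = torch.randn(200, 20), torch.randint(0, 3, (200,))   # previously labeled
X_new, y_new = torch.randn(50, 20), torch.randint(0, 3, (50,))     # freshly annotated

# Cumulative mode: retrain on everything labeled so far.
cumulative = train_epochs(new_model(), torch.cat([X_old, X_new]), torch.cat([y_old, y_new]))

# Incremental mode: keep the current model and continue training on the new batch only.
incremental = train_epochs(new_model(), X_old, y_old)   # model from earlier rounds
incremental = train_epochs(incremental, X_new, y_new)   # continue on the new labels
```
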
Multi-class Ensemble-Based Active Learning
TLDR
Four approaches to measuring ensemble disagreement, including margins, uncertainty sampling, and entropy, are considered and evaluated empirically on various ensemble strategies for active learning, and it is shown that margins outperform the other disagreement measures on three of the four active learning strategies.
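
All of the listed disagreement measures can be computed from the ensemble members' predicted class probabilities. A minimal sketch follows, using a randomly generated placeholder ensemble; the exact aggregation choices are assumptions rather than the cited paper's implementation.

```python
# Minimal sketch of ensemble disagreement measures for active learning:
# margin, least-confidence (uncertainty sampling), and entropy, computed on the
# ensemble-averaged class probabilities. Placeholder ensemble for illustration.
import numpy as np

def disagreement_scores(member_probs: np.ndarray) -> dict:
    """member_probs: (n_members, n_samples, n_classes) predicted probabilities."""
    p = member_probs.mean(axis=0)                        # ensemble-averaged probabilities
    sorted_p = np.sort(p, axis=1)[:, ::-1]
    margin = sorted_p[:, 0] - sorted_p[:, 1]             # small margin = high disagreement
    least_confidence = 1.0 - sorted_p[:, 0]
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return {"margin": margin, "least_confidence": least_confidence, "entropy": entropy}

rng = np.random.default_rng(0)
raw = rng.random((5, 8, 3))                              # 5 members, 8 samples, 3 classes
member_probs = raw / raw.sum(axis=2, keepdims=True)
scores = disagreement_scores(member_probs)
# Query the sample with the smallest margin (most contested between top two classes).
print(int(np.argmin(scores["margin"])))
```
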
Visualizing Data using t-SNE
TLDR
This paper presents a new technique called t-SNE that visualizes high-dimensional data by giving each data point a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
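
t-SNE is available in scikit-learn (`sklearn.manifold.TSNE`); a minimal usage sketch on placeholder feature vectors, such as trajectory embeddings, is shown below.

```python
# Minimal t-SNE usage sketch with scikit-learn on placeholder feature vectors.
import numpy as np
from sklearn.manifold import TSNE

features = np.random.default_rng(0).normal(size=(300, 64))   # e.g., trajectory embeddings
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(features)
print(embedding.shape)   # (300, 2) coordinates for a 2-D scatter plot
```
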
Deep Active Learning for Anomaly Detection
TLDR
This work introduces a new layer that can be easily attached to any deep learning model designed for unsupervised anomaly detection to transform it into an active method, so that outliers can be separated from normal data effectively.
...