Corpus ID: 248405986

Explainable artificial intelligence for autonomous driving: An overview and guide for future research directions

@inproceedings{Atakishiyev2021ExplainableAI,
  title={Explainable artificial intelligence for autonomous driving: An overview and guide for future research directions},
  author={S. Atakishiyev and Mohammad Salameh and Hengshuai Yao and Randy Goebel},
  year={2021}
}
Autonomous driving has achieved a significant milestone in research and development over the last decade. There is increasing interest in the field as the deployment of self-operating vehicles promises safer and more ecologically friendly transportation systems. With the rise of computationally powerful artificial intelligence (AI) techniques, autonomous vehicles can sense their environment with high precision, make safe real-time decisions, and operate reliably without human intervention…

A Role for HTN Planning in Increasing Trust in Autonomous Driving

The adoption of autonomous vehicles mainly depends on the driver's trust in the vehicle's capabilities. Influencing trust requires giving it a central role when designing the vehicle's…

A Review of Applications of Artificial Intelligence in Heavy Duty Trucks

Different applications of artificial intelligence in heavy-duty trucks, such as fuel consumption prediction, emissions estimation, self-driving technology, and predictive maintenance using various machine learning and deep learning methods, are discussed.

Real-Time Automatic Wall Detection and Localization based on Side Scan Sonar Images

The method proposed in this paper is a real-time automatic target recognition approach based on Side Scan Sonar images that detects and localizes a harbor’s wall using transfer learning.
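
At a high level, the transfer-learning recipe mentioned above amounts to reusing a pretrained backbone and retraining only a small task-specific head. A minimal PyTorch sketch of that idea, assuming a ResNet-18 backbone and a binary wall / no-wall head (both illustrative placeholders, not the paper's actual architecture or sonar dataset):

```python
# Hedged sketch of transfer learning for wall detection; backbone choice,
# class count, and optimizer settings are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

def build_wall_detector(num_classes: int = 2) -> nn.Module:
    # Start from an ImageNet-pretrained backbone and freeze its feature layers.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in backbone.parameters():
        param.requires_grad = False
    # Replace the classification head so it predicts wall / no-wall.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

model = build_wall_detector()
# Only the new head would be trained on the (hypothetical) sonar-image dataset.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```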

Universal Adversarial Attacks on the Raw Data From a Frequency Modulated Continuous Wave Radar

This work presents three attack methods that are particularly suitable for the radar domain and calculates universal adversarial attack patches for all sorts of neural-network-based radar applications. It is the first work to calculate universal patches on raw radar data, which is of great importance especially for interference analysis.
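
In general, a universal adversarial perturbation is a single additive pattern optimized over many inputs so that it degrades the model on all of them at once. A hedged sketch of that generic optimization loop, assuming an arbitrary radar classifier, data loader, and L-infinity budget (all illustrative; the paper's three attack methods are not reproduced here):

```python
# Generic universal-perturbation loop via gradient ascent on the model loss;
# model, data_loader, loss_fn, and eps are placeholder assumptions.
import torch

def universal_perturbation(model, data_loader, loss_fn, eps=0.05, steps=50, lr=1e-2):
    # One shared perturbation with the shape of a single raw input sample.
    sample_x, _ = next(iter(data_loader))
    delta = torch.zeros_like(sample_x[0], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for x, y in data_loader:
            opt.zero_grad()
            # Ascend the model's loss on perturbed samples (minimize its negative).
            loss = -loss_fn(model(x + delta), y)
            loss.backward()
            opt.step()
            # Project the perturbation back into the L-infinity budget.
            with torch.no_grad():
                delta.clamp_(-eps, eps)
    return delta.detach()
```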

ThirdEye: Attention Maps for Safe Autonomous Driving Systems

This paper evaluates the effectiveness of different configurations of ThirdEye at predicting simulation-based injected failures induced by both unknown conditions (adverse weather and lighting) and unsafe/uncertain conditions created with mutation testing. Overall, ThirdEye can predict 98% of misbehaviours, outperforming a state-of-the-art failure predictor for autonomous vehicles.
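
The underlying idea of attention-based failure prediction is to reduce each attention heatmap to a scalar confidence score and flag frames whose score departs from the distribution observed under nominal driving. A minimal illustrative sketch, assuming a mean-activation score and a simple deviation threshold (placeholder choices, not ThirdEye's actual scoring functions or thresholds):

```python
# Toy attention-map scoring and misbehaviour flagging; the scoring rule and
# the k-sigma threshold are illustrative assumptions.
import numpy as np

def attention_score(attention_map: np.ndarray) -> float:
    # Summarize the heatmap, e.g. by its mean activation.
    return float(attention_map.mean())

def predict_misbehaviour(score: float, nominal_mean: float, nominal_std: float,
                         k: float = 3.0) -> bool:
    # Flag frames whose score deviates more than k standard deviations
    # from the distribution collected under nominal driving conditions.
    return abs(score - nominal_mean) > k * nominal_std
```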

References

Showing 1-10 of 167 references

Towards safe, explainable, and regulated autonomous driving

A framework that integrates autonomous control, explainable AI, and regulatory compliance to address the issue of safe and explainable autonomous driving technology is proposed and validated with a critical analysis in a case study.

Explanations in Autonomous Driving: A Survey

This survey aims to provide the fundamental knowledge required by researchers interested in explainability in AVs, identifies pertinent challenges, and provides recommendations, such as a conceptual framework for AV explainability.

Human-Vehicle Cooperation on Prediction-Level: Enhancing Automated Driving with Human Foresight

An approach is implemented that lets a human driver quickly and intuitively supplement scene predictions to an autonomous driving system via gaze; it has the potential to improve the system’s foresighted driving abilities and make autonomous driving more trustworthy, comfortable, and personalized.

Perception as prediction using general value functions in autonomous driving applications

We propose and demonstrate a framework called perception as prediction for autonomous driving that uses general value functions (GVFs) to learn predictions. Perception as prediction learns…
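
A general value function answers a predictive question of the form "what will the discounted accumulation of some signal (the cumulant) be from this state?" and can be learned with temporal-difference methods. A minimal linear TD(0) sketch of that idea, with feature extraction, cumulant choice, and step size left as illustrative assumptions rather than the paper's implementation:

```python
# Linear GVF learned with TD(0); features, cumulant, and discount are
# supplied by the caller and are assumptions for illustration.
import numpy as np

class LinearGVF:
    def __init__(self, num_features: int, alpha: float = 0.1):
        self.w = np.zeros(num_features)
        self.alpha = alpha

    def predict(self, features: np.ndarray) -> float:
        # Predicted discounted sum of the cumulant from the current state.
        return float(self.w @ features)

    def update(self, features, cumulant, gamma, next_features):
        # TD(0): move the prediction toward cumulant + gamma * next prediction.
        td_error = cumulant + gamma * self.predict(next_features) - self.predict(features)
        self.w += self.alpha * td_error * features
```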

AUTO-DISCERN: Autonomous Driving Using Common Sense Reasoning

The goal of this research is to develop an autonomous driving system that works by simulating the mind of a human driver; to that end, the AUTO-DISCERN system is developed using commonsense reasoning technology to automate decision-making in driving.

Advisable Learning for Self-Driving Vehicles by Internalizing Observation-to-Action Rules

The approach of training the autonomous system with human advice, grounded in a rich semantic representation, matches or outperforms prior work in terms of control prediction and explanation generation, and results in more interpretable visual explanations by visualizing object-centric attention maps.

Interpretable Goal Recognition in the Presence of Occluded Factors for Autonomous Vehicles

It is demonstrated that jointly inferring goals and occluded factors leads to more accurate beliefs with respect to the true world state and allows an agent to safely navigate several scenarios where other baselines take unsafe actions leading to collisions.

Interpretable Safety Validation for Autonomous Vehicles

This work describes an approach for finding interpretable failures of an autonomous system, optimized to produce failures that have high likelihood while retaining interpretability.

The value of inferring the internal state of traffic participants for autonomous freeway driving

This research uses a simple model for human behavior with unknown parameters that make up the internal states of the traffic participants, and presents a method for quantifying the value of estimating these states and planning with their uncertainty explicitly modeled.
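
One common way to represent uncertainty over such unknown internal-state parameters is a weighted set of hypotheses that is reweighted as new behavior is observed, so a planner can reason over the resulting belief. A small illustrative sketch, with the likelihood model left as a placeholder assumption rather than the paper's human-behavior model:

```python
# Belief update over hypothesized internal states (e.g. driver aggressiveness);
# likelihood_fn is a placeholder for an observation model.
import numpy as np

def update_belief(particles, weights, observation, likelihood_fn):
    # Reweight each hypothesized internal state by how well it explains
    # the newly observed behavior, then renormalize.
    weights = weights * np.array([likelihood_fn(observation, p) for p in particles])
    return particles, weights / weights.sum()
```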

Perception, Planning, Control, and Coordination for Autonomous Vehicles

Autonomous vehicles are expected to play a key role in the future of urban transportation systems, as they offer potential for additional safety, increased productivity, greater accessibility, better…
...