Corpus ID: 236177826

Interpretable Machine Learning Models for Predicting and Explaining Vehicle Fuel Consumption Anomalies

@inproceedings{Barbado2020InterpretableML,
  title={Interpretable Machine Learning Models for Predicting and Explaining Vehicle Fuel Consumption Anomalies},
  author={A. Barbado and Óscar Corcho},
  year={2020}
}
Identifying anomalies in the fuel consumption of the vehicles of a fleet is a crucial aspect of optimizing consumption and reducing costs. However, this information alone is insufficient, since fleet operators need to know the causes behind anomalous fuel consumption. We combine unsupervised anomaly detection techniques, domain knowledge and interpretable Machine Learning models to explain potential causes of abnormal fuel consumption in terms of feature relevance. The explanations are used…
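The abstract does not name the specific detector or explainer, so the following is a minimal sketch of such a pipeline under assumed components: an Isolation Forest flags anomalous trips, and a shallow decision-tree surrogate expresses the flags as feature-based rules. None of these choices are confirmed by the paper.

```python
# Hypothetical sketch of the pipeline the abstract describes: an unsupervised
# detector flags anomalous fuel consumption, then an interpretable surrogate
# model explains the flags in terms of feature relevance. The concrete
# components (IsolationForest, a shallow decision tree) are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = ["speed_kmh", "load_tons", "fuel_l_per_100km"]  # toy trip features
X = rng.normal([60.0, 10.0, 30.0], [10.0, 2.0, 3.0], size=(500, 3))

# 1) Unsupervised anomaly detection over trips.
is_anomaly = IsolationForest(random_state=0).fit_predict(X) == -1

# 2) Interpretable surrogate trained to reproduce the anomaly flags;
#    its rules expose which feature ranges drive the anomalies.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, is_anomaly)
print(export_text(surrogate, feature_names=features))
```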

References

Showing 1-10 of 81 references
Application of machine learning for fuel consumption modelling of trucks
TLDR: The study shows the feasibility of using telematics data together with the information in HAPMS for modelling fuel consumption, and shows that although all three methods make it possible to develop models with good precision, the RF slightly outperforms SVM and ANN, giving a higher R² and lower error.
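As a rough illustration of the kind of comparison this study reports, the sketch below fits RF, SVM and ANN regressors on synthetic fuel-consumption data and compares their R² scores; the data and hyperparameters are assumptions for the sketch, not the study's setup.

```python
# Illustrative RF vs. SVM vs. ANN comparison on toy fuel-consumption data;
# everything here (features, noise, hyperparameters) is assumed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform([20, 1], [90, 40], size=(1000, 2))               # speed_kmh, load_tons
y = 15 + 0.1 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1000)  # L/100 km

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "RF": RandomForestRegressor(random_state=0),
    "SVM": SVR(C=10.0),
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(r2_score(y_te, model.predict(X_te)), 3))
```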
Self-organizing maps for anomaly detection in fuel consumption. Case study: Illegal fuel storage in Bolivia
TLDR: This work uses the unsupervised machine learning technique Self-Organizing Maps (SOM) to extract vehicle consumption patterns and to compute anomaly scores based on each vehicle's own history and its group's behavior, detecting anomalies with 80% certainty.
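A rough sketch of SOM-based anomaly scoring in this spirit, assuming the third-party MiniSom library (the paper's own implementation may differ); the per-sample quantization error, i.e. the distance to the best matching unit, serves as the anomaly score.

```python
# SOM anomaly scoring sketch using MiniSom (pip install minisom); the library
# choice, grid size and threshold are assumptions for illustration.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(500, 3))    # toy normalized consumption features

som = MiniSom(8, 8, 3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 1000)

# Anomaly score: distance from each sample to its best matching unit,
# obtained via the SOM's quantization of the data.
scores = np.linalg.norm(X - som.quantization(X), axis=1)
threshold = np.quantile(scores, 0.95)  # flag the top 5% as anomalous
print(np.where(scores > threshold)[0])
```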
Impact of Driver Behavior on Fuel Consumption: Classification, Evaluation and Prediction Using Machine Learning
TLDR: This paper introduces two kinds of machine learning methods for evaluating the fuel efficiency of driving behavior using naturalistic driving data, and shows that the proposed method can effectively identify the relationship between driving behavior and fuel consumption at both macro and micro levels, allowing for end-to-end fuel consumption feature prediction.
SafeDrive: Online Driving Anomaly Detection From Large-Scale Vehicle Data
TLDR: SafeDrive is proposed, an online and status-aware approach capable of identifying a variety of driving anomalies from a large-scale vehicle data stream with an overall accuracy of 93%; the identified driving anomalies can be used to alert drivers in a timely manner so they can correct their driving behaviors.
Neural Network Modeling for Fuel Consumption Base on Least Computational Cost Parameters
  • Ana Antoniette C. Illahi, A. Bandala, E. Dadios
  • Computer Science
  • 2019 IEEE 11th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM)
  • 2019
TLDR: This study shows the capability of an ANN to predict fuel consumption using the MATLAB neural fitting tool, and demonstrates that the neural-network-based system is efficient for predicting the fuel consumption of an engine.
Understanding the origins and variability of the fuel consumption gap: lessons learned from laboratory tests and a real-driving campaign
Background: Divergence in fuel consumption (FC) between type-approval tests and real-world driving trips, also known as the FC gap, is a well-known issue, and Europe is preparing the field for…
A review of vehicle fuel consumption models to evaluate eco-driving and eco-routing
Abstract: Fuel consumption models have been widely used to predict fuel consumption and evaluate new vehicle technologies. However, due to the uncertainty and high nonlinearity of fuel systems, it is…
From local explanations to global understanding with explainable AI for trees
TLDR: An explanation method for trees is presented that enables the computation of optimal local explanations for individual predictions; the authors demonstrate their method on three medical datasets.
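This reference introduces TreeSHAP, which is available through the authors' shap package; a minimal usage sketch follows, with a toy model and data as assumptions.

```python
# Minimal TreeSHAP usage sketch via the shap package (pip install shap);
# the model and data are toy assumptions for illustration.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.1, 200)

model = RandomForestRegressor(random_state=0).fit(X, y)

# Exact, polynomial-time SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)  # local feature attributions for the first sample
```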
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning aninterpretable model locally varound the prediction. Expand
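LIME is distributed by the paper's authors as the lime package; below is a brief usage sketch on a toy tabular classifier, where the model, feature names and data are illustrative assumptions.

```python
# LIME usage sketch via the lime package (pip install lime); the classifier
# and data are toy assumptions.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=["f0", "f1", "f2", "f3"], class_names=["low", "high"]
)
# Fit a sparse local surrogate around one instance and report feature weights.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```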
Interpretable Machine Learning
Interpretable machine learning has become a popular research direction as deep neural networks (DNNs) have become more powerful and their applications more mainstream, yet DNNs remain difficult to…