Explainable AI (XAI) for PHM of Industrial Asset: A State-of-The-Art, PRISMA-Compliant Systematic Review
@article{Nor2021ExplainableA, title={Explainable AI (XAI) for PHM of Industrial Asset: A State-of-The-Art, PRISMA-Compliant Systematic Review}, author={Ahmad Nazrie Bin Mohd Nor and Srinivasa Rao Pedapati and Masdi Muhammad}, journal={ArXiv}, year={2021}, volume={abs/2107.03869} }
A state-of-the-art systematic review of XAI applied to Prognostics and Health Management (PHM) of industrial assets is presented. The work provides an overview of the general trend of XAI in PHM, answers the question of accuracy versus explainability, and investigates the extent of the human role, explainability evaluation, and uncertainty management in PHM XAI. Research articles linked to PHM XAI, written in English and published from 2015 to 2021, were selected from IEEE Xplore, ScienceDirect, SpringerLink…
4 Citations
Application of Explainable AI (XAI) for Anomaly Detection and Prognostic of Gas Turbines with Uncertainty Quantification
- Computer Science
- 2021
An anomaly detection and prognostics method for gas turbines is proposed, combining a Bayesian deep learning model with SHapley Additive exPlanations (SHAP) and uncertainty quantification to offer a comprehensive explanation package that assists decision making.
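A minimal sketch of this pattern (an uncertainty-aware model explained with SHAP) follows. It is illustrative only, not the cited paper's implementation: a random-forest vote spread stands in for the Bayesian deep learning posterior, the data is synthetic, and the `shap` and `scikit-learn` packages are assumed to be installed.

```python
# Illustrative sketch, not the cited paper's code: ensemble vote spread
# stands in for the Bayesian posterior; the data is synthetic.
import numpy as np
import shap  # assumed installed: pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for gas-turbine sensor readings (0 = healthy, 1 = anomalous).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Crude uncertainty quantification: disagreement among the ensemble's trees.
tree_probs = np.stack([t.predict_proba(X)[:, 1] for t in model.estimators_])
uncertainty = tree_probs.std(axis=0)  # high spread = less confident prediction

# SHAP attributions: which sensor features drive each anomaly score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

print("mean predictive uncertainty:", uncertainty.mean())
```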
A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions
- Computer Science, ArXiv
- 2022
This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as RAN, zero-touch network management, and E2E slicing, together with the use cases that general users would ultimately enjoy.
Explainable Artificial Intelligence for Anomaly Detection and Prognostic of Gas Turbines using Uncertainty Quantification with Sensor-Related Data
- Computer Science
- 2021
A new method of anomaly detection and prognostics for gas turbines using Bayesian deep learning and Shapley additive explanations (SHAP) is presented; its ability to increase PHM performance confirms its value in AI-based reliability research.
Machinery Faults Prediction Using Ensemble Tree Classifiers: Bagging or Boosting?
- Computer Science, Annual Conference of the PHM Society
- 2021
This study shows that tree-based techniques are best suited to perform root cause analysis for each faulty state and establish rules for faulty conditions.
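As an illustration of the bagging-versus-boosting comparison this study performs, a minimal scikit-learn sketch is shown below; the synthetic dataset and default model settings are placeholders, not the study's machinery data or tuned configurations.

```python
# Illustrative sketch of a bagging-vs-boosting comparison on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic multi-class stand-in for machinery states (healthy + faulty modes).
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                           n_classes=4, random_state=0)

models = {
    "bagging (random forest)": RandomForestClassifier(n_estimators=200, random_state=0),
    "boosting (gradient boosting)": GradientBoostingClassifier(random_state=0),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```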