Corpus ID: 235765654

Explainable AI (XAI) for PHM of Industrial Asset: A State-of-The-Art, PRISMA-Compliant Systematic Review

@article{Nor2021ExplainableA,
  title={Explainable AI (XAI) for PHM of Industrial Asset: A State-of-The-Art, PRISMA-Compliant Systematic Review},
  author={Ahmad Nazrie Bin Mohd Nor and Srinivasa Rao Pedapati and Masdi Muhammad},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.03869}
}
A state-of-the-art systematic review of XAI applied to Prognostics and Health Management (PHM) of industrial assets is presented. The work provides an overview of the general trend of XAI in PHM, addresses the question of accuracy versus explainability, and investigates the extent of the human role, explainability evaluation, and uncertainty management in PHM XAI. Research articles linked to PHM XAI, published in English from 2015 to 2021, are selected from IEEE Xplore, ScienceDirect, SpringerLink…
Application of Explainable AI (XAI) for Anomaly Detection and Prognostic of Gas Turbines with Uncertainty Quantification
TLDR
An anomaly detection and prognostic method for gas turbines is proposed, using a Bayesian deep learning model with SHapley Additive exPlanations (SHAP) and uncertainty quantification, offering a comprehensive explanation package that assists decision making.
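The pattern this entry (and the closely related gas-turbine entry below) describes can be illustrated with a minimal sketch: explain a probabilistic model's mean prediction with SHAP while reporting predictive uncertainty. The toy posterior ensemble here is only a stand-in for the authors' Bayesian deep learning model, and all data and names are illustrative assumptions.

```python
# Minimal sketch: SHAP attributions plus a simple uncertainty estimate.
# A toy "posterior" weight ensemble stands in for a Bayesian deep model.
import numpy as np
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # stand-in for turbine sensor features
y = X @ np.array([0.8, -0.5, 0.3, 0.0]) + 0.1 * rng.normal(size=200)

# Toy posterior: weight samples scattered around the least-squares fit
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
w_samples = w_hat + 0.05 * rng.normal(size=(50, 4))

def predict_mean(X):
    return (X @ w_samples.T).mean(axis=1)      # point prediction to explain

def predict_std(X):
    return (X @ w_samples.T).std(axis=1)       # epistemic uncertainty estimate

# Model-agnostic SHAP attributions for the mean prediction
explainer = shap.KernelExplainer(predict_mean, shap.sample(X, 20))
phi = explainer.shap_values(X[:3])

for i in range(3):
    print(f"pred={predict_mean(X[i:i+1])[0]:+.3f} "
          f"+/- {predict_std(X[i:i+1])[0]:.3f}  shap={np.round(phi[i], 3)}")
```

The SHAP values indicate which sensors drive a given prediction, while the posterior spread flags predictions that should be treated with caution.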
A Survey on XAI for Beyond 5G Security: Technical Aspects, Use Cases, Challenges and Research Directions
TLDR
Across every facet of the forthcoming B5G era, including technologies such as the RAN, zero-touch network management, and E2E slicing, this survey emphasizes the role of XAI and the use cases that general users would ultimately enjoy.
Explainable Artificial Intelligence for Anomaly Detection and Prognostic of Gas Turbines using Uncertainty Quantification with Sensor-Related Data
TLDR
A new method of anomaly detection and prognostic for gas turbines using Bayesian deep learning and Shapley additive explanations (SHAP) is presented; its ability to increase PHM performance confirms its value in AI-based reliability research.
Machinery Faults Prediction Using Ensemble Tree Classifiers: Bagging or Boosting?
TLDR
This study shows that tree-based ensemble techniques are best suited to performing root-cause analysis for each faulty state and to establishing rules for faulty conditions.
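As a hedged illustration of the bagging-versus-boosting comparison this entry refers to, the sketch below contrasts the two ensemble families on synthetic multi-class "fault state" data; the dataset and hyperparameters are assumptions, not the study's.

```python
# Minimal sketch: bagging (random forest) vs. boosting (gradient boosting)
# tree ensembles on synthetic data standing in for machinery measurements.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)   # 3 "fault states"

models = {
    "bagging (RandomForest)": RandomForestClassifier(n_estimators=200, random_state=0),
    "boosting (GradientBoosting)": GradientBoostingClassifier(random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```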

References

Showing 1-10 of 176 references
Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review
TLDR
It is found that XAI evaluation in medicine has not been adequately and formally practiced, and ample opportunities exist to advance XAI research in medicine.
Interpretable logic tree analysis: A data-driven fault tree methodology for causality analysis
A Modern Data-Mining Approach Based on Genetically Optimized Fuzzy Systems for Interpretable and Accurate Smart-Grid Stability Prediction
TLDR
A fuzzy rule-based classification system characterized by a genetically optimized interpretability-accuracy trade-off is applied to the transparent and accurate prediction of Decentral Smart Grid Control (DSGC) stability.
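A compact sketch of the flavor of model involved is given below, with a toy random search standing in for the paper's genetic optimization; the membership functions, rules, and data are all illustrative assumptions.

```python
# Minimal sketch: a tiny fuzzy rule-based classifier with triangular
# membership functions; a random parameter search is a crude stand-in
# for genetic optimization of the rule base.
import numpy as np

rng = np.random.default_rng(1)

def tri(x, a, b, c):
    # Triangular membership: rises on [a, b], falls on [b, c]
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0.0, 1.0)

# Synthetic 2-feature, 2-class data ("stable" vs. "unstable")
X = rng.normal(size=(300, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def predict(X, params):
    a0, b0, c0, a1, b1, c1 = params
    # Rule 1: IF feature 0 is in region MF0 THEN class 1
    # Rule 2: IF feature 1 is in region MF1 THEN class 1
    r1 = tri(X[:, 0], a0, b0, c0)
    r2 = tri(X[:, 1], a1, b1, c1)
    return (np.maximum(r1, r2) > 0.5).astype(int)

# Toy "evolutionary" loop: keep the best random parameter vector
best, best_acc = None, 0.0
for _ in range(500):
    p = np.sort(rng.uniform(-2, 2, 3)).tolist() + np.sort(rng.uniform(-2, 2, 3)).tolist()
    acc = (predict(X, p) == y).mean()
    if acc > best_acc:
        best, best_acc = p, acc
print(f"best rule-base accuracy: {best_acc:.3f}")
```

The appeal of this model family is that each learned rule remains a human-readable IF-THEN statement, which is where the interpretability side of the trade-off comes from.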
Industrial Process Monitoring Based on Knowledge–Data Integrated Sparse Model and Two-Level Deviation Magnitude Plots
TLDR
A novel knowledge–data integrated sparse monitoring (KDISM) model and two-level deviation magnitude plots are proposed for industrial process monitoring.
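The KDISM method itself is not reproduced here, but the general idea of a per-sample deviation magnitude for process monitoring can be sketched with a plain PCA residual statistic; this is a generic stand-in with assumed synthetic data, not the authors' model.

```python
# Minimal sketch: PCA-based process monitoring with a per-sample deviation
# magnitude (squared prediction error, SPE) and an empirical control limit.
import numpy as np

rng = np.random.default_rng(2)
X_train = rng.normal(size=(500, 6))                         # normal operating data
X_train[:, 3] = X_train[:, 0] + 0.1 * rng.normal(size=500)  # correlated sensor pair

# Fit PCA on standardized training data
mu, sd = X_train.mean(0), X_train.std(0)
Z = (X_train - mu) / sd
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:2].T                                                # retain 2 principal directions

def spe(X):
    """Deviation magnitude: squared residual outside the PCA subspace."""
    Zs = (X - mu) / sd
    resid = Zs - Zs @ P @ P.T
    return (resid ** 2).sum(axis=1)

limit = np.quantile(spe(X_train), 0.99)                     # empirical control limit
X_fault = X_train[:5].copy()
X_fault[:, 3] += 4.0                                        # inject a sensor fault
print("normal SPE :", np.round(spe(X_train[:5]), 2))
print("faulty SPE :", np.round(spe(X_fault), 2), " limit:", round(limit, 2))
```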
Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics
TLDR
A comprehensive overview of methods proposed in the current literature for evaluating ML explanations is presented. The survey finds that quantitative metrics for both model-based and example-based explanations are primarily used to evaluate the parsimony/simplicity of interpretability, and that subjective measures have been embraced as the focal point of the human-centered evaluation of explainable systems.
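As a toy illustration of the parsimony/simplicity notion the survey highlights, the sketch below scores an explanation by how few features carry most of the attribution mass; this is a made-up metric for illustration, not one the survey proposes.

```python
# Minimal sketch: a parsimony-style score for a feature-attribution
# explanation, defined as the smallest number of features whose absolute
# attributions cover a target share of the total mass. Purely illustrative.
import numpy as np

def parsimony(attributions, mass=0.9):
    a = np.sort(np.abs(np.asarray(attributions, dtype=float)))[::-1]
    cum = np.cumsum(a) / a.sum()
    return int(np.searchsorted(cum, mass) + 1)

print(parsimony([0.75, 0.2, 0.03, 0.01, 0.01]))  # -> 2: two features cover 90%
```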
A review on the application of deep learning in system health management
...