An Interpretable Probabilistic Approach for Demystifying Black-box Predictive Models

@article{Moreira2021AnIP,
  title={An Interpretable Probabilistic Approach for Demystifying Black-box Predictive Models},
  author={Catarina Moreira and Yu-Liang Chou and Mythreyi Velmurugan and Chun Ouyang and Renuka Sindhgatta and Peter Bruza},
  journal={Decis. Support Syst.},
  year={2021},
  volume={150},
  pages={113561}
}

Benchmarking Counterfactual Algorithms for XAI: From White Box to Black Box

This study investigates the impact of machine learning models on the generation of counterfactual explanations by conducting a benchmark evaluation over three different types of models: decision tree …

Benchmark Evaluation of Counterfactual Algorithms for XAI: From a White Box to a Black Box

TLDR
All explainable counterfactual algorithms that do not take plausibility into consideration in their internal mechanisms cannot be evaluated with the current state-of-the-art evaluation metrics.

DiCE4EL: Interpreting Process Predictions using a Milestone-Aware Counterfactual Approach

TLDR
An extension of DiCE, namely DiCE4EL (DiCE for Event Logs), is designed to generate counterfactual explanations for process prediction, and an approach is proposed that supports deriving milestone-aware counterfactual explanations at key intermediate stages of process execution to promote interpretability.
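
As context for this entry, the following is a minimal sketch of the base DiCE library (dice-ml) generating counterfactuals for a plain tabular classifier; the synthetic dataset, feature names, and model are illustrative assumptions, and the event-log-specific, milestone-aware machinery of DiCE4EL itself is not shown.

  # Minimal DiCE sketch on synthetic tabular data (the dataset, feature
  # names, and model are all made up for this example).
  import dice_ml
  import numpy as np
  import pandas as pd
  from sklearn.ensemble import RandomForestClassifier

  rng = np.random.default_rng(0)
  df = pd.DataFrame({
      "age": rng.integers(20, 70, 500).astype(float),
      "income": rng.normal(50_000, 15_000, 500),
  })
  df["outcome"] = (df["income"] > 50_000).astype(int)  # toy label

  model = RandomForestClassifier().fit(df[["age", "income"]], df["outcome"])

  data = dice_ml.Data(dataframe=df, continuous_features=["age", "income"],
                      outcome_name="outcome")
  wrapped = dice_ml.Model(model=model, backend="sklearn")
  explainer = dice_ml.Dice(data, wrapped, method="random")

  # Ask for 3 counterfactuals that flip the predicted class of one instance.
  cfs = explainer.generate_counterfactuals(df[["age", "income"]].iloc[[0]],
                                           total_CFs=3,
                                           desired_class="opposite")
  cfs.visualize_as_dataframe(show_only_changes=True)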

When to choose ranked area integrals versus integrated gradient for explainable artificial intelligence – a comparison of algorithms

TLDR
The results show that the xRAI method performs better from a theoretical point of view; however, the IG method shows good results in terms of both model accuracy and prediction quality.
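
For reference, Integrated Gradients (IG) attributes a prediction to input features by accumulating gradients along a path from a baseline to the input. The sketch below uses Captum, one common implementation of IG; the toy model, input, and baseline are assumptions made for illustration, and xRAI, which has no widely available package, is not shown.

  # Minimal Integrated Gradients sketch with Captum; model and data are
  # hypothetical placeholders.
  import torch
  import torch.nn as nn
  from captum.attr import IntegratedGradients

  model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()
  x = torch.randn(1, 4)            # one input instance
  baseline = torch.zeros(1, 4)     # all-zeros reference point

  ig = IntegratedGradients(model)
  # Approximates the integral of gradients along the straight-line path
  # from the baseline to x, attributing the class-1 output to each feature.
  attributions, delta = ig.attribute(x, baselines=baseline, target=1,
                                     return_convergence_delta=True)
  print(attributions, delta)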

Estimating Crop Biophysical Parameters Using Machine Learning Algorithms and Sentinel-2 Imagery

TLDR
The results showed that RF was superior in estimating all three biophysical parameters, followed by GBM, which was better in estimating LAI and CCC but not LCab, for which sPLS was relatively better; RF can thus be considered a good contender for operationalisation.

Evaluation of XAI on ALS 6-months mortality prediction

TLDR
The combined results of the qualitative and quantitative evaluations carried out in the experiment form the basis for a critical discussion of the properties of XAI methods and of desiderata for healthcare applications, advocating for more inclusive and extensive XAI evaluation studies involving human experts.

References

Showing 1-10 of 42 references

A Tutorial on Learning with Bayesian Networks

D. Heckerman, Innovations in Bayesian Networks, 1998
TLDR
Methods for constructing Bayesian networks from prior knowledge are discussed and methods for using data to improve these models are summarized, including techniques for learning with incomplete data.
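
As a concrete companion to the tutorial's topics, here is a minimal sketch of score-based structure learning plus parameter fitting using pgmpy; the library choice, the toy dataset, and the variable names are assumptions made for illustration, not material from the tutorial itself.

  # Minimal score-based Bayesian network learning sketch with pgmpy
  # (class names may differ slightly across pgmpy versions).
  import pandas as pd
  from pgmpy.estimators import (HillClimbSearch, BicScore,
                                MaximumLikelihoodEstimator)
  from pgmpy.models import BayesianNetwork

  # Hypothetical discrete dataset with three binary variables.
  data = pd.DataFrame({
      "rain":      [1, 1, 0, 0, 1, 0, 0, 1],
      "sprinkler": [0, 0, 1, 1, 0, 1, 0, 0],
      "wet_grass": [1, 1, 1, 1, 1, 1, 0, 1],
  })

  # Greedy hill-climbing over DAGs, scored with BIC.
  dag = HillClimbSearch(data).estimate(scoring_method=BicScore(data))

  # Fit conditional probability tables for the learned structure.
  model = BayesianNetwork(dag.edges())
  model.fit(data, estimator=MaximumLikelihoodEstimator)
  print(dag.edges())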

Definitions, methods, and applications in interpretable machine learning

TLDR
This work defines interpretability in the context of machine learning, introduces the predictive, descriptive, relevant (PDR) framework for discussing interpretations, and proposes three overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy.

A Unified Approach to Interpreting Model Predictions

TLDR
A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
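
SHAP is distributed as the shap Python package; below is a minimal usage sketch with a tree-ensemble model, where the synthetic data, model, and feature names are placeholder assumptions (output shapes of shap_values vary across shap versions).

  # Minimal SHAP sketch using TreeExplainer on a toy random forest.
  import numpy as np
  import shap
  from sklearn.ensemble import RandomForestClassifier

  X = np.random.default_rng(0).normal(size=(200, 4))
  y = (X[:, 0] + X[:, 1] > 0).astype(int)
  model = RandomForestClassifier().fit(X, y)

  explainer = shap.TreeExplainer(model)
  shap_values = explainer.shap_values(X)   # per-feature Shapley value estimates
  # Additivity: attributions plus the expected value recover the model output.
  shap.summary_plot(shap_values, X, feature_names=[f"f{i}" for i in range(4)])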

A Survey of Methods for Explaining Black Box Models

TLDR
A classification of the main problems addressed in the literature, with respect to the notion of explanation and the type of black-box system, is provided to help researchers find the proposals most useful for their own work.

Faithful and Customizable Explanations of Black Box Models

TLDR
Model Understanding through Subspace Explanations (MUSE), a novel model-agnostic framework which facilitates understanding of a given black-box model by explaining how it behaves in subspaces characterized by certain features of interest, is proposed.

Exploring Interpretability for Predictive Process Analytics

Modern predictive analytics underpinned by machine learning techniques has become a key enabler to the automation of data-driven decision making. In the context of business process management, …

Interpretability in HealthCare A Comparative Study of Local Machine Learning Interpretability Techniques

TLDR
This paper presents a comprehensive experimental evaluation of three recent and popular local model-agnostic interpretability techniques, namely LIME, SHAP, and Anchors, on different types of real-world healthcare data, and shows that LIME achieves the lowest performance for the identity metric and the highest performance for the separability metric across all datasets included in this study.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
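
LIME is available as the lime Python package; the sketch below shows its tabular explainer on synthetic data, with the dataset, model, and feature names as placeholder assumptions.

  # Minimal LIME sketch: fit a sparse linear surrogate on perturbed samples
  # around one instance of a toy classifier.
  import numpy as np
  from lime.lime_tabular import LimeTabularExplainer
  from sklearn.ensemble import RandomForestClassifier

  X = np.random.default_rng(0).normal(size=(200, 4))
  y = (X[:, 0] > 0).astype(int)
  model = RandomForestClassifier().fit(X, y)

  explainer = LimeTabularExplainer(
      X, feature_names=[f"f{i}" for i in range(4)],
      class_names=["neg", "pos"], mode="classification")

  exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
  print(exp.as_list())   # (feature condition, local weight) pairs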

Learning Bayesian networks from big data with greedy search: computational complexity and efficient implementation

TLDR
It is found that using predictive instead of in-sample goodness-of-fit scores improves speed, and it is confirmed that this also improves the accuracy of network reconstruction, as previously observed by Chickering and Heckerman.

Reliable Decision Support using Counterfactual Models

TLDR
This work proposes using a different learning objective that predicts counterfactuals instead of predicting outcomes under an existing action policy as in supervised learning, and introduces the Counterfactual Gaussian Process (CGP) to support decision-making in temporal settings.