An Information-Theoretic Approach to Personalized Explainable Machine Learning

@article{Jung2020AnIA,
  title={An Information-Theoretic Approach to Personalized Explainable Machine Learning},
  author={Alexander Jung and Pedro Henrique Juliano Nardelli},
  journal={IEEE Signal Processing Letters},
  year={2020},
  volume={27},
  pages={825-829}
}
Automated decision making is used routinely throughout our everyday life. Recommender systems decide which jobs, movies, or other user profiles might be interesting to us. Spell checkers help us to make good use of language. Fraud detection systems decide if a credit card transaction should be verified more closely. Many of these decision making systems use machine learning methods that fit complex models to massive datasets. The successful deployment of machine learning (ML) methods to many…
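The abstract shown above is truncated, but the title suggests a criterion along the following lines (a hedged reading; the symbols u for a user summary, \hat{y} for the prediction, and e for the explanation are assumed notation, not taken from the excerpt): an explanation is useful to a particular user to the extent that it reduces that user's remaining uncertainty about the prediction, which can be quantified by the conditional mutual information

    I(e; \hat{y} \mid u),

and a personalized explanation is then one that (approximately) maximizes this quantity over an admissible set of candidate explanations.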

Explainable Empirical Risk Minimization

This paper regularizes an arbitrary hypothesis space using a personalized measure for the explainability of a particular predictor to learn predictors that are intrinsically explainable to a specific user.
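Concretely, the summary suggests an objective of roughly the following form (a sketch under assumed notation, not taken from the paper itself):

    \hat{h} = \arg\min_{h \in \mathcal{H}} \; \frac{1}{m} \sum_{i=1}^{m} L\big(y^{(i)}, h(x^{(i)})\big) + \lambda \, C_u(h),

where C_u(h) is a personalized (non-)explainability penalty for a specific user u and the regularization parameter \lambda trades predictive accuracy against explainability.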

Explainable Artificial Intelligence Approaches: A Survey

This work demonstrates popular XAI methods with a mutual case study/task, provides meaningful insight on quantifying explainability, and recommends paths towards responsible or human-centered AI using XAI as a medium to understand, compare, and correlate the competitive advantages of popular XAI methods.

Predicting Common Audiological Functional Parameters (CAFPAs) as Interpretable Intermediate Representation in a Clinical Decision-Support System for Audiology

This study aims at predicting the expert-generated CAFPA labels using three different machine learning models, namely lasso regression, elastic nets, and random forests, and indicates adequate prediction of the ten distinct CAFPAs.

Machine Learning Explainability from an Information-theoretic Perspective

Using information theory, this work represents finding the optimal explainer as a rate-distortion optimization problem that is compatible with post-hoc gradient-based interpretability methods.
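Read as a generic rate-distortion template (a hedged sketch, not necessarily the paper's exact formulation), the explainer is a stochastic map p(e \mid x) chosen to balance fidelity to the model f being explained against the complexity of the explanation,

    \min_{p(e \mid x)} \; \mathbb{E}\big[ d\big(f(x), e\big) \big] + \beta \, I(X; E),

where d is a distortion measure and \beta sets the trade-off between the two terms.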

A Multi-Dimensional Conceptualization Framework for Personalized Explanations in Recommender Systems

This work presents a multi-dimensional conceptualization framework for personalized explanations in RS, based on five dimensions, and uses this framework to systematically analyze and compare studies on personalized explainable recommendation.

A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks

The review finds that XAI methods are mostly developed for safety-critical domains worldwide, that deep learning and ensemble models are exploited more than other types of AI/ML models, that visual explanations are more acceptable to end users, and that robust evaluation metrics are being developed to assess the quality of explanations.

Occlusion-Based Explanations in Deep Recurrent Models for Biomedical Signals

This paper proposes a model-agnostic explanation method, based on occlusion, that learns the input's influence on the model predictions; it specifically targets problems involving the predictive analysis of time-series data and the models typically used for data of this nature, i.e., recurrent neural networks.
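The occlusion idea itself is simple to sketch: hide one segment of the input at a time and record how much the model output changes. The snippet below is a minimal, illustrative version for a 1-D time series; the model interface and parameter names are assumptions, not the paper's API.

```python
import numpy as np

def occlusion_importance(model, x, window=10, baseline=0.0):
    """Model-agnostic occlusion scores for a 1-D time series.

    model    : callable mapping an array of shape (T,) to a scalar score
    x        : input time series of shape (T,)
    window   : length of the segment hidden at each step
    baseline : value used to replace the hidden segment
    """
    reference = model(x)
    scores = np.zeros_like(x, dtype=float)
    for start in range(0, len(x), window):
        occluded = x.copy()
        occluded[start:start + window] = baseline
        # importance of a segment = drop in the output when that segment is hidden
        scores[start:start + window] = reference - model(occluded)
    return scores

# toy usage with a stand-in "model": the mean of the last 20 samples
x = np.sin(np.linspace(0, 8 * np.pi, 200))
print(occlusion_importance(lambda s: s[-20:].mean(), x, window=25))
```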

On-demand Personalized Explanation for Transparent Recommendation

A transparent Recommendation and Interest Modeling Application that provides on-demand personalized explanations with varying levels of detail to meet the demands of different types of end-users, and offers suggestions for the design and appropriate use of personalized explanation interfaces in recommender systems.

Framework for the Identification of Rare Events via Machine Learning and IoT Networks

This paper presents an industrial cyber-physical system (CPS), based on the Internet of Things (IoT), that is designed to detect rare events using machine learning, and introduces the solution to be developed by the FIREMAN consortium.

References

“Why Should I Trust You?”: Explaining the Predictions of Any Classifier

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
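The local-surrogate idea can be sketched in a few lines: perturb the instance, query the black box, weight the perturbations by proximity, and fit a weighted linear model. This is an illustrative reduction of the general idea, not the reference LIME implementation; the function and parameter names are invented for the example.

```python
import numpy as np

def local_surrogate(model, x, n_samples=500, sigma=0.5, noise=0.3, seed=0):
    """Fit a weighted linear surrogate around the instance x.

    model : callable mapping an array of shape (n, d) to scores of shape (n,)
    x     : the instance to explain, shape (d,)
    """
    rng = np.random.default_rng(seed)
    # sample perturbations around x and query the black box
    Z = x + noise * rng.standard_normal((n_samples, len(x)))
    y = model(Z)
    # proximity weights: nearby perturbations matter more
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))
    # weighted least squares with an intercept column
    A = np.hstack([np.ones((n_samples, 1)), Z])
    W = np.diag(w)
    coef = np.linalg.solve(A.T @ W @ A + 1e-6 * np.eye(A.shape[1]), A.T @ W @ y)
    return coef[1:]   # per-feature local effect estimates

# toy black box: a smooth nonlinear score over two features
f = lambda Z: np.sin(Z[:, 0]) + Z[:, 1] ** 2
print(local_surrogate(f, np.array([0.3, -1.0])))
```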

On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation

This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers, introducing a methodology that makes it possible to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks.
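For a plain fully connected ReLU network, the epsilon variant of layer-wise relevance propagation can be sketched as a backward redistribution pass. The code below is a minimal illustration under assumed parameter shapes, not the authors' implementation; the original rules also cover convolutional and Bag-of-Words models.

```python
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """Epsilon-rule layer-wise relevance propagation for a small ReLU MLP.

    weights, biases : per-layer parameters, W_l of shape (d_l, d_{l+1})
    x               : input vector of shape (d_0,)
    """
    # forward pass, keeping every layer's activations (ReLU on hidden layers)
    activations = [x]
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(0.0, x)
        activations.append(x)

    # backward pass: redistribute the output relevance layer by layer
    relevance = activations[-1].copy()
    for W, b, a in zip(reversed(weights), reversed(biases), reversed(activations[:-1])):
        z = a @ W + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer avoids division by zero
        s = relevance / z                            # relevance per pre-activation unit
        relevance = a * (s @ W.T)                    # pull relevance back to this layer's inputs
    return relevance

# toy usage on a random 4-8-1 network
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 8)), rng.standard_normal((8, 1))]
bs = [np.zeros(8), np.zeros(1)]
print(lrp_epsilon(Ws, bs, np.array([1.0, -2.0, 0.5, 3.0])))
```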

Explore, exploit, and explain: personalizing explainable recommendations with bandits

This work provides the first method that combines bandits and explanations in a principled manner and is able to jointly learn which explanations each user responds to; learn the best content to recommend for each user; and balance exploration with exploitation to deal with uncertainty.
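As a rough illustration of jointly learning content and explanations under uncertainty, the toy sketch below runs an epsilon-greedy bandit over (item, explanation-style) pairs. It is only a caricature of the setting, not the paper's algorithm; the class, item, and style names are invented for the example.

```python
import random
from collections import defaultdict

class EpsilonGreedyExplainedRecommender:
    """Jointly pick an item and an explanation style with an epsilon-greedy bandit."""

    def __init__(self, items, explanation_styles, epsilon=0.1):
        self.arms = [(i, e) for i in items for e in explanation_styles]
        self.epsilon = epsilon
        self.counts = defaultdict(int)
        self.values = defaultdict(float)

    def select(self):
        # explore with probability epsilon, otherwise exploit the best arm so far
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=lambda arm: self.values[arm])

    def update(self, arm, reward):
        # incremental mean of the observed rewards (e.g. clicks) for this arm
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = EpsilonGreedyExplainedRecommender(["song_a", "song_b"], ["similar_artist", "popular_now"])
arm = bandit.select()
bandit.update(arm, reward=1.0)
```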

Learning to Explain: An Information-Theoretic Perspective on Model Interpretation

An efficient variational approximation to the mutual information is developed, and the effectiveness of the method is shown on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.
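The variational approximation referred to here is the standard lower bound on mutual information: for a subset of features X_S chosen by the explainer,

    I(X_S; Y) \;\ge\; \mathbb{E}\big[\log q_\alpha(Y \mid X_S)\big] + H(Y),

so training a decoder q_\alpha to predict the label from the selected features maximizes a lower bound on the mutual information between the selection and the label (the entropy term does not depend on the explainer).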

Anchors: High-Precision Model-Agnostic Explanations

We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, "sufficient" conditions for predictions.

Statistical Learning with Sparsity: The Lasso and Generalizations

Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.
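As a quick, self-contained illustration of the sparsity theme, the snippet below fits a lasso to synthetic data in which only three of twenty features carry signal (the data and parameter choices are invented for the example).

```python
import numpy as np
from sklearn.linear_model import Lasso

# synthetic regression problem: only the first three features matter
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
true_coef = np.zeros(20)
true_coef[:3] = [2.0, -1.5, 0.8]
y = X @ true_coef + 0.1 * rng.standard_normal(100)

# the l1 penalty drives most coefficients exactly to zero
model = Lasso(alpha=0.1).fit(X, y)
print(np.nonzero(model.coef_)[0])   # indices of the features the lasso kept
```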

TaxiRec: Recommending Road Clusters to Taxi Drivers Using Ranking-Based Extreme Learning Machines

TaxiRec is proposed, a framework for evaluating and discovering the passenger-finding potential of road clusters, which is incorporated into a recommender system that helps taxi drivers seek passengers and can be used with a training cluster selection algorithm to provide road cluster recommendations when taxi trajectory data is incomplete or unavailable.

Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation

The problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless.

The Elements of Statistical Learning

Chapter 11 includes more case studies in other areas, ranging from manufacturing to marketing research, and a detailed comparison with other diagnostic tools, such as logistic regression and tree-based methods.

J. Chen, L. Song, M. J. Wainwright, and M. I. Jordan, "Learning to explain: An information-theoretic perspective on model interpretation," in Proc. 35th Int. Conf. Mach. Learn., Stockholm, Sweden, 2018.