# Explainable Empirical Risk Minimization

@article{Jung2020ExplainableER, title={Explainable Empirical Risk Minimization}, author={A. Jung}, journal={ArXiv}, year={2020}, volume={abs/2009.01492} }

The widespread use of modern machine learning methods in decision making crucially depends on their interpretability or explainability. Human users (decision makers) of machine learning methods are often interested in more than accurate predictions or projections: as decision-makers, they also need a convincing answer (or explanation) to the question of why a particular prediction was delivered. Explainable machine learning might be a legal requirement when used for…

#### 3 Citations

Interpretable Machine Learning with an Ensemble of Gradient Boosting Machines

- Computer Science, Mathematics
- Knowl. Based Syst.
- 2021

A method for the local and global interpretation of a black-box model on the basis of the well-known generalized additive models is proposed, which provides feature weights in explicit form and is simple to train.

An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data

- Computer Science, Mathematics
- ArXiv
- 2021

An imprecise SHAP, a modification of the original SHAP, is proposed for cases when the class probability distributions are imprecise and represented by sets of distributions; a new approach for computing the marginal contribution of a feature fulfils the important efficiency property of Shapley values.
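
The efficiency property mentioned in this blurb says that the Shapley values of all features sum exactly to the difference between the model's output on the full feature set and its baseline output. As a minimal illustration (not the paper's imprecise variant — the set function `v` and its effect values below are hypothetical), the exact Shapley values of a tiny two-feature value function can be computed directly from their definition:

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values for a set function `value` over n features."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy value function: individual effects plus one shared interaction term.
effects = {0: 1.0, 1: 2.0}
def v(S):
    total = sum(effects[j] for j in S)
    if {0, 1} <= S:
        total += 0.5  # interaction, split equally by Shapley symmetry
    return total

phi = shapley_values(v, 2)
# Efficiency: phi[0] + phi[1] == v({0, 1}) - v(set())
```

Here the interaction term 0.5 is split equally between the two features by symmetry, so `phi` is `[1.25, 2.25]` and the contributions sum to `v({0, 1}) - v(set()) = 3.5`.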

Ensembles of Random SHAPs

- Computer Science, Mathematics
- ArXiv
- 2021

Ensemble-based modifications of the well-known SHapley Additive exPlanations (SHAP) method for the local explanation of a black-box model are proposed. The modifications aim to simplify SHAP, which is…

#### References

Showing 1–10 of 21 references

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

- Computer Science, Mathematics
- HLT-NAACL Demos
- 2016

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
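
The local-surrogate idea this blurb describes can be sketched in a few lines: perturb the instance, query the black box, and fit a proximity-weighted linear model whose coefficients serve as the explanation. This is a simplified sketch of the general approach, not the LIME library's API; the model `black_box`, the Gaussian sampling, and the kernel width are all illustrative assumptions.

```python
import numpy as np

# Hypothetical black-box model: any function mapping feature rows to scores.
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def local_surrogate(x, model, n_samples=500, scale=0.1, seed=0):
    """Fit a proximity-weighted linear model around x (LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # Perturb the instance to probe the model's local behaviour.
    Z = x + scale * rng.standard_normal((n_samples, x.size))
    y = model(Z)
    # Weight samples by proximity to x via a Gaussian kernel.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares: the coefficients are the local explanation.
    A = np.hstack([np.ones((n_samples, 1)), Z])
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return coef  # [intercept, weight for feature 0, weight for feature 1, ...]
```

For the toy model above, the recovered feature weights approximate the local gradient of the black box at `x`, which is what makes the surrogate's coefficients readable as feature importances.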

On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation

- Computer Science, Medicine
- PloS one
- 2015

This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers, introducing a methodology that allows one to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks.

Machine Learning: Basic Principles

- Mathematics
- 2018

After formalizing the main building blocks of an ML problem, some popular algorithmic design patterns for ML methods are discussed and some main concepts of machine learning are introduced.

Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation

- Computer Science
- 2017

The problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless.

The Elements of Statistical Learning

- Computer Science, Mathematics
- Technometrics
- 2003

Chapter 11 includes more case studies in other areas, ranging from manufacturing to marketing research, and a detailed comparison with other diagnostic tools, such as logistic regression and tree-based methods.

Methods for interpreting and understanding deep neural networks

- Computer Science, Mathematics
- Digit. Signal Process.
- 2018

The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which the author provides theory, recommendations, and tricks for making the most efficient use of it on real data.

The ethics of algorithms: Mapping the debate

- Computer Science
- 2016

This paper makes three contributions to clarify the ethical importance of algorithmic mediation, including a prescriptive map to organise the debate, and assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

Pattern Recognition and Machine Learning

- Computer Science, Mathematics
- Technometrics
- 2007

This book covers a broad range of topics for regular factorial designs, presents all of the material in a very mathematical fashion, and will surely become an invaluable resource for researchers and graduate students doing research in the design of factorial experiments.

Toward Human-Understandable, Explainable AI

- Computer Science
- Computer
- 2018

The author introduces XAI concepts and gives an overview of areas in need of further exploration, such as type-2 fuzzy logic systems, to ensure such systems can be fully understood and analyzed by the lay user.

Components of Machine Learning: Binding Bits and FLOPS

- Computer Science
- ArXiv
- 2019

The mathematical structure of the three main components of ML (data, hypothesis space, and loss function) is reviewed to discuss intrinsic trade-offs between statistical and computational properties of ML methods.