Corpus ID: 221470230

Explainable Empirical Risk Minimization

  • A. Jung
  • Published 2020
  • Computer Science, Mathematics
  • ArXiv
The widespread use of modern machine learning methods in decision making crucially depends on their interpretability or explainability. The human users (decision makers) of machine learning methods are often interested in more than accurate predictions or projections: as decision-makers, they also need a convincing answer (or explanation) to the question of why a particular prediction was delivered. Explainable machine learning might be a legal requirement when used for…
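The empirical risk minimization principle that the paper builds on picks the hypothesis with the smallest average loss on the training set. A minimal illustrative sketch, assuming a linear hypothesis space with squared-error loss (the `erm_linear` helper is hypothetical, not the paper's formulation):

```python
import numpy as np

def erm_linear(X, y):
    """Empirical risk minimization over linear hypotheses h(x) = w @ x
    with squared-error loss: the minimizer of the empirical risk
    (1/n) * sum((y_i - w @ x_i)**2) is the least-squares solution."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Toy data generated exactly by y = 2 * x0, so ERM recovers w = [2].
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
w = erm_linear(X, y)
```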


Interpretable Machine Learning with an Ensemble of Gradient Boosting Machines
A method for the local and global interpretation of a black-box model on the basis of the well-known generalized additive models is proposed, which provides weights of features in explicit form and is simple to train.
An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data
An imprecise SHAP, a modification of the original SHAP, is proposed for cases when the class probability distributions are imprecise and represented by sets of distributions; a new approach for computing the marginal contribution of a feature fulfils the important efficiency property of Shapley values.
Ensembles of Random SHAPs
Ensemble-based modifications of the well-known SHapley Additive exPlanations (SHAP) method for the local explanation of a black-box model are proposed. The modifications aim to simplify SHAP, which is…
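The Shapley values that the SHAP-based methods above build on admit an exact (exponential-time) computation for small feature sets: a feature's value is the weighted average of its marginal contribution over all coalitions of the other features. A self-contained sketch (the coalition `value` function and names are illustrative, not taken from the papers above):

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values for n players and a coalition value function
    `value(frozenset) -> float`. Player i's value is the weighted average
    of its marginal contribution value(S | {i}) - value(S) over all
    coalitions S not containing i."""
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                S = frozenset(subset)
                weight = (factorial(len(S)) * factorial(n - len(S) - 1)
                          / factorial(n))
                phi[i] += weight * (value(S | {i}) - value(S))
    return phi

# Additive game: a coalition's value is the sum of its members' worths,
# so each player's Shapley value equals its own worth (efficiency).
worth = [1.0, 2.0, 3.0]
phi = shapley_values(lambda S: sum(worth[p] for p in S), 3)
```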


"Why Should I Trust You?": Explaining the Predictions of Any Classifier
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
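The local-surrogate idea behind LIME can be sketched in a few lines: sample perturbations around the instance, weight them by a proximity kernel, and fit a weighted linear model whose coefficients serve as the explanation. This is an illustrative sketch, not the authors' implementation; the sampling scale, kernel width, and the `lime_explain` name are assumptions:

```python
import numpy as np

def lime_explain(predict, x, n_samples=500, width=0.75, seed=0):
    """Sketch of LIME's core idea: sample points near x, weight them by
    an exponential proximity kernel, and fit a weighted linear surrogate
    whose coefficients act as the local explanation."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict(Z)
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / width ** 2)                  # proximity weights
    A = np.hstack([Z, np.ones((n_samples, 1))])   # add intercept column
    W = np.sqrt(w)[:, None]                       # weighted least squares
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[:-1]  # per-feature weights of the local surrogate

# Black box f(x) = 3*x0 - 2*x1 is linear, so the surrogate recovers it.
f = lambda Z: 3 * Z[:, 0] - 2 * Z[:, 1]
coef = lime_explain(f, np.array([0.5, -0.5]))
```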
On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers, introducing a methodology that allows visualizing the contributions of single pixels to predictions for kernel-based classifiers over bag-of-words features and for multilayered neural networks.
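Layer-wise relevance propagation redistributes the network's output score backwards, layer by layer, in proportion to each unit's contribution. A minimal sketch of the epsilon rule for a bias-free dense ReLU network, a simplification for illustration rather than the paper's full method:

```python
import numpy as np

def lrp_dense(weights, x, eps=1e-6):
    """Sketch of layer-wise relevance propagation (epsilon rule) for a
    bias-free ReLU network: output relevance is redistributed to inputs
    in proportion to each contribution z_ij = a_i * w_ij."""
    # Forward pass, storing the activations of every layer.
    activations = [x]
    a = x
    for l, W in enumerate(weights):
        z = a @ W
        a = np.maximum(z, 0) if l < len(weights) - 1 else z  # linear output
        activations.append(a)
    # Backward pass: R_i = a_i * sum_j (w_ij / (z_j + eps)) * R_j
    R = activations[-1].copy()
    for W, a in zip(reversed(weights), reversed(activations[:-1])):
        z = a @ W
        s = R / (z + eps * np.sign(z))  # stabilized redistribution
        R = a * (s @ W.T)
    return R

# One linear layer y = 2*x0 + 1*x1 at x = (1, 1): the output relevance
# of 3 splits into contributions of roughly 2 and 1 (conservation).
W = np.array([[2.0], [1.0]])
R = lrp_dense([W], np.array([1.0, 1.0]))
```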
Machine Learning: Basic Principles
After formalizing the main building blocks of an ML problem, some popular algorithmic design patterns for ML methods are discussed and some main concepts of machine learning are introduced.
Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation
These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless.
The Elements of Statistical Learning
  • E. Ziegel
  • Computer Science, Mathematics
  • Technometrics
  • 2003
Chapter 11 includes more case studies in other areas, ranging from manufacturing to marketing research, and a detailed comparison with other diagnostic tools, such as logistic regression and tree-based methods.
Methods for interpreting and understanding deep neural networks
The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which the author provides theory, recommendations, and tricks, to make most efficient use of it on real data.
The ethics of algorithms: Mapping the debate
This paper makes three contributions to clarify the ethical importance of algorithmic mediation, including a prescriptive map to organise the debate, and assesses the available literature to identify areas requiring further work in developing the ethics of algorithms.
Pattern Recognition and Machine Learning
  • R. Neal
  • Computer Science, Mathematics
  • Technometrics
  • 2007
This book covers a broad range of topics for regular factorial designs, presents all of the material in a very mathematical fashion, and will surely become an invaluable resource for researchers and graduate students doing research in the design of factorial experiments.
Toward Human-Understandable, Explainable AI
The author introduces XAI concepts and gives an overview of areas in need of further exploration, such as type-2 fuzzy logic systems, to ensure such systems can be fully understood and analyzed by the lay user.
Components of Machine Learning: Binding Bits and FLOPS
The mathematical structure of these three components, data, hypothesis space, and loss function, is reviewed to discuss intrinsic trade-offs between statistical and computational properties of ML methods.