Corpus ID: 3911355

Explanation and Justification in Machine Learning: A Survey

@inproceedings{Biran2017ExplanationAJ,
  title={Explanation and Justification in Machine Learning: A Survey},
  author={Or Biran and Courtenay V. Cotton},
  year={2017}
}
We present a survey of the research concerning explanation and justification in the Machine Learning literature and several adjacent fields. Within Machine Learning, we differentiate between two main branches of current research: interpretable models, and prediction interpretation and justification. 
Combinatorial Methods for Explainable AI
This short paper introduces an approach to producing explanations or justifications of decisions made by artificial intelligence and machine learning (AI/ML) systems, using methods derived from fault…
Building More Explainable Artificial Intelligence With Argumentation
This paper proposes an argumentation-based approach to explainable AI, which has the potential to generate more comprehensive explanations than existing approaches.
Weight of Evidence as a Basis for Human-Oriented Explanations
This work takes a step towards reconciling machine explanations with those that humans produce and prefer by taking inspiration from the study of explanation in philosophy, cognitive science, and the social sciences.
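This line of work builds on I. J. Good's notion of weight of evidence. For reference (the paper's exact formulation may differ), the classical definition scores how strongly a piece of evidence e favors a hypothesis h over its negation:

\[
\mathrm{woe}(h : e) \;=\; \log \frac{P(e \mid h)}{P(e \mid \neg h)}
\]

Positive weights favor h, and weights from conditionally independent pieces of evidence add up, which is what makes the quantity a natural currency for human-oriented explanations.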
Some Insights Towards a Unified Semantic Representation of Explanation for eXplainable Artificial Intelligence
  • Ismail Baaj, J. Poli, Wassila Ouerdane
  • Computer Science
  • Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019)
  • 2019
This paper focuses on a semantic representation of the content of an explanation that could be common to any kind of XAI, investigates knowledge representations, and discusses the benefits of conceptual graph structures as a basis for representing explanations in AI.
A Survey on Explainability in Machine Reading Comprehension
This paper presents a systematic review of benchmarks and approaches for explainability in Machine Reading Comprehension (MRC), and covers the evaluation methodologies used to assess the performance of explainable systems.
Towards making NLG a voice for interpretable Machine Learning
It is shown that the self-reported rating of an NLG explanation was higher than that of a non-NLG explanation, but when tested for comprehension the results were less clear-cut, showing the need for further studies to uncover the factors responsible for high-quality NLG explanations.
Exploiting Language Instructions for Interpretable and Compositional Reinforcement Learning
This work attempts to interpret the latent space of an RL agent to identify its current objective within a complex language instruction, and shows that the classification process causes changes in the hidden states that make them more easily interpretable, but also causes a shift in zero-shot performance on novel instructions.
From Shallow to Deep Interactions Between Knowledge Representation, Reasoning and Machine Learning (Kay R. Amel group)
This paper proposes a tentative and original survey of meeting points between Knowledge Representation and Reasoning (KRR) and Machine Learning (ML), two areas which have been developing quite…
Natural Language Generation Challenges for Explainable AI
  • E. Reiter
  • Computer Science
  • Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019)
  • 2019
This paper discusses the challenges of producing good quality explanations of artificial intelligence reasoning from a Natural Language Generation (NLG) perspective, and highlights four specific NLG-for-XAI research challenges.

References

SHOWING 1-10 OF 99 REFERENCES
Varieties of Justification in Machine Learning
  • D. Corfield
  • Mathematics, Computer Science
  • Minds and Machines
  • 2010
Forms of justification for inductive machine learning techniques are discussed and classified into four types. This is done with a view to introducing some of these techniques and their justificatory…
A review of explanation methods for Bayesian networks
One of the key factors for the acceptance of expert systems in real-world domains is the ability to explain their reasoning (Buchanan & Shortliffe, 1984; Henrion & Druzdzel, 1990). This paper describes…
Human-Centric Justification of Machine Learning Predictions
This work proposes a novel approach to producing justifications that is geared towards users without machine learning expertise, focusing on domain knowledge and on human reasoning, and utilizing natural language generation.
Explanation and Reliability of Individual Predictions
A general methodology for explaining individual predictions as well as for estimating their reliability, independent of the underlying model, is developed.
Learning theory analysis for association rules and sequential event prediction
A theoretical analysis is presented for prediction algorithms based on association rules, introducing a problem for which rules are particularly natural, called "sequential event prediction".
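As background for the rule-based setting (the paper analyzes variants of these statistics; the definitions here are the textbook ones), an association rule a ⇒ b over a set of N transactions is scored by its support and confidence:

\[
\mathrm{supp}(a \Rightarrow b) = \frac{|\{t : a \cup b \subseteq t\}|}{N},
\qquad
\mathrm{conf}(a \Rightarrow b) = \frac{|\{t : a \cup b \subseteq t\}|}{|\{t : a \subseteq t\}|}
\]

Rules make natural explanations precisely because these quantities can be read directly off the data.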
Deriving Explanations and Implications for Constraint Satisfaction Problems
It is shown that consistency methods can be used to generate inferences that support both functions; explanations take the form of trees that show the basis for assignments and deletions in terms of previous selections.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
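The local-surrogate idea is compact enough to sketch. The following is a minimal illustration of the technique under simplifying assumptions (dense numeric features, Gaussian perturbations, a ridge surrogate), not LIME's actual implementation, and the function name is ours:

import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_fn, x, n_samples=1000, width=0.75, seed=0):
    # Probe the black-box model in a neighborhood of the instance x.
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=width, size=(n_samples, x.shape[0]))
    y = predict_fn(X)
    # Weight each perturbed sample by its proximity to x (RBF kernel).
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * width ** 2))
    # Fit a weighted linear model; its coefficients are the local explanation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X, y, sample_weight=w)
    return surrogate.coef_

# e.g., for a scikit-learn classifier clf and a 1-D instance vector x:
#   weights = local_surrogate(lambda X: clf.predict_proba(X)[:, 1], x)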
Gaining insight through case-based explanation
A knowledge-light approach to case-based explanation is examined, which works by selecting cases based on explanation utility and offers insights into the effects of feature-value differences.
Explaining Classifications For Individual Instances
It is demonstrated that the generated explanations closely follow the learned models, and a visualization technique is presented that shows the utility of the approach and enables the comparison of different prediction methods.
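One way to make this kind of instance-level explanation concrete (a standard decomposition in this literature, given here as background rather than as this paper's exact formula) is to score each feature by the change in the model's output when that feature's value is ignored:

\[
\mathrm{contrib}_i(x) \;=\; p(y \mid x) \;-\; p(y \mid x \setminus A_i)
\]

where x \ A_i denotes the instance with the value of feature A_i marginalized out; positive contributions mark features whose known values push the model towards class y.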
Rationalizing Neural Predictions
The approach combines two modular components, generator and encoder, which are trained to operate well together: the generator specifies a distribution over text fragments as candidate rationales, and these are passed through the encoder for prediction.
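A minimal sketch of the generator-encoder split, assuming PyTorch and toy LSTM components (class names and sizes are ours; the original work additionally trains the pair jointly with a sparsity regularizer via REINFORCE, omitted here):

import torch
import torch.nn as nn

class Generator(nn.Module):
    # Scores each token; sampling the scores yields a hard rationale mask.
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, tokens):
        h, _ = self.rnn(self.emb(tokens))
        probs = torch.sigmoid(self.score(h)).squeeze(-1)  # per-token selection probability
        return probs, torch.bernoulli(probs)              # 0/1 rationale mask

class Encoder(nn.Module):
    # Predicts the target from the selected tokens only.
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, 1)

    def forward(self, tokens, mask):
        x = self.emb(tokens) * mask.unsqueeze(-1)  # zero out unselected tokens
        h, _ = self.rnn(x)
        return self.out(h[:, -1])                  # predict from the final state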