Corpus ID: 59604430

XOC: Explainable Observer-Classifier for Explainable Binary Decisions

@article{Alaniz2019XOCEO,
  title={XOC: Explainable Observer-Classifier for Explainable Binary Decisions},
  author={Stephan Alaniz and Zeynep Akata},
  journal={ArXiv},
  year={2019},
  volume={abs/1902.01780}
}
Explanations help develop a better understanding of the rationale behind the predictions of a deep neural network and improve trust. We propose an explainable observer-classifier framework that exposes the steps taken through the decision-making process in a transparent manner. Instead of assigning a label to an image in a single step, our model makes iterative binary sub-decisions, and as a byproduct reveals a decision tree in the form of an introspective explanation. In addition, our model… 
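
The abstract describes the mechanism only at a high level. As a minimal illustrative sketch, assuming a hypothetical observer that emits one binary sub-decision per step (the names `iterative_binary_classify` and `observe_bit` are not from the paper), the snippet below shows how a sequence of bits can both select a label and trace a root-to-leaf path through a binary tree over the candidate labels, which is the kind of introspective explanation the abstract refers to.

```python
# Minimal sketch of iterative binary sub-decisions (illustrative only, not the
# authors' implementation). A hypothetical observer emits one bit per step;
# each bit halves the remaining candidate labels, so the bit sequence traces
# a root-to-leaf path in a binary decision tree over the label set.
from typing import Callable, List, Sequence, Tuple


def iterative_binary_classify(
    image,
    labels: Sequence[str],
    observe_bit: Callable[[object, Sequence[str]], int],
) -> Tuple[str, List[int]]:
    """Return the predicted label and the bit path that explains it."""
    candidates = list(labels)
    path: List[int] = []
    while len(candidates) > 1:
        mid = len(candidates) // 2
        bit = observe_bit(image, candidates)  # binary sub-decision in {0, 1}
        path.append(bit)
        # bit 0 keeps the left half of the candidates, bit 1 keeps the right half
        candidates = candidates[:mid] if bit == 0 else candidates[mid:]
    return candidates[0], path


if __name__ == "__main__":
    # Toy observer that always answers 0, so the path follows left branches.
    label, path = iterative_binary_classify(
        image=None,
        labels=["cat", "dog", "car", "truck"],
        observe_bit=lambda img, cands: 0,
    )
    print(label, path)  # -> cat [0, 0]
```

The returned `path` is what makes the prediction inspectable: each bit corresponds to one internal node of the induced decision tree.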

Citations

EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction
TLDR
A new self-interpretable model that performs output prediction and simultaneously provides an explanation in terms of the presence of particular concepts in the input, based on a low-dimensional binary representation of the input.
Rationalization through Concepts
TLDR
Experiments show that ConRAT is the first to generate concepts that align with human rationalization while using only the overall label, and it outperforms state-of-the-art methods trained on each aspect label independently.
Making CNNs Interpretable by Building Dynamic Sequential Decision Forests with Top-down Hierarchy Learning
TLDR
Experimental results show that dDSDF not only achieves higher classification accuracy than its counterpart, i.e., the original CNN, but also has much better interpretability: qualitatively it produces plausible hierarchies, and quantitatively it leads to more precise saliency maps.
Intrinsically Interpretable Image Recognition with Neural Prototype Trees
TLDR
The Neural Prototype Tree (ProtoTree), an intrinsically interpretable deep learning method for fine-grained image recognition that combines prototype learning with decision trees, and thus results in a globally interpretable model by design.
NBDT: Neural-Backed Decision Trees
TLDR
This work proposes Neural-Backed Decision Trees (NBDTs), modified hierarchical classifiers that use trees constructed in weight space, achieving both interpretability and neural network accuracy and matching state-of-the-art neural networks on CIFAR10, CIFAR100, TinyImageNet, and ImageNet.
NBDT: Neural-Backed Decision Tree
TLDR
This work improves accuracy and interpretability using Neural-Backed Decision Trees, a differentiable sequence of decisions and a surrogate loss that forces the model to learn high-level concepts and lessens reliance on highly uncertain decisions; a generic soft-routing sketch for this family of tree-structured models follows the citation list.
Measuring and Improving BERT’s Mathematical Abilities by Predicting the Order of Reasoning.
TLDR
This work fine-tunes BERT on AQuA-RAT, a popular dataset of word math problems, and proposes new pretext tasks for learning mathematical rules that achieve significantly better outcomes than data-driven baselines and are even on par with more tailored models.
Conservative Q-Improvement: Reinforcement Learning for an Interpretable Decision-Tree Policy
TLDR
This work proposes a novel algorithm that only increases tree size when the estimated discounted future reward of the overall policy would increase by a sufficient amount, and shows that its performance is comparable to or better than that of traditional tree-based approaches while yielding a more succinct policy.
Visual Understanding through Natural Language
TLDR
Image captioning, a fundamental task at the intersection of language and vision in which a system receives an image as input and outputs a natural language sentence that describes the image, is considered, along with how systems that provide natural language text about an image can be used to help humans better understand an AI system.
...
...
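
Several of the citing works above (ProtoTree, NBDT, dDSDF) combine neural features with tree-structured classifiers by making the routing decisions differentiable. The sketch below is a generic soft binary decision tree over feature vectors, not the implementation of any of the papers listed: each internal node routes an input left or right with a sigmoid probability, and the prediction is the probability-weighted mixture of the leaf class distributions. All array shapes and names are assumptions made for illustration.

```python
import numpy as np


def soft_tree_predict(x, weights, biases, leaf_dists):
    """Probability-weighted mixture of leaf class distributions.

    x          : (d,) feature vector (e.g. CNN features)
    weights    : (n_internal, d) routing weights, nodes in breadth-first order
    biases     : (n_internal,) routing biases
    leaf_dists : (n_leaves, n_classes) class distribution stored at each leaf
    """
    n_internal = weights.shape[0]
    n_nodes = n_internal + leaf_dists.shape[0]
    reach = np.zeros(n_nodes)          # probability of reaching each node
    reach[0] = 1.0                     # root is always reached
    for i in range(n_internal):
        p_right = 1.0 / (1.0 + np.exp(-(weights[i] @ x + biases[i])))
        reach[2 * i + 1] += reach[i] * (1.0 - p_right)  # left child
        reach[2 * i + 2] += reach[i] * p_right          # right child
    leaf_reach = reach[n_internal:]    # leaves occupy the last positions
    return leaf_reach @ leaf_dists     # (n_classes,) soft prediction


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, depth, n_classes = 8, 2, 3
    n_internal, n_leaves = 2 ** depth - 1, 2 ** depth
    w = rng.normal(size=(n_internal, d))
    b = rng.normal(size=n_internal)
    leaves = rng.dirichlet(np.ones(n_classes), size=n_leaves)
    print(soft_tree_predict(rng.normal(size=d), w, b, leaves))
```

Because every routing probability is differentiable, such a tree can be trained jointly with the feature extractor by gradient descent, which is the shared idea behind the tree-based citing papers above.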

References

SHOWING 1-10 OF 47 REFERENCES
Generating Visual Explanations
TLDR
A new model is proposed that focuses on the discriminating properties of the visible object, jointly predicts a class label and explains why that label is appropriate for the image, and generates sentences that realize a global sentence property such as class specificity.
RISE: Randomized Input Sampling for Explanation of Black-box Models
TLDR
The problem of explainable AI for deep neural networks that take images as input and output a class probability is addressed, and RISE, an approach that generates an importance map indicating how salient each pixel is for the model's prediction, is proposed; a minimal sketch of the masking idea appears after the reference list.
Grounding Visual Explanations
TLDR
A phrase-critic model refines generated candidate explanations, augmented with flipped phrases, to improve the textual explanation quality of fine-grained classification decisions on the CUB dataset by mentioning phrases that are grounded in the image.
Interpreting CNNs via Decision Trees
TLDR
The proposed method learns a decision tree that clarifies the specific reason for each prediction made by the CNN at the semantic level and organizes all potential decision modes in a coarse-to-fine manner to explain CNN predictions at different fine-grained levels.
Multimodal Explanations: Justifying Decisions and Pointing to the Evidence
TLDR
It is quantitatively shown that training with the textual explanations not only yields better textual justification models, but also better localizes the evidence that supports the decision, supporting the thesis that multimodal explanation models offer significant benefits over unimodal approaches.
Deep Neural Decision Forests
TLDR
A novel approach that unifies classification trees with the representation learning functionality known from deep convolutional networks by introducing a stochastic and differentiable decision tree model that can be trained end-to-end.
CXPlain: Causal Explanations for Model Interpretation under Uncertainty
TLDR
The task of providing explanations for the decisions of machine-learning models is framed as a causal learning task, and causal explanation (CXPlain) models are trained to estimate to what degree certain inputs cause outputs in another machine-learning model.
Discovering Interpretable Representations for Both Deep Generative and Discriminative Models
TLDR
This work provides an interpretable lens for an existing model and proposes two interpretability frameworks that rely on joint optimization for a representation that is both maximally informative about the side information and maximally compressive about the non-interpretable data factors.
Neural Module Networks
TLDR
A procedure for constructing and learning neural module networks, which compose collections of jointly trained neural "modules" into deep networks for question answering, using these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.).
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
TLDR
This work proposes a technique for producing 'visual explanations' for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of the input image.
...
...
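
As a rough sketch of the masking idea behind RISE (referenced above), the snippet below generates random coarse binary masks, queries a black-box model on the masked images, and averages the masks weighted by the class score. The `model_fn` interface is an assumption, not a specific library API, and the original method uses smoothly upsampled, randomly shifted masks with a different normalization; this is a simplified illustration.

```python
import numpy as np


def rise_saliency(image, model_fn, class_idx, n_masks=500, grid=7, p_keep=0.5, seed=0):
    """RISE-style saliency: average random masks weighted by the masked-image score.

    image    : (H, W, C) float array
    model_fn : callable mapping (N, H, W, C) -> (N, n_classes) class probabilities
               (assumed black-box interface for this sketch)
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    cell_h, cell_w = int(np.ceil(h / grid)), int(np.ceil(w / grid))
    saliency = np.zeros((h, w))
    total = 0.0
    for _ in range(n_masks):
        # Coarse random binary grid, upsampled to image size (nearest neighbour
        # here; the original method uses smooth, randomly shifted upsampling).
        coarse = (rng.random((grid, grid)) < p_keep).astype(float)
        mask = np.kron(coarse, np.ones((cell_h, cell_w)))[:h, :w]
        masked = image * mask[..., None]
        score = model_fn(masked[None])[0, class_idx]
        saliency += score * mask
        total += score
    return saliency / max(total, 1e-8)  # high values = pixels important for class_idx


if __name__ == "__main__":
    # Toy black-box "model": class-1 probability equals the mean brightness of
    # the top-left 8x8 corner, so that corner should come out as most salient.
    def toy_model(batch):
        s = batch[:, :8, :8].mean(axis=(1, 2, 3))
        return np.stack([1.0 - s, s], axis=1)

    img = np.zeros((32, 32, 3))
    img[:8, :8] = 1.0
    heat = rise_saliency(img, toy_model, class_idx=1, n_masks=200)
    print(heat.shape, heat[:8, :8].mean() > heat[16:, 16:].mean())
```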