Impact of Feedback Type on Explanatory Interactive Learning
@article{Hagos2022ImpactOF,
  title   = {Impact of Feedback Type on Explanatory Interactive Learning},
  author  = {Misgina Tsighe Hagos and Kathleen Curran and Brian Mac Namee},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2209.12476}
}
Explanatory Interactive Learning (XIL) collects user feedback on visual model explanations to implement a Human-in-the-Loop (HITL) interactive learning scenario. Different feedback types have different impacts on user experience and on the cost of collecting feedback, since they involve different levels of image annotation. Although XIL has been used to improve classification performance in multiple domains, the impact of different user feedback types on…
One Citation
Identifying Spurious Correlations and Correcting them with an Explanation-based Learning
- Computer Science, ArXiv
- 2022
This work presents a simple method to identify spurious correlations that have been learned by a model trained for image classification problems, and removes the learned spurious correlations with an explanation-based learning approach.
References
Machine Guides, Human Supervises: Interactive Learning with Global Explanations
- Computer Science, ArXiv
- 2020
This work examines explanatory guided learning (XGL), a novel interactive learning strategy in which a machine guides a human supervisor toward selecting informative examples for a classifier, and shows theoretically that global explanations are a viable approach for guiding supervisors.
Explanatory Interactive Machine Learning
- Computer Science, AIES
- 2019
This work proposes the novel framework of explanatory interactive learning where, in each step, the learner explains its query to the user, and the user interacts by both answering the query and correcting the explanation.
Making deep neural networks right for the right scientific reasons by interacting with their explanations
- Computer Science, Nat. Mach. Intell.
- 2020
The novel learning setting of explanatory interactive learning is introduced, its benefits are illustrated on a plant phenotyping research task, and it is demonstrated that explanatory interactive learning can help to avoid Clever Hans moments in machine learning.
Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
- Computer Science, IJCAI
- 2017
This work introduces a method for efficiently explaining and regularizing differentiable models by examining and selectively penalizing their input gradients, which provide a normal to the decision boundary.
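As a concrete illustration of the input-gradient penalty this reference describes, here is a minimal PyTorch sketch. It is a sketch under assumed names, not the paper's released code: `model`, `mask` (a binary annotation marking regions the user says are irrelevant), and the weight `lam` are all illustrative.

```python
import torch
import torch.nn.functional as F

def rrr_loss(model, x, y, mask, lam=10.0):
    """Right-for-the-Right-Reasons objective: cross-entropy plus an L2
    penalty on input gradients inside user-annotated irrelevant regions."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Input gradients of the summed log-probabilities, kept in the graph
    # (create_graph=True) so the penalty itself can be backpropagated.
    log_probs = F.log_softmax(logits, dim=1).sum()
    grads, = torch.autograd.grad(log_probs, x, create_graph=True)
    # mask == 1 marks pixels annotated as irrelevant; penalize gradient
    # mass the model places there.
    penalty = (mask * grads).pow(2).sum()
    return ce + lam * penalty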
Right for Better Reasons: Training Differentiable Models by Constraining their Influence Functions
- Computer Science, AAAI
- 2021
This paper demonstrates how to make use of influence functions, a well-known robust statistic, in the constraints to correct the model's behaviour more effectively, and showcases the effectiveness of RBR in correcting "Clever Hans"-like behaviour in real, high-dimensional domains.
Interactive machine learning
- Computer Science, IUI '03
- 2003
An interactive machine-learning (IML) model that allows users to train, classify/view, and correct classifications is proposed, and the Crayons tool that embodies these notions of interactive machine learning is presented.
Improving Neural Model Performance through Natural Language Feedback on Their Explanations
- Computer Science, ArXiv
- 2021
This work introduces MERCURIE, an interactive system that refines its explanations for a given reasoning task by getting human feedback in natural language, and generates graphs that have 40% fewer inconsistencies than the off-the-shelf system.
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
- Computer Science, 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
This work proposes a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of the input image.
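For reference, a minimal sketch of the Grad-CAM computation in PyTorch. `features` (activations of the last convolutional layer, in practice captured with a forward hook) and `score` (the scalar logit of the target class) are placeholder names assumed for this sketch, not API from the paper.

```python
import torch
import torch.nn.functional as F

def grad_cam(features, score):
    """Compute a Grad-CAM heatmap.

    features: (1, K, H, W) conv activations; must be part of the
              autograd graph that produced `score`.
    score:    scalar class logit for the target class.
    """
    # Gradients of the class score w.r.t. the feature maps
    grads, = torch.autograd.grad(score, features)
    # Channel weights: global-average-pool the gradients
    weights = grads.mean(dim=(2, 3), keepdim=True)   # (1, K, 1, 1)
    # Weighted combination of feature maps, then ReLU
    cam = F.relu((weights * features).sum(dim=1))    # (1, H, W)
    # Normalize to [0, 1] for visualization
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam
```

Upsampling `cam` to the input resolution and overlaying it on the image gives the familiar Grad-CAM saliency map used as the visual explanation in XIL-style feedback collection.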
Interactive and interpretable machine learning models for human machine collaboration
- Computer Science
- 2015
This thesis builds human-in-the-loop machine learning models and systems that compute and communicate machine learning results in ways that are compatible with the human decision-making process, and that can readily incorporate human experts’ domain knowledge.