Corpus ID: 222379942

Altruist: Argumentative Explanations through Local Interpretations of Predictive Models

@article{Mollas2020AltruistAE,
  title={Altruist: Argumentative Explanations through Local Interpretations of Predictive Models},
  author={Ioannis Mollas and Nick Bassiliades and Grigorios Tsoumakas},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.07650}
}
Interpretable machine learning is an emerging field providing solutions for gaining insight into the rationale of machine learning models. It has put itself on the map of machine learning by suggesting ways to tackle key ethical and societal issues. However, existing techniques of interpretable machine learning are far from being comprehensible and explainable to the end user. Another key issue in this field is the lack of evaluation and selection criteria, making it difficult for the end user to…

