Induction of Non-monotonic Logic Programs To Explain Statistical Learning Models

@article{Shakerin2019InductionON,
  title={Induction of Non-monotonic Logic Programs To Explain Statistical Learning Models},
  author={Farhad Shakerin},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.09017},
  pages={379-388}
}
  • Farhad Shakerin
  • Published in ICLP Technical Communications 2019
  • We present a fast and scalable algorithm to induce non-monotonic logic programs from statistical learning models. We reduce the problem of search for best clauses to instances of the High-Utility Itemset Mining (HUIM) problem. In the HUIM problem, feature values and their importance are treated as transactions and utilities respectively. We make use of TreeExplainer, a fast and scalable implementation of the Explainable AI tool SHAP, to extract locally important features and their weights from…
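The abstract's central reduction — treating each instance's (feature, value) pairs as transaction items and their SHAP-style importance weights as utilities — can be illustrated with a minimal sketch. This is not the paper's implementation: the transaction data, function names, and the naive brute-force enumeration below are all illustrative assumptions (real HUIM algorithms prune the search, and the weights would come from TreeExplainer rather than being hard-coded).

```python
# Hedged sketch: mapping per-instance SHAP-style feature weights into a
# High-Utility Itemset Mining (HUIM) transaction database, in the spirit
# of the reduction the abstract describes. Illustrative data only.

from itertools import combinations

# Each transaction: {(feature, value): local_importance_weight}.
# In the paper's pipeline these weights would be produced by
# TreeExplainer/SHAP for one instance; here they are made up.
transactions = [
    {("outlook", "sunny"): 0.7, ("humidity", "high"): 0.5},
    {("outlook", "sunny"): 0.6, ("wind", "weak"): 0.2},
    {("outlook", "rain"): 0.4, ("humidity", "high"): 0.3},
]

def itemset_utility(itemset, db):
    """Total utility of an itemset: the sum of its items' utilities
    over every transaction containing the whole itemset."""
    total = 0.0
    for t in db:
        if all(item in t for item in itemset):
            total += sum(t[item] for item in itemset)
    return total

def high_utility_itemsets(db, min_util):
    """Naive enumeration of all itemsets meeting the utility threshold.
    Real HUIM miners (e.g. those surveyed in the paper's references)
    avoid this exponential scan with upper-bound pruning."""
    items = sorted({i for t in db for i in t})
    result = {}
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            u = itemset_utility(cand, db)
            if u >= min_util:
                result[cand] = u
    return result
```

A high-utility itemset found this way corresponds, in the paper's framing, to a candidate clause body: feature conditions that are jointly important across many training instances.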
