Abstract Interpretation-Based Feature Importance for SVMs

@article{Pal2022AbstractIF,
  title={Abstract Interpretation-Based Feature Importance for SVMs},
  author={Abhinanda Pal and Francesco Ranzato and Caterina Urban and Marco Zanella},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.12456}
}
We propose a symbolic representation for support vector machines (SVMs) by means of abstract interpretation, a well-known and successful technique for designing and implementing static program analyses. We leverage this abstraction in two ways: (1) to enhance the interpretability of SVMs by deriving a novel feature importance measure, called abstract feature importance (AFI), that does not depend in any way on a given dataset or the accuracy of the SVM and is very fast to compute, and (2) for…
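
The abstract is truncated before the second use case, so the paper's exact AFI definition is not reproduced here. The sketch below only illustrates the general idea under simplifying assumptions: for a linear-kernel SVM, per-feature input intervals can be propagated through the decision function with interval arithmetic, and each feature scored by how much it widens the output interval. The binary task, the perturbation radius eps, and the scoring rule are illustrative choices, not the authors' method.

```python
# A minimal sketch, NOT the paper's AFI: interval propagation through a
# linear SVM's decision function f(x) = w.x + b, scoring each feature by
# the output-interval width it induces. Note this needs no test dataset
# and no accuracy measurement, matching the spirit of the abstract.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf = SVC(kernel="linear").fit(X, y == 0)  # illustrative binary task

w = clf.coef_[0]

def output_width(lo, hi):
    """Width of f([lo, hi]) under interval arithmetic for a linear form:
    each term w_i * [lo_i, hi_i] contributes |w_i| * (hi_i - lo_i)."""
    return np.sum(np.abs(w) * (hi - lo))

eps = 0.5                      # illustrative perturbation radius
center = X.mean(axis=0)
scores = []
for i in range(X.shape[1]):
    lo, hi = center.copy(), center.copy()
    lo[i] -= eps               # widen only feature i's interval
    hi[i] += eps
    scores.append(output_width(lo, hi))  # = |w_i| * 2*eps for linear kernels

importance = np.array(scores) / np.sum(scores)
print(dict(zip(load_iris().feature_names, importance.round(3))))
```

For a linear kernel this reduces to ranking features by |w_i|; the point of an abstract-interpretation formulation is that the same interval (or more precise) propagation extends to non-linear kernels, where no such closed form exists.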


Robustness Verification of Support Vector Machines

The experimental results of the prototype SVM robustness verifier are encouraging: the automated verification is fast, scalable, and proves robustness for a significantly high percentage of the MNIST test set, in particular when compared with analogous provable-robustness results for neural networks.
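
This summary describes the verifier's results rather than its method. As a rough illustration of what "provable robustness" means here, the sketch below checks, for a linear SVM only, that the decision value cannot change sign anywhere in an L-infinity ball around a test point; the weights, point, and radius are made up for the example, and the paper's verifier goes well beyond this linear case.

```python
# A minimal sketch: a point x is provably robust at radius eps if
# f(x') = w.x' + b keeps the same sign for every x' in [x - eps, x + eps].
# For a linear form, the worst-case change of w.x over that box is
# eps * ||w||_1, so soundness reduces to one comparison.
import numpy as np

def provably_robust_linear(w, b, x, eps):
    center = w @ x + b
    slack = eps * np.sum(np.abs(w))   # max |w.x' - w.x| over the box
    return abs(center) > slack        # the sign cannot flip inside the box

w = np.array([0.8, -1.2, 0.3])        # illustrative SVM weights
x = np.array([1.0, 0.5, -0.2])        # illustrative test point
print(provably_robust_linear(w, b=0.1, x=x, eps=0.05))
```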

Visualizing the Feature Importance for Black Box Models

Local feature importance is introduced as a local version of a recent model-agnostic global feature importance method, and two visual tools are proposed: partial importance (PI) and individual conditional importance (ICI) plots, which visualize how changes in a feature affect model performance on average as well as for individual observations.
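
A minimal sketch of how such curves can be computed (the model, data, grid, and loss are illustrative stand-ins, not the paper's setup): each observation gets its own loss curve as the studied feature is swept over a grid, and the PI curve is the pointwise average of these ICI curves.

```python
# ICI curves: replace feature j by each grid value, re-predict, and record
# the per-observation squared-error loss. PI is the average over observations.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

j = 2                                         # feature under study
grid = np.linspace(X[:, j].min(), X[:, j].max(), 20)

ici = np.empty((len(X), len(grid)))           # one loss curve per observation
for g, v in enumerate(grid):
    Xv = X.copy()
    Xv[:, j] = v                              # intervene on feature j
    ici[:, g] = (y - model.predict(Xv)) ** 2  # per-observation loss

pi = ici.mean(axis=0)                         # PI curve = average ICI curve
print(pi.round(1))
```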

A Review of Formal Methods applied to Machine Learning

A comprehensive and detailed review of the formal methods developed so far for machine learning is given, highlighting their strengths and limitations and offering perspectives on future research directions toward the formal verification of machine learning systems.

Unrestricted permutation forces extrapolation: variable importance requires at least one more model, or there is no free variable importance

This paper reviews permute-and-predict methods for interpreting black-box functions and advocates against their use: breaking the dependencies between features in hold-out data places undue emphasis on sparse regions of the feature space, since it forces the original model to extrapolate to regions where there is little to no data.
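
A small sketch of this failure mode under toy assumptions: with two nearly duplicated features, permuting one produces combinations far from the observed joint distribution, so the model is scored exactly where it must extrapolate.

```python
# Permutation importance on strongly correlated features. The observed data
# sit on the diagonal x1 ~ x2; permuted rows like (x1 = 2, x2 = -2) lie far
# outside that region, so the importance score reflects extrapolation.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
x2 = x1 + 0.01 * rng.normal(size=1000)   # x2 nearly duplicates x1
X = np.column_stack([x1, x2])
y = x1 + x2

model = LinearRegression().fit(X, y)

def permutation_importance(model, X, y, j, rng):
    """Increase in MSE after permuting column j (breaks the dependence)."""
    base = np.mean((y - model.predict(X)) ** 2)
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    return np.mean((y - model.predict(Xp)) ** 2) - base

print([permutation_importance(model, X, y, j, rng) for j in (0, 1)])
```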

A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI

A review that categorizes the notions of interpretability suggested by different research works is provided, in the hope that interpretability insights will emerge from greater consideration of medical practice, and that initiatives pushing forward data-based, mathematically grounded, and technically grounded medical education will be encouraged.

Perfectly parallel fairness certification of neural networks

This paper proposes a perfectly parallel static analysis for certifying the fairness of feed-forward neural networks used for the classification of tabular data; the analysis is designed to be sound, in practice also exact, and configurable in terms of scalability and precision, thereby enabling pay-as-you-go certification.

A Unified Approach to Interpreting Model Predictions

A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), is presented; it unifies six existing methods and introduces new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
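
As a reference point for what SHAP approximates efficiently, the sketch below computes exact Shapley values for a toy three-feature model by enumerating all coalitions. Replacing "missing" features with a fixed background value is one common value-function choice; the model, background, and input here are all illustrative.

```python
# Brute-force Shapley values: average each feature's marginal contribution
# over all coalitions, weighted by |S|!(n-|S|-1)!/n! = 1/(n * C(n-1, |S|)).
from itertools import combinations
from math import comb
import numpy as np

def model(X):                                 # toy model, nonlinear in x0, x1
    return X[:, 0] * X[:, 1] + 2.0 * X[:, 2]

def value(x, background, subset):
    """Model output with features outside `subset` set to the background."""
    z = background.copy()
    z[list(subset)] = x[list(subset)]
    return model(z[None, :])[0]

def shapley(x, background, n_features):
    phi = np.zeros(n_features)
    for i in range(n_features):
        rest = [j for j in range(n_features) if j != i]
        for k in range(len(rest) + 1):
            for S in combinations(rest, k):
                weight = 1.0 / (n_features * comb(n_features - 1, k))
                phi[i] += weight * (value(x, background, S + (i,))
                                    - value(x, background, S))
    return phi

x = np.array([1.0, 2.0, 3.0])
background = np.zeros(3)
phi = shapley(x, background, 3)
# Efficiency axiom: the attributions sum to f(x) - f(background).
print(phi, phi.sum(), model(x[None, :])[0] - model(background[None, :])[0])
```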

Fairness-Aware Training of Decision Trees by Abstract Interpretation

The experimental results show that the fairness-aware learning method is able to train tree models exhibiting a high degree of individual fairness, compared with natural state-of-the-art CART trees and random forests.

An Introduction to Support Vector Machines and Other Kernel-based Learning Methods

This is the first comprehensive introduction to Support Vector Machines (SVMs), a new-generation learning system based on recent advances in statistical learning theory; it guides practitioners to up-to-date literature, new applications, and online software.

Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints

A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations.
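
A minimal sketch of this idea in the classical interval domain (the code and names are illustrative): concrete numbers are abstracted to intervals, operations are replaced by sound over-approximations, and the abstract result is guaranteed to contain every concrete result.

```python
# Interval abstract domain: [lo, hi] over-approximates any set of reals it
# contains, and each operation is replaced by its sound interval version.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def join(self, other):        # least upper bound in the interval lattice
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

# Abstractly execute y = x * x + x for any x in [-1, 2].
x = Interval(-1.0, 2.0)
y = x * x + x
print(y)  # Interval(lo=-3.0, hi=6.0)
# The true range of x*x + x on [-1, 2] is [-0.25, 6]; the abstract result
# [-3, 6] over-approximates it, which is exactly the soundness guarantee
# (interval multiplication ignores that both factors are the same x).
```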