Corpus ID: 235265726

Explanations for Monotonic Classifiers

@inproceedings{MarquesSilva2021ExplanationsFM,
  title={Explanations for Monotonic Classifiers},
  author={Joao Marques-Silva and Thomas S Gerspacher and Martin Cooper and Alexey Ignatiev and Nina Narodytska},
  booktitle={ICML},
  year={2021}
}
In many classification tasks there is a requirement of monotonicity. Concretely, if all else remains constant, increasing (resp. decreasing) the value of one or more features must not decrease (resp. increase) the value of the prediction. Despite comprehensive efforts on learning monotonic classifiers, dedicated approaches for explaining monotonic classifiers are scarce and classifier-specific. This paper describes novel algorithms for the computation of one formal explanation of a (black-box… 
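
To make the setting concrete, the sketch below (an illustration of the general idea, not the paper's own pseudocode) shows how monotonicity lets one compute a subset-minimal abductive explanation of a black-box classifier using only prediction queries: a feature can be dropped from the explanation if, with the remaining features fixed to their values in the instance, pushing all freed features to their lower and to their upper bounds leaves the prediction unchanged. The names axp_monotonic, predict, lb and ub are assumptions introduced for this example.

# Minimal sketch (illustrative, not the paper's exact algorithm): computing one
# abductive explanation (AXp) for a black-box classifier that is monotonically
# non-decreasing in every feature, using only prediction queries.

def axp_monotonic(predict, v, lb, ub):
    """Return a subset-minimal set of feature indices whose values in v
    suffice to keep the prediction predict(v) unchanged."""
    n = len(v)
    target = predict(v)
    fixed = set(range(n))                  # start with every feature fixed
    for i in range(n):                     # try to free features one at a time
        trial = fixed - {i}
        # By monotonicity, every completion of the free features yields a
        # prediction between these two extreme queries.
        lo = predict([v[j] if j in trial else lb[j] for j in range(n)])
        hi = predict([v[j] if j in trial else ub[j] for j in range(n)])
        if lo == target and hi == target:  # feature i never changes the outcome
            fixed = trial
    return fixed

# Toy usage: a monotonic threshold classifier over three features in [0, 10].
if __name__ == "__main__":
    clf = lambda x: int(x[0] + 2 * x[1] >= 10)   # ignores x[2]
    expl = axp_monotonic(clf, v=[6, 3, 9], lb=[0, 0, 0], ub=[10, 10, 10])
    print(sorted(expl))                          # [0, 1]: feature 2 is irrelevant

In this sketch each candidate feature costs only two prediction queries, which is what makes query-based explanation of monotonic black boxes practical.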

Citations

On the Tractability of Explaining Decisions of Classifiers
TLDR
This work investigates the computational complexity of providing a formally-correct and minimal explanation of a decision taken by a classifier and shows that tractable classes coincide for abductive and contrastive explanations in the constrained or unconstrained settings.
On Explaining Random Forests with SAT
TLDR
The paper proposes a propositional encoding for computing explanations of RFs, thus enabling finding PI-explanations with a SAT solver, and demonstrates that the proposed SAT-based approach significantly outperforms existing heuristic approaches.
$L_p$ Isotonic Regression Algorithms Using an $L_0$ Approach
TLDR
For weighted points in d-dimensional space with coordinate-wise ordering, d ≥ 3, $L_0$, $L_1$ and $L_2$ regressions can be found in only $o(n^{3/2} \log n \log U)$ time, improving on the previous best of $\Theta(n^2 \log n)$, and for unweighted points the time is $O(n^4)$.
Towards Axiomatic, Hierarchical, and Symbolic Explanation for Deep Models
TLDR
A hierarchical and symbolic And-Or graph is proposed to objectively explain the internal logic encoded by a well-trained deep model for inference, with the objectiveness of the explainer model defined in game-theoretic terms.

References

Showing 1-10 of 52 references
A Novel Framework for Constructing Partially Monotone Rule Ensembles
TLDR
This work presents a framework for monotone additive rule ensembles that is the first to cater for partial monotonicity (in some features), and demonstrates it by developing a partially monotone instance-based classifier built on L1 cones.
Learning and classification of monotonic ordinal concepts
TLDR
This paper presents efficient, incremental algorithms for learning the classification rules from examples and shows that by adopting a monotonicity assumption of the output with respect to the input, inconsistencies among examples can be easily detected and the number of possible classification rules substantially reduced.
Fast and Flexible Monotonic Functions with Ensembles of Lattices
TLDR
This work learns ensembles of monotonic calibrated interpolated look-up tables that produce similar or better accuracy, while providing guaranteed monotonicity consistent with prior knowledge, smaller model size and faster evaluation.
Efficient Explanations With Relevant Sets
TLDR
The paper shows that the computation of subset-minimal δ-relevant sets is in NP, and can be solved with a polynomial number of calls to an NP oracle.
A Symbolic Approach to Explaining Bayesian Network Classifiers
We propose an approach for explaining Bayesian network classifiers, which is based on compiling such classifiers into decision functions that have a tractable and symbolic form. We introduce two…
Enhanced Random Forest Algorithms for Partially Monotone Ordinal Classification
TLDR
Simulated and real datasets are used to perform the most comprehensive ordinal classification benchmarking in the monotone forest literature, and the proposed approaches are shown to reduce the bias induced by monotonisation and thereby improve accuracy.
Abduction-Based Explanations for Machine Learning Models
TLDR
A constraint-agnostic solution for computing explanations of any ML model is proposed, exploiting abductive reasoning; the only requirement is that the ML model can be represented as a set of constraints in some target constraint reasoning system whose decision problem can be answered with an oracle.
Counterexample-Guided Learning of Monotonic Neural Networks
TLDR
This work develops a counterexample-guided technique to provably enforce monotonicity constraints at prediction time, and proposes a technique to use monotonicity as an inductive bias for deep learning.
Nearest Neighbour Classification with Monotonicity Constraints
TLDR
A modified nearest neighbour algorithm for the construction of monotone classifiers from data is proposed that often outperforms standard kNN in problems where the monotonicity constraints are applicable.
RULEM: A novel heuristic rule learning approach for ordinal classification with monotonicity constraints
TLDR
RULEM preserves the predictive power of a rule induction technique while guaranteeing monotone classification, and two novel justifiability measures are introduced to quantify the extent to which a classification model is in line with domain knowledge expressed as monotonicity constraints.