# Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure

@article{Novello2022MakingSO, title={Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure}, author={Paul Novello and Thomas Fel and David Vigouroux}, journal={ArXiv}, year={2022}, volume={abs/2206.06219} }

This paper presents a new efficient black-box attribution method based on the Hilbert-Schmidt Independence Criterion (HSIC), a dependence measure based on Reproducing Kernel Hilbert Spaces (RKHS). HSIC measures the dependence between regions of an input image and the output of a model based on kernel embeddings of distributions. It thus provides explanations enriched by RKHS representation capabilities. HSIC can be estimated very efficiently, significantly reducing the computational cost compared to…
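A minimal sketch of the idea, not the authors' exact implementation: sample random binary masks over image patches, query the black box on masked inputs, and score each patch by the estimated HSIC between its on/off indicator and the model output. The `black_box`-style score below is a hypothetical stand-in (a linear function of two patches) purely for illustration; the HSIC estimator itself is the standard biased one of Gretton et al. (2005).

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """RBF Gram matrix for a 1-D sample of shape (n,)."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC estimator: tr(KHLH) / (n-1)^2."""
    n = len(x)
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
n_masks, n_patches = 500, 8
# Random binary masks: which patches are kept in each perturbed image.
masks = rng.integers(0, 2, size=(n_masks, n_patches)).astype(float)
# Hypothetical black-box score: depends only on patches 0 and 1.
scores = masks[:, 0] + 0.5 * masks[:, 1] + 0.1 * rng.standard_normal(n_masks)

# Attribution: HSIC between each patch's presence and the model score.
importance = np.array([hsic(masks[:, j], scores) for j in range(n_patches)])
```

Patches the output truly depends on receive the largest HSIC scores, and each estimate reuses the same set of model queries, which is where the efficiency claim comes from.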

## 2 Citations

### Diffusion Models as Artists: Are we Closing the Gap between Humans and Machines?

- Biology
- 2023

Overall, this study suggests that diffusion models have significantly helped improve the quality of machine-generated drawings; however, a gap between humans and machines remains – in part explainable by discrepancies in visual strategies.

### CRAFT: Concept Recursive Activation FacTorization for Explainability

- Computer Science, ArXiv
- 2022

This work introduces three new ingredients to the automatic concept extraction literature: a recursive strategy to detect and decompose concepts across layers, a novel method for a more faithful estimation of concept importance using Sobol indices, and the use of implicit differentiation to unlock Concept Attribution Maps.

## References

Showing 1-10 of 60 references

### Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis

- Computer Science, NeurIPS
- 2021

A novel attribution method which is grounded in Sensitivity Analysis and uses Sobol indices, and shows that the proposed method leads to favorable scores on standard benchmarks for vision (and language models) while drastically reducing the computing time compared to other black-box methods.
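The Sobol indices this method builds on can be estimated with the classic pick-freeze scheme: a first-order index S_i = Var(E[Y|X_i]) / Var(Y) compares model outputs on two sample matrices that share only coordinate i. A toy sketch on a hypothetical linear model (not the paper's image-masking setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # Hypothetical black box: feature 0 dominates, feature 2 is negligible.
    return 2 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2]

n, d = 10000, 3
A = rng.random((n, d))
B = rng.random((n, d))
yA = model(A)

S = np.empty(d)
for i in range(d):
    AB = B.copy()
    AB[:, i] = A[:, i]          # "freeze" coordinate i, resample the rest
    yAB = model(AB)
    # Pick-freeze estimate of the first-order Sobol index of feature i.
    S[i] = (np.mean(yA * yAB) - np.mean(yA) * np.mean(yAB)) / np.var(yA)
```

Each index is a fraction of output variance explained by one input alone, so the estimates for this model concentrate almost all variance on feature 0.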

### Representativity and Consistency Measures for Deep Neural Network Explanations

- Computer Science, ArXiv
- 2020

A new procedure to compute two new measures, Relative Consistency (ReCo) and Mean Generalization (MeGe), for the consistency and generalization of explanations is introduced, revealing an interesting link between gradient-based explanation methods and 1-Lipschitz networks.

### RISE: Randomized Input Sampling for Explanation of Black-box Models

- Computer Science, BMVC
- 2018

Addresses Explainable AI for deep neural networks that take images as input and output a class probability, proposing RISE, an approach that generates an importance map indicating how salient each pixel is to the model's prediction.
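RISE's core loop is simple: sample random binary masks, query the model on masked images, and average the masks weighted by the resulting scores. A minimal sketch with a hypothetical scorer that only looks at the top-left corner (the real method upsamples low-resolution masks and uses a CNN classifier):

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(img):
    # Hypothetical model: the score depends only on the top-left 2x2 corner.
    return img[:2, :2].mean()

H, W, n_masks = 8, 8, 2000
img = np.linspace(1.0, 0.1, H * W).reshape(H, W)  # toy "image"

acc = np.zeros((H, W))
mask_sum = np.zeros((H, W))
for _ in range(n_masks):
    mask = (rng.random((H, W)) < 0.5).astype(float)  # random binary mask
    score = black_box(img * mask)                    # query the black box
    acc += score * mask
    mask_sum += mask

# Monte Carlo estimate of E[score | pixel kept]: the importance map.
saliency = acc / np.maximum(mask_sum, 1)
```

Pixels whose presence raises the expected score accumulate higher saliency, so the map lights up on the corner the toy model actually uses.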

### Understanding Black-box Predictions via Influence Functions

- Computer Science, ICML
- 2017

This paper uses influence functions — a classic technique from robust statistics — to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction.
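For least squares the influence machinery is available in closed form, which makes for a compact sketch: the self-influence g_i^T H^{-1} g_i of each training point (how strongly it shapes its own loss) flags anomalous examples. This toy setup with a deliberately corrupted label is an illustration, not the paper's general deep-network recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares regression with one corrupted training point.
n, d = 50, 3
X = rng.standard_normal((n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)
y[0] += 5.0                                   # mislabel point 0

theta = np.linalg.solve(X.T @ X, X.T @ y)     # fitted parameters
H = X.T @ X / n                               # Hessian of the mean squared loss

# Per-point loss gradients (x_i^T theta - y_i) * x_i, then the
# self-influence g_i^T H^{-1} g_i of every training point.
resid = X @ theta - y
G = resid[:, None] * X
self_infl = np.einsum('ij,ij->i', G @ np.linalg.inv(H), G)
```

The corrupted point carries a far larger gradient, so its self-influence dwarfs the rest, which is exactly how influence functions surface harmful training data.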

### “Why Should I Trust You?”: Explaining the Predictions of Any Classifier

- Computer Science, NAACL
- 2016

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
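The LIME recipe in miniature: perturb the instance by switching features on and off, weight each perturbation by its proximity to the original, and fit a weighted linear surrogate whose coefficients serve as the explanation. The `black_box` below is a hypothetical logistic scorer, and this plain weighted least-squares fit is a simplification of the library's regularized variant:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Hypothetical classifier probability: driven by features 0 and 2.
    return 1 / (1 + np.exp(-(3 * X[:, 0] - X[:, 2])))

x = np.ones(5)                       # instance to explain
Z = rng.integers(0, 2, (300, 5))     # binary perturbations (feature on/off)
X_pert = Z * x                       # perturbed samples in the original space
y = black_box(X_pert)

# Proximity weights: exponential kernel on the Hamming distance to x.
dist = (Z == 0).sum(axis=1)
w = np.exp(-dist / 2)

# Weighted least squares -> local linear surrogate; coefs explain x.
A = np.hstack([Z, np.ones((300, 1))])            # add intercept column
W = np.diag(w)
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
```

The surrogate recovers a large positive weight on feature 0 and a negative weight on feature 2, mirroring the black box's local behaviour without ever opening it.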

### On Locality of Local Explanation Models

- Economics, NeurIPS
- 2021

The formulation of neighbourhood reference distributions that improve the local interpretability of Shapley values is considered, and it is found that the Nadaraya-Watson estimator, a well-studied kernel regressor, can be expressed as a self-normalised importance sampling estimator.
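The self-normalised structure mentioned here is visible directly in the estimator: kernel weights act as unnormalised importance weights and the denominator normalises them to sum to one. A minimal sketch on toy data (the bandwidth and test point are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = sin(x) + noise on [0, 2*pi].
x = rng.uniform(0, 2 * np.pi, 400)
y = np.sin(x) + 0.1 * rng.standard_normal(400)

def nadaraya_watson(x0, x, y, h=0.3):
    """Kernel regressor as a self-normalised importance-sampling average."""
    w = np.exp(-(x - x0) ** 2 / (2 * h ** 2))  # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)           # weights normalise to one

est = nadaraya_watson(np.pi / 2, x, y)  # true regression value: sin(pi/2) = 1
```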

### Kernel-based ANOVA decomposition and Shapley effects -- Application to global sensitivity analysis

- Computer Science
- 2021

Two moment-independent sensitivity indices based on kernel embeddings of probability distributions are introduced, and it is shown that the RKHS framework makes it possible to exhibit a kernel-based ANOVA decomposition, the first time such a desirable property is proved for sensitivity indices apart from Sobol’ ones.

### Global Sensitivity Analysis with Dependence Measures

- Mathematics, ArXiv
- 2013

This paper establishes that comparing the output distribution with its conditional counterpart, when one of the input variables is fixed, yields previously proposed indices when performed with Csiszár f-divergences, as well as sensitivity indices that are well-known dependence measures between random variables.

### Measuring Statistical Dependence with Hilbert-Schmidt Norms

- Computer Science, Mathematics, ALT
- 2005

We propose an independence criterion based on the eigen-spectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm…
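The biased empirical estimator introduced in this paper, for $n$ paired samples with kernel Gram matrices $K$ and $L$, can be written as:

```latex
\widehat{\mathrm{HSIC}}(X, Y)
  = \frac{1}{(n-1)^{2}} \operatorname{tr}(K H L H),
\qquad
H = I_n - \frac{1}{n}\mathbf{1}\mathbf{1}^{\top}
```

Here $H$ is the centering matrix; the estimator converges at rate $O(1/\sqrt{n})$ and vanishes in expectation exactly when $X$ and $Y$ are independent (for characteristic kernels).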

### Hilbert Space Embeddings and Metrics on Probability Measures

- Computer Science, Mathematics, J. Mach. Learn. Res.
- 2010

It is shown that the distance between distributions under γk results from an interplay between the properties of the kernel and the distributions, by demonstrating that distributions are close in the embedding space when their differences occur at higher frequencies.