Gaussian Process Regression with Local Explanation

@article{Yoshikawa2020GaussianPR,
  title={Gaussian Process Regression with Local Explanation},
  author={Yuya Yoshikawa and Tomoharu Iwata},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.01669}
}
Gaussian process regression (GPR) is a fundamental model used in machine learning. Owing to its accurate predictions with uncertainty estimates and its versatility in handling various data structures via kernels, GPR has been successfully used in various applications. However, in GPR, how the features of an input contribute to its prediction cannot be interpreted. Herein, we propose GPR with local explanation, which reveals the feature contributions to the prediction of each sample, while maintaining the…
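
As background, the following is a minimal sketch of plain GPR using scikit-learn, illustrating the behaviour the abstract refers to: a kernel-based model whose predictions come with uncertainty estimates. The data and kernel choices are toy assumptions for illustration; the paper's locally linear explanation layer is not implemented here.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

# RBF kernel for smooth functions plus a white-noise term for observation noise.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
mean, std = gpr.predict(X_test, return_std=True)  # predictive mean and uncertainty
print(np.c_[X_test, mean, std])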

Citations

Self-explaining variational posterior distributions for Gaussian Process models

TLDR
This work is inspired by the idea of self-explaining models and introduces a corresponding concept for variational Gaussian processes, which allows incorporating both general prior knowledge about a target function as a whole and prior knowledge about the contributions of individual features.

Training Deep Models to be Explained with Fewer Examples

TLDR
This work proposes a method for training deep models such that their predictions are faithfully explained by explanation models with a small number of examples; the method can be incorporated into any neural network-based prediction model.

Locally Sparse Neural Networks for Tabular Biomedical Data

TLDR
This work designs a locally sparse neural network in which the sparsity pattern is learned to identify the subset of most relevant features for each sample, reducing overfitting on low-sample-size data and yielding an interpretable model.

References

Showing 1-10 of 54 references

Gaussian Processes for Machine Learning

TLDR
The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics, and deals with the supervised learning problem for both regression and classification.

Financial Applications of Gaussian Processes and Bayesian Optimization

TLDR
This article explores two methods that have undergone rapid development in recent years, Gaussian processes and Bayesian optimization, focusing on Gaussian process regression, the core of Bayesian machine learning, and on the issue of hyperparameter selection.

Variable selection for Gaussian processes via sensitivity analysis of the posterior predictive distribution

TLDR
This work proposes two novel variable selection methods for Gaussian process models that utilize the predictions of a full model in the vicinity of the training points and thereby rank the variables based on their predictive relevance.
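
The ranking idea can be sketched with a simple finite-difference variant (an illustrative assumption, not necessarily the paper's exact procedure): perturb each feature around the training points and measure the average change in the GP's predictive mean.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def sensitivity_ranking(gpr, X, eps=1e-2):
    # Average absolute change in the predictive mean per unit perturbation
    # of each feature, evaluated at the training points.
    base = gpr.predict(X)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] += eps
        scores[j] = np.mean(np.abs(gpr.predict(X_pert) - base)) / eps
    return np.argsort(-scores), scores

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = 2.0 * X[:, 0] + np.sin(X[:, 1])  # features 2-4 are irrelevant
gpr = GaussianProcessRegressor(RBF(np.ones(5)), alpha=1e-3).fit(X, y)
order, scores = sensitivity_ranking(gpr, X)
print(order, scores.round(3))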

Gaussian Process Regression Networks

TLDR
A new regression framework, Gaussian process regression networks (GPRN), is introduced, which combines the structural properties of Bayesian neural networks with the nonparametric flexibility of Gaussian processes and derives both elliptical slice sampling and variational Bayes inference procedures for GPRN.
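
The prior this framework places on functions can be sketched generatively: every entry of a mixing matrix W(x) and of a latent vector f(x) is an independent GP, and the output is y(x) = W(x) f(x) + noise. Below is a NumPy sketch on a 1-D grid (the GPRN inference procedures themselves are not shown):

import numpy as np

def rbf_cov(x, ls=0.5):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2) + 1e-6 * np.eye(len(x))  # jitter for Cholesky

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
L = np.linalg.cholesky(rbf_cov(x))

def gp_sample():
    return L @ rng.standard_normal(len(x))

q, p = 2, 3  # number of latent functions, number of outputs
f = np.stack([gp_sample() for _ in range(q)])                      # (q, n)
W = np.stack([[gp_sample() for _ in range(q)] for _ in range(p)])  # (p, q, n)
y = np.einsum("pqn,qn->pn", W, f) + 0.05 * rng.standard_normal((p, len(x)))
print(y.shape)  # (3, 100): input-dependent mixing of the latent GPs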

A Unified Approach to Interpreting Model Predictions

TLDR
A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
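
Because kernel SHAP is model-agnostic, it can explain a GPR model directly from its prediction function. A sketch using the shap package (toy data and settings assumed for illustration):

import numpy as np
import shap  # pip install shap
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
y = X[:, 0] - 2.0 * X[:, 1]
model = GaussianProcessRegressor(alpha=1e-3).fit(X, y)

# KernelExplainer needs only a prediction function and a background dataset
# that serves as the baseline for "missing" features.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])  # additive per-feature contributions
print(np.round(shap_values, 3))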

Practical Bayesian Optimization of Machine Learning Algorithms

TLDR
This work describes new algorithms that account for the variable cost of learning-algorithm experiments and that can leverage multiple cores for parallel experimentation, and shows that the proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
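
The GP-based optimization loop this line of work popularized is available in several libraries; here is a sketch with scikit-optimize (a stand-in for illustration, not the authors' implementation), where a toy objective plays the role of an expensive training run:

import numpy as np
from skopt import gp_minimize  # pip install scikit-optimize

def objective(params):
    # Stand-in for a validation loss as a function of hyperparameters.
    lr, depth = params
    return (np.log10(lr) + 2.0) ** 2 + 0.1 * (depth - 4) ** 2

res = gp_minimize(
    objective,
    dimensions=[(1e-5, 1e-1, "log-uniform"), (1, 10)],  # learning rate, depth
    n_calls=25,
    random_state=0,
)
print(res.x, res.fun)  # best hyperparameters found and their objective value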

Localized Lasso for High-Dimensional Regression

TLDR
The localized Lasso is introduced, which is suited for learning models that are both interpretable and highly predictive in problems with high dimensionality and small sample size; a simple yet efficient iterative least-squares-based optimization procedure is also proposed.
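
The "local + sparse" idea can be imitated with a deliberately simple stand-in: fit an ordinary Lasso on each sample's nearest neighbours to obtain a per-sample sparse weight vector. This toy sketch omits what makes the localized Lasso distinctive, namely the network regularizer that ties neighbouring weight vectors together:

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = np.where(X[:, 0] > 0, 3.0 * X[:, 1], -3.0 * X[:, 2])  # relevant feature varies locally

nn = NearestNeighbors(n_neighbors=30).fit(X)
_, idx = nn.kneighbors(X[:3])  # explain the first three samples
for i, neigh in enumerate(idx):
    w = Lasso(alpha=0.1).fit(X[neigh], y[neigh]).coef_
    print(f"sample {i}: nonzero weights at features {np.flatnonzero(w)}")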

Granger-Causal Attentive Mixtures of Experts: Learning Important Features with Neural Networks

TLDR
The experiments show that the feature importance estimates provided by AMEs compare favourably to those provided by state-of-the-art methods, that AMEs are significantly faster at estimating feature importance than existing methods, and that the associations discovered are consistent with those reported by domain experts.

A New View of Automatic Relevance Determination

TLDR
This paper furnishes an alternative means of expressing the ARD cost function using auxiliary functions that naturally addresses both of these issues, and suggests alternative cost functions and update procedures for selecting features and promoting sparse solutions in a variety of general situations.
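
ARD is most often met as one length-scale per input dimension in a GP kernel: maximizing the marginal likelihood pushes the length-scales of irrelevant features towards large values, effectively pruning them. A sketch of that common instantiation with scikit-learn (not the paper's reformulated cost function):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.standard_normal((150, 4))
y = 3.0 * X[:, 0] + np.sin(2.0 * X[:, 1])  # features 2 and 3 are irrelevant

# Anisotropic RBF: one length-scale per feature (the ARD parameterization).
kernel = RBF(length_scale=np.ones(4), length_scale_bounds=(1e-2, 1e3))
gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-3, normalize_y=True).fit(X, y)

ls = gpr.kernel_.length_scale
print("relevance (1/length-scale):", np.round(1.0 / ls, 3))
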
...