Corpus ID: 235313882

NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning

@article{Chang2021NODEGAMNG,
  title={NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning},
  author={Chun-Hao Chang and Rich Caruana and Anna Goldenberg},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.01613}
}
Deployment of machine learning models in real high-risk settings (e.g. healthcare) often depends not only on the model's accuracy but also on its fairness, robustness, and interpretability. Generalized Additive Models (GAMs) have a long history of use in these high-risk domains, but lack desirable features of deep learning such as differentiability and scalability. In this work, we propose a neural GAM (NODE-GAM) and a neural GA2M (NODE-GA2M) that scale well to large datasets, while remaining…
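For readers new to the model class, the standard GAM form, and the GA2M form that adds pairwise interactions, can be written as follows (general textbook notation, not the paper's exact formulation):

g(\mathbb{E}[y \mid x]) = \beta_0 + \sum_{i} f_i(x_i)  \qquad \text{(GAM)}
g(\mathbb{E}[y \mid x]) = \beta_0 + \sum_{i} f_i(x_i) + \sum_{i<j} f_{ij}(x_i, x_j)  \qquad \text{(GA}^2\text{M)}

Each shape function f_i (and each pairwise interaction f_ij) can be plotted directly, which is what makes the model class interpretable.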

Data-Efficient and Interpretable Tabular Anomaly Detection

A novel AD framework is proposed that adapts a white-box model class, Generalized Additive Models, to detect anomalies using a partial identification objective, which naturally handles noisy or heterogeneous features and can incorporate a small amount of labeled data to further boost anomaly detection performance in semi-supervised settings.

Attention-like feature explanation for tabular data

A modification of AFEX that incorporates an additional surrogate model approximating the black-box model is proposed; it is trained end-to-end on the whole dataset only once, so no neural networks need to be retrained at the explanation stage.

Interpretability with full complexity by constraining feature information

A framework is developed for extracting insight from a spectrum of approximate models that leverage variable amounts of information about the inputs, and it is demonstrated on a range of tabular datasets.

Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions

Higher-order neural additive models (HONAM), a novel interpretable machine learning method with a feature interaction mechanism for high interpretability, are proposed, along with a novel hidden unit that effectively learns sharp-shaped functions.

Sparse Interaction Additive Networks via Feature Interaction Detection and Sparse Selection

The proposed Sparse Interaction Additive Networks (SIAN) construct a bridge from simple, interpretable additive models to fully connected neural networks and achieve competitive performance against state-of-the-art methods across multiple large-scale tabular datasets.

pureGAM: Learning an Inherently Pure Additive Model

Evaluations show that pureGAM outperforms other GAMs and has very competitive performance even compared with opaque models, and that its interpretability, measured in terms of pureness, remarkably outperforms competitors.

Neural Basis Models for Interpretability

On a variety of tabular and image datasets, it is demonstrated that for interpretable machine learning, NBMs are the state-of-the-art in accuracy, model size, and throughput, and can easily model all higher-order feature interactions.

A Special Multivariate Polynomial Model for Diabetes Prediction and Analysis

This model is able to show the relationship between each medical factor and diabetes with polynomial curves, and the product of these curves and a specific constant constitutes the model's decision-making process.

A Concept and Argumentation based Interpretable Model in High Risk Domains

Experimental results on both an open-source benchmark dataset and a real-world business dataset show that CAM is transparent and interpretable, that the knowledge inside CAM is consistent with human understanding, and that its interpretable approach can reach competitive results compared with other state-of-the-art models.

TalkToModel: Understanding Machine Learning Models With Open Ended Dialogues

TalkToModel, an open-ended dialogue system for understanding machine learning models, is introduced; it understands user inputs about novel datasets and models with high accuracy and is presented as a new category of model-understanding tools for practitioners.

References


Neural Additive Models: Interpretable Machine Learning with Neural Nets

Neural Additive Models (NAMs) are proposed which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models and are more accurate than widely used intelligible models such as logistic regression and shallow decision trees.
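A minimal sketch of the additive structure NAMs use, assuming one small subnetwork per feature with summed contributions (the class name and layer sizes are illustrative, not the paper's exact architecture):

import torch
import torch.nn as nn

class NeuralAdditiveModel(nn.Module):
    # One small subnetwork per feature; the prediction is the sum of their outputs plus a bias.
    def __init__(self, num_features, hidden=32):
        super().__init__()
        self.feature_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_features)
        ])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, num_features)
        contributions = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        return torch.cat(contributions, dim=1).sum(dim=1) + self.bias

Because each subnetwork sees only its own feature, plotting a subnetwork's output over the range of its input recovers that feature's learned shape function.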

How Interpretable and Trustworthy are GAMs?

It is found that GAMs with high feature sparsity can miss patterns in the data and be unfair to rare subpopulations, and tree-based GAMs represent the best balance of sparsity, fidelity and accuracy and thus appear to be the most trustworthy GAM models.

Learning Global Additive Explanations for Neural Nets Using Model Distillation

This work proposes to leverage model distillation to learn global additive explanations that describe the relationship between input features and model predictions, taking the form of feature shapes, which are more expressive than feature attributions.
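A hedged sketch of the general idea (not the paper's exact training recipe): fit a black-box teacher, then fit an additive student to the teacher's predictions so that each feature receives an inspectable shape function. The spline-plus-linear student below is one simple way to obtain an additive approximation with scikit-learn.

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=2000, n_features=8, random_state=0)
teacher = RandomForestRegressor(random_state=0).fit(X, y)   # the black box
soft_targets = teacher.predict(X)                           # teacher predictions to distill

# SplineTransformer expands each feature separately and Ridge combines the
# expansions linearly, so the student's prediction is a sum of per-feature
# shape functions plus an intercept.
student = make_pipeline(SplineTransformer(degree=3, n_knots=10), Ridge())
student.fit(X, soft_targets)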

An Evaluation of the Doctor-Interpretability of Generalized Additive Models with Interactions

Doctors can correctly interpret risk functions of generalized additive models with interactions and feel confident doing so, but the evaluation also identified several interpretability issues and showed that the interpretability of generalized additive models depends on the complexity of the risk functions.

Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data

This paper introduces Neural Oblivious Decision Ensembles (NODE), a new deep learning architecture designed to work with any tabular data; it generalizes ensembles of oblivious decision trees but benefits from both end-to-end gradient-based optimization and the power of multi-layer hierarchical representation learning.
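A hedged sketch of the core building block, a differentiable oblivious tree. NODE itself uses entmax-based feature and threshold selection and stacks many such trees into ensembles and layers; the sigmoid splits below are a simplification for illustration only.

import torch
import torch.nn as nn

class SoftObliviousTree(nn.Module):
    # Every level applies the same soft split to all paths; the output is a
    # probability-weighted mix of 2**depth leaf values.
    def __init__(self, num_features, depth=3):
        super().__init__()
        self.depth = depth
        self.feature_weights = nn.Parameter(torch.randn(depth, num_features))
        self.thresholds = nn.Parameter(torch.zeros(depth))
        self.leaf_values = nn.Parameter(torch.randn(2 ** depth))

    def forward(self, x):  # x: (batch, num_features)
        # Soft split decision per level: probability of taking the "right" branch.
        right = torch.sigmoid(x @ self.feature_weights.t() - self.thresholds)  # (batch, depth)
        probs = torch.ones(x.shape[0], 1, device=x.device)
        for level in range(self.depth):
            r = right[:, level:level + 1]
            probs = torch.cat([probs * (1 - r), probs * r], dim=1)  # (batch, 2**(level+1))
        return probs @ self.leaf_values  # (batch,)

Because every operation is differentiable, the split parameters and leaf values can be trained end-to-end with gradient descent, which is the property the NODE architecture exploits.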

Attention is not not Explanation

It is shown that even when reliable adversarial distributions can be found, they don’t perform well on the simple diagnostic, indicating that prior work does not disprove the usefulness of attention mechanisms for explainability.

InterpretML: A Unified Framework for Machine Learning Interpretability

InterpretML is an open-source Python package which makes machine learning interpretability algorithms available to practitioners and researchers by exposing multiple methods under a unified API and by providing a built-in, extensible visualization platform.
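A minimal usage sketch, assuming the package's documented glassbox API and scikit-learn for a toy dataset:

from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
ebm = ExplainableBoostingClassifier()   # a tree-based GAM/GA2M
ebm.fit(X, y)
show(ebm.explain_global())              # per-feature shape plots in the built-in viewer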

Quasi-hyperbolic momentum and Adam for deep learning

The quasi-hyperbolic momentum algorithm (QHM) is proposed as an extremely simple alteration of momentum SGD that averages a plain SGD step with a momentum step, and a QH variant of Adam, called QHAdam, is also proposed.
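For reference, the QHM update is the ν-weighted average of a plain gradient step and a momentum step (usual notation: α is the learning rate, β the momentum discount, ν the averaging weight):

g_t = \beta \, g_{t-1} + (1 - \beta) \, \nabla L(\theta_{t-1})
\theta_t = \theta_{t-1} - \alpha \left[ (1 - \nu) \, \nabla L(\theta_{t-1}) + \nu \, g_t \right]

Setting ν = 1 recovers exponentially weighted momentum SGD and ν = 0 recovers plain SGD.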

TabNet: Attentive Interpretable Tabular Learning

It is demonstrated that TabNet outperforms other neural network and decision tree variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into the global model behavior.

Learning Global Additive Explanations of Black-Box Models

Through careful experimentation, including a user study on expert users, it is shown qualitatively and quantitatively that learned global additive explanations are able to describe model behavior and yield insights about black-box models.