• Computer Science
  • Published in NIPS 2015

Mind the Gap: A Generative Approach to Interpretable Feature Selection and Extraction

@inproceedings{Kim2015MindTG,
  title={Mind the Gap: A Generative Approach to Interpretable Feature Selection and Extraction},
  author={Been Kim and Julie A. Shah and Finale Doshi-Velez},
  booktitle={NIPS},
  year={2015}
}
We present the Mind the Gap Model (MGM), an approach for interpretable feature extraction and selection. By placing interpretability criteria directly into the model, we allow the model both to optimize parameters related to interpretability and to directly report a global set of distinguishable dimensions to assist with further data exploration and hypothesis generation. MGM extracts distinguishing features on real-world datasets of animal features, recipe ingredients, and disease co-occurrences.
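The abstract describes the approach only at a high level. As a rough, hypothetical illustration of what "reporting a global set of distinguishable dimensions" can look like, the sketch below ranks binary feature dimensions by the gap between their per-group frequencies on synthetic data. It is only a toy stand-in for the intuition behind the model's name, not the paper's generative model or its inference procedure, and every name in it (feature_gaps, the planted dimensions, the 0.5 threshold) is an assumption made for illustration.

# Toy sketch of the "gap" intuition behind distinguishing dimensions.
# NOTE: this is NOT the MGM generative model or its inference procedure;
# it only ranks binary feature dimensions by how far apart their
# per-group usage rates are, as a rough analogue of a "gap".
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: 200 items, 6 features, 2 latent groups.
# Features 0 and 3 are planted to differ strongly across groups
# ("distinguishing"); the remaining features are shared noise.
labels = rng.integers(0, 2, size=200)
probs = np.full((2, 6), 0.5)
probs[0, 0], probs[1, 0] = 0.9, 0.1
probs[0, 3], probs[1, 3] = 0.1, 0.9
X = rng.binomial(1, probs[labels])

def feature_gaps(X, labels):
    """Absolute gap between per-group feature frequencies (a crude,
    hypothetical stand-in for the model-based criterion MGM optimizes)."""
    rates = np.array([X[labels == g].mean(axis=0) for g in np.unique(labels)])
    return np.abs(rates[0] - rates[1])

gaps = feature_gaps(X, labels)
print("per-dimension gaps:", np.round(gaps, 2))
print("reported distinguishing dimensions:", np.flatnonzero(gaps > 0.5))

On this toy data the planted dimensions should come out with gaps near 0.8 while the noise dimensions stay near zero, so dimensions 0 and 3 are the ones reported.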


Citations

Publications citing this paper (45 in total; a selection is listed below).

On the Semantic Interpretability of Artificial Intelligence Models

A Survey of Explainable AI Terminology

Building more accurate decision trees with the additive tree

Ex-Twit: Explainable Twitter Mining on Health Data

Graph-structured Sparse Mixed Models for Genetic Association with Confounding Factors Correction

Multi-Dimensional Explanation of Reviews

References

Publications referenced by this paper (33 in total; a selection is listed below).

Feature Selection for Clustering: A Review

Sparse Subspace Clustering: Algorithm, Theory, and Applications.

Fully Sparse Topic Models
  • ECML-PKDD
  • 2012