Corpus ID: 235732139

Class Introspection: A Novel Technique for Detecting Unlabeled Subclasses by Leveraging Classifier Explainability Methods

@article{Kage2021ClassIA,
  title={Class Introspection: A Novel Technique for Detecting Unlabeled Subclasses by Leveraging Classifier Explainability Methods},
  author={Patrick Kage and Pavlos Andreadis},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.01657}
}
Detecting latent structure within a dataset is a crucial step in its analysis. However, existing state-of-the-art techniques for subclass discovery are limited: they either detect only very small numbers of outliers or lack the statistical power to deal with complex data such as images or audio. This paper proposes a solution to this subclass discovery problem: by leveraging instance explanation methods, an existing classifier can be extended to detect latent…
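As a rough illustration of the idea described in the abstract (a sketch only, not the authors' published pipeline): compute per-instance attributions with an off-the-shelf explainer and cluster them within each predicted class, so that dense regions in explanation space flag candidate unlabeled subclasses. SHAP and DBSCAN are stand-ins drawn from the references below; the model, data, and parameter names are placeholders.

```python
# Sketch: detect latent subclasses by clustering per-instance explanations.
# Assumes a fitted scikit-learn-style classifier `model` with integer class
# labels 0..K-1 and a feature matrix X; SHAP and DBSCAN are illustrative
# choices, not necessarily the authors' exact method.
import numpy as np
import shap
from sklearn.cluster import DBSCAN

def find_latent_subclasses(model, X, eps=0.5, min_samples=10):
    preds = model.predict(X)
    explainer = shap.Explainer(model.predict_proba, X)   # model-agnostic explainer
    attributions = explainer(X).values                   # (n_samples, n_features, n_classes)

    subclasses = {}
    for label in np.unique(preds):
        idx = np.where(preds == label)[0]
        # One explanation vector per instance, for the predicted class only.
        class_attr = attributions[idx, :, label]
        # Dense regions in explanation space are candidate subclasses;
        # -1 marks instances DBSCAN treats as noise/outliers.
        subclasses[label] = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(class_attr)
    return subclasses
```

Instances labelled -1 within a class fall in no dense explanation region and would be treated as outliers rather than members of a latent subclass.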

References

Showing 1–10 of 22 references
Detection and Mitigation of Rare Subclasses in Neural Network Classifiers
The new approach is underpinned by an easy-to-compute commonality metric that supports the detection of rare subclasses, and comprises methods for reducing their impact during both model training and model exploitation.
A Unified Approach to Interpreting Model Predictions
A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
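A minimal, hypothetical usage sketch of the shap package on a tree-ensemble classifier (the model and dataset are placeholders, not from the paper):

```python
# Minimal SHAP usage sketch: explain a gradient-boosting classifier's predictions.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)      # exact, fast path for tree ensembles
shap_values = explainer.shap_values(X)     # one attribution per feature per instance
print(shap_values.shape)                   # (n_samples, n_features)
```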
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
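A minimal, hypothetical LIME usage sketch (the classifier and dataset are placeholders):

```python
# Minimal LIME usage sketch: explain one prediction of a black-box classifier
# by fitting an interpretable local surrogate model around that instance.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier().fit(X, y)

explainer = LimeTabularExplainer(X, class_names=["setosa", "versicolor", "virginica"])
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())   # (feature condition, weight) pairs for the local model
```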
Learning Important Features Through Propagating Activation Differences
DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input, is presented.
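The original DeepLIFT code is not shown here; the following sketch uses Captum's DeepLift implementation as a stand-in, with a toy network and random inputs as placeholders:

```python
# Sketch of DeepLIFT-style attributions via Captum (not the original authors' code).
import torch
import torch.nn as nn
from captum.attr import DeepLift

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))  # placeholder classifier
x = torch.randn(8, 20)                                               # placeholder batch of inputs

dl = DeepLift(net)
# Contribution of every input feature to the score of class 0,
# measured against an all-zeros reference input by default.
attributions = dl.attribute(x, target=0)
print(attributions.shape)   # torch.Size([8, 20])
```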
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets), and establishes the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks.
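A hedged sketch of a gradient-based saliency map in this style, written in PyTorch (the ConvNet, class index, and input are placeholders):

```python
# Gradient-based saliency sketch: the saliency of each input pixel is the
# magnitude of the class score's gradient with respect to that pixel.
import torch
import torchvision.models as models

cnn = models.resnet18(weights=None).eval()       # placeholder (untrained) ConvNet
image = torch.randn(1, 3, 224, 224, requires_grad=True)

score = cnn(image)[0, 281]                       # unnormalised score of one class
score.backward()                                 # gradients flow back to the input
saliency = image.grad.abs().max(dim=1).values    # max over colour channels -> (1, 224, 224)
```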
Explainable artificial intelligence: A survey
Recent developments in XAI in supervised learning are summarized, a discussion on its connection with artificial general intelligence is started, and proposals for further research directions are given.
Toward Faithful Explanatory Active Learning with Self-explainable Neural Nets
From the user’s perspective, interaction in active learning is very opaque: the user only sees a sequence of instances to be labeled, and has no idea what the model believes or how it behaves.
"Why Should I Trust Interactive Learners?" Explaining Interactive Queries of Classifiers to Users
In each step, the learner explains its interactive query to the user, who can inspect visual explanations of the corresponding predictions, boosting the predictive and explanatory power of, and the trust in, the learned model.
A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise
DBSCAN, a new clustering algorithm relying on a density-based notion of clusters and designed to discover clusters of arbitrary shape, is presented; it requires only one input parameter and supports the user in determining an appropriate value for it.
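A minimal usage sketch with scikit-learn's DBSCAN implementation (the dataset and parameter values are placeholders):

```python
# Minimal DBSCAN sketch: density is controlled by a neighbourhood radius (eps)
# and a minimum neighbourhood size (min_samples); points in no dense region
# are labelled -1 (noise).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=500, noise=0.05, random_state=0)
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(np.unique(labels))   # e.g. [-1  0  1]: two arbitrary-shaped clusters plus noise
```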
Mixture Models
Mixture models are an interesting and flexible model family. Their uses include, for example, generative component models, clustering, and density estimation.
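A minimal sketch of a Gaussian mixture model used for both clustering and density estimation (the component count and data are placeholder choices):

```python
# Gaussian mixture sketch: hard cluster assignments and per-point densities.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=3, random_state=0)
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

hard_labels = gmm.predict(X)               # clustering: most likely component per point
densities = np.exp(gmm.score_samples(X))   # density estimation: p(x) under the mixture
```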