Corpus ID: 235248205

An Explainable Probabilistic Classifier for Categorical Data Inspired to Quantum Physics

@article{Guidotti2021AnEP,
  title={An Explainable Probabilistic Classifier for Categorical Data Inspired to Quantum Physics},
  author={E. Guidotti and Alfio Ferrara},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.13988}
}
This paper presents Sparse Tensor Classifier (STC), a supervised classification algorithm for categorical data inspired by the notion of superposition of states in quantum physics. By regarding an observation as a superposition of features, we introduce the concept of wave-particle duality in machine learning and propose a generalized framework that unifies classical and quantum probability. We show that STC possesses a wide range of desirable properties not available in most other…
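The paper defines the actual STC update rules; the toy classifier below is only an illustrative sketch of the "observation as a superposition of features" idea, not the authors' algorithm. It treats per-class amplitudes as square roots of categorical feature frequencies and scores classes Born-rule style, with probability proportional to the squared amplitude. All names and the scoring rule are assumptions made for illustration.

```python
from collections import Counter, defaultdict
import math

class SuperpositionClassifier:
    """Toy quantum-inspired classifier for categorical data.
    An observation is treated as a superposition of its features;
    the class amplitude is the sum of square-root feature frequencies,
    and the class score is the squared amplitude (Born-rule style).
    Illustrative only; this is NOT the STC algorithm from the paper."""

    def fit(self, X, y):
        # X: list of feature lists (categorical); y: list of class labels.
        self.counts = defaultdict(Counter)  # counts[label][feature]
        self.totals = Counter()             # total feature count per label
        for features, label in zip(X, y):
            self.counts[label].update(features)
            self.totals[label] += len(features)
        return self

    def predict(self, features):
        scores = {}
        for label, total in self.totals.items():
            # Amplitude: sum of sqrt of conditional feature frequencies.
            amp = sum(math.sqrt(self.counts[label][f] / total) for f in features)
            scores[label] = amp ** 2        # probability ~ |amplitude|^2
        z = sum(scores.values()) or 1.0
        return max(scores, key=scores.get), {k: v / z for k, v in scores.items()}

clf = SuperpositionClassifier().fit(
    X=[["red", "round"], ["red", "long"], ["green", "round"]],
    y=["apple", "pepper", "apple"],
)
label, probs = clf.predict(["red", "round"])
```

Because the amplitudes add before squaring, features reinforce each other within a class, which is one simple way to mimic interference between superposed states.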

Tables from this paper

References

Showing 1-10 of 22 references
Cooperative neural networks (CoNN): Exploiting prior independence structure for improved classification
Empirical evaluation of CoNN-sLDA on supervised text classification tasks demonstrates that the theoretical advantages of prior independence structure can be realized in practice: a 23 percent reduction in error on the challenging MultiSent data set compared to the state of the art.
Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
An efficient variational approximation to the mutual information is developed, and the effectiveness of the method is shown on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation
This work proposes a novel, polynomial-time approximation of Shapley values in deep neural networks, and shows that this method produces significantly better approximations of Shapley values than existing state-of-the-art attribution methods.
Neural Attentive Bag-of-Entities Model for Text Classification
A Neural Attentive Bag-of-Entities model is proposed: a neural network that performs text classification using entities in a knowledge base, combining simple high-recall entity detection based on a dictionary with a novel neural attention mechanism that enables the model to focus on a small number of unambiguous and relevant entities.
The Diversified Ensemble Neural Network
Results show that the proposed ensemble technique, which constructs a so-called diversified ensemble layer to combine multiple networks as individual modules, can notably improve the accuracy and stability of the original neural networks with negligible extra time and space overhead.
Machine learning algorithm validation with a limited sample size
The authors' simulations show that K-fold Cross-Validation (CV) produces strongly biased performance estimates with small sample sizes, and that the bias is still evident at a sample size of 1000, while Nested CV and train/test split approaches produce robust and unbiased performance estimates regardless of sample size.
Scikit-learn: Machine Learning in Python
Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing…
Machine learning in chemoinformatics and drug discovery.
Basic principles and recent case studies are presented to demonstrate the utility of machine learning techniques in chemoinformatics analyses, and limitations and future directions are discussed to guide further development in this evolving field.
One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
This work introduces AI Explainability 360, an open-source software toolkit featuring eight diverse and state-of-the-art explainability methods and two evaluation metrics, and provides a taxonomy to help entities requiring explanations navigate the space of explanation methods.