NEWRON: A New Generalization of the Artificial Neuron to Enhance the Interpretability of Neural Networks
@article{Siciliano2021NEWRONAN,
  title   = {NEWRON: A New Generalization of the Artificial Neuron to Enhance the Interpretability of Neural Networks},
  author  = {F. Siciliano and Maria Sofia Bucarelli and Gabriele Tolomei and Fabrizio Silvestri},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2110.02775}
}
In this work, we formulate NEWRON: a generalization of the McCulloch-Pitts neuron structure. This new framework aims to explore additional desirable properties of artificial neurons. We show that some specializations of NEWRON allow the network to be interpretable with no change in its expressiveness. By simply inspecting the models produced by NEWRON-based networks, we can understand the rules governing the task. Extensive experiments show that the quality of the generated models is better…
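The abstract does not spell out the generalization itself, so as context here is a minimal sketch of the standard artificial (McCulloch-Pitts-style) neuron that NEWRON generalizes; the function name and values below are illustrative only, not the paper's formulation.

```python
import numpy as np

def standard_neuron(x, w, b, activation=np.tanh):
    """Classic artificial neuron: an inner product of inputs and
    weights, plus a bias, passed through a pointwise nonlinearity.
    NEWRON generalizes this structure; the exact generalization is
    not given in the abstract, so only the baseline is shown."""
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 3.0])   # illustrative input
w = np.array([0.8, 0.1, -0.4])   # illustrative weights
print(standard_neuron(x, w, b=0.2))
```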
References (showing 1-10 of 21)
A new type of neurons for machine learning
- International Journal for Numerical Methods in Biomedical Engineering, 2018
This work investigates replacing the inner product of an input vector with a quadratic function of the input vector, thereby upgrading the first-order neuron to a second-order neuron, empowering individual neurons and facilitating the optimization of neural networks.
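To make the idea concrete, here is a hedged sketch of a second-order neuron in which the inner product w·x is replaced by a quadratic form x^T W x + w·x + b; this is one common parameterization and may differ in details from the cited paper's exact formulation.

```python
import numpy as np

def second_order_neuron(x, W, w, b, activation=np.tanh):
    """Second-order neuron sketch: the inner product w.x is replaced
    by a quadratic function of the input, x^T W x + w.x + b. One
    common parameterization; not necessarily the paper's exact form."""
    return activation(x @ W @ x + w @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=3)
W = rng.normal(size=(3, 3))  # quadratic-term weights (illustrative)
w = rng.normal(size=3)       # linear-term weights (illustrative)
print(second_order_neuron(x, W, w, b=0.0))
```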
NeuroX: A Toolkit for Analyzing Individual Neurons in Neural Networks
- AAAI, 2019
The toolkit provides several methods to identify salient neurons with respect to the model itself or an external task, and has the potential to serve as a springboard for various research directions, such as understanding the model, making better architectural choices, model distillation, and controlling data biases.
The perceptron: a probabilistic model for information storage and organization in the brain.
- Psychological Review, 1958
This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory.
What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models
- AAAI, 2019
This paper presents a comprehensive analysis of neurons and proposes two methods: Linguistic Correlation Analysis, a supervised method to extract the most relevant neurons with respect to an extrinsic task, and Cross-model Correlation Analysis, an unsupervised method to extract salient neurons with respect to the model itself.
Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks
- AAAI, 2020
The proposed Relative Attributing Propagation (RAP) decomposes the output predictions of DNNs from a new perspective, separating relevant from irrelevant attributions according to the relative influence between layers; this makes it possible to interpret DNNs with much clearer and more focused visualizations of the separated attributions than conventional explanation methods.
Deep Neural Decision Trees
- ArXiv, 2018
This work presents Deep Neural Decision Trees (DNDT) -- tree models realised by neural networks, which can be easily implemented in NN toolkits, and trained with gradient descent rather than greedy splitting.
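The gradient-trainable splits in DNDT rest on a differentiable (soft) binning function. The sketch below follows the commonly cited construction (a softmax over affine scores built from the cut points); the cut points and temperature are illustrative, and details may differ from the paper.

```python
import numpy as np

def soft_bin(x, cut_points, temperature=0.1):
    """Differentiable binning sketch: a scalar x is assigned a soft
    one-hot vector over n+1 bins defined by n sorted cut points, via
    a softmax over affine scores. With a low temperature this
    approximates hard binning while remaining trainable by gradient
    descent (the mechanism DNDT builds its tree splits on)."""
    n = len(cut_points)
    w = np.arange(1, n + 2)                              # score slopes: 1..n+1
    b = np.concatenate(([0.0], -np.cumsum(cut_points)))  # offsets from cut points
    logits = (w * x + b) / temperature
    e = np.exp(logits - logits.max())                    # numerically stable softmax
    return e / e.sum()

# 3 bins: (-inf, 0), [0, 0.5), [0.5, inf); x = 0.3 falls softly in the middle bin
print(soft_bin(0.3, cut_points=np.array([0.0, 0.5])))
```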
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
- Information Fusion, 2020
Approximation by superpositions of a sigmoidal function
- Mathematics of Control, Signals and Systems, 1989
In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real…
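For reference, the approximating family in this result takes the following well-known form, with σ a fixed sigmoidal function:

```latex
% Finite sums of a sigmoidal composed with affine functionals,
% shown dense in C([0,1]^n) by this result:
G(x) = \sum_{j=1}^{N} \alpha_j \, \sigma\!\left(y_j^{\top} x + \theta_j\right),
\qquad y_j \in \mathbb{R}^n,\ \alpha_j, \theta_j \in \mathbb{R}.
```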
Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons
- AAAI, 2019
This paper proposes a knowledge transfer method via distillation of activation boundaries formed by hidden neurons, together with an activation transfer loss that attains its minimum when the boundaries generated by the student coincide with those generated by the teacher.
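As a rough, hedged sketch of the kind of loss described: for each hidden unit, the teacher's pre-activation sign fixes which side of the activation boundary the student should land on, enforced here with a squared hinge and a margin. The margin value, normalization, and any connector layers are simplifications, not the paper's exact formulation.

```python
import numpy as np

def activation_transfer_loss(teacher_pre, student_pre, margin=1.0):
    """Activation-boundary transfer sketch: where the teacher's
    pre-activation is positive (unit 'on'), penalize the student for
    falling below +margin; where it is non-positive ('off'), penalize
    the student for rising above -margin. The loss is zero exactly
    when the student reproduces the teacher's activation pattern with
    the given margin."""
    active = (teacher_pre > 0).astype(float)            # teacher on/off per unit
    push_up = np.maximum(margin - student_pre, 0.0)     # want s >  margin where active
    push_down = np.maximum(margin + student_pre, 0.0)   # want s < -margin where inactive
    per_unit = active * push_up + (1.0 - active) * push_down
    return np.sum(per_unit ** 2)

t = np.array([0.7, -0.3, 1.2])   # teacher pre-activations (illustrative)
s = np.array([0.2,  0.5, 1.5])   # student pre-activations (illustrative)
print(activation_transfer_loss(t, s))
```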