Corpus ID: 214693381

A copula-based visualization technique for a neural network

  • Y. Kubo, Y. Komori, T. Okuyama, H. Tokieda
  • Published 2020
  • Computer Science, Mathematics
  • ArXiv
  • Interpretability of machine learning is defined as the extent to which humans can comprehend the reason for a decision. A neural network, however, is not considered interpretable because of the ambiguity in its decision-making process. In this study, we therefore propose a new algorithm that reveals which feature values a trained neural network considers important and which paths are mainly traced in the process of decision-making. In the proposed algorithm, the score estimated by the correlation…
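The abstract is truncated, but the general flavor of a copula-based dependence score can be sketched. The following is a loose illustration under assumed details, not the authors' algorithm: Spearman's rank correlation between an input feature and the network's output depends only on the ranks of the data (equivalently, on their copula, independent of the marginals), so it can serve as a crude monotone-dependence measure of feature importance. The toy network and its weights are entirely hypothetical.

```python
# Hypothetical sketch (NOT the paper's method): score each input feature by
# |Spearman's rho| with a trained network's output. Spearman's rho is the
# Pearson correlation of the ranks, so it depends only on the copula of the
# data, not on the marginal distributions.
import numpy as np

def spearman_rho(x, y):
    """Pearson correlation computed on the ranks of x and y."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hand-crafted toy "trained" network: 3 inputs -> 2 hidden (tanh) -> 1 output.
# Feature 0 is given deliberately large weights so it dominates the output.
W1 = np.array([[2.0, 1.5],    # feature 0: large, consistently signed influence
               [0.2, -0.1],   # feature 1: small influence
               [-0.1, 0.2]])  # feature 2: small influence
W2 = np.array([1.0, 1.0])

def net(X):
    return np.tanh(X @ W1) @ W2

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = net(X)

# Copula-style importance score per feature: |Spearman's rho| with the output.
scores = [abs(spearman_rho(X[:, j], y)) for j in range(3)]
print(scores)
```

With these weights the output is monotone increasing in feature 0, so its rank correlation with the output is large, while the weakly weighted features score near zero.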
