# Significance Tests for Neural Networks

```bibtex
@article{Horel2019SignificanceTF,
  title   = {Significance Tests for Neural Networks},
  author  = {Enguerrand Horel and Kay Giesecke},
  journal = {Econometrics: Econometric \& Statistical Methods - General eJournal},
  year    = {2019}
}
```

Neural networks underpin many of the best-performing AI systems. Their success is largely due to their strong approximation properties, superior predictive performance, and scalability. However, a major caveat is explainability: neural networks are often perceived as black boxes that permit little insight into how predictions are being made. We tackle this issue by developing a pivotal test to assess the statistical significance of the feature variables of a neural network. We propose a…
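The paper's pivotal test has its own asymptotic theory; as a rough illustration of the underlying idea only (not the authors' test statistic), one can rank input features of an MLP by how much predictive performance degrades when the feature-target link is broken by permutation. The data-generating process below is invented for the sketch.

```python
# Hypothetical sketch of a permutation-style feature check for an
# MLP regression.  This is NOT the pivotal test proposed in the
# paper; it only illustrates the general idea of ranking input
# features by their effect on predictive performance.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
# Only the first two features matter; the third is pure noise.
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X, y)
base_mse = np.mean((model.predict(X) - y) ** 2)

increases = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
    increases.append(np.mean((model.predict(Xp) - y) ** 2) - base_mse)
print(increases)  # large for informative features, near zero for the noise one
```

A formal significance test, as in the paper, additionally requires the sampling distribution of such a statistic; the sketch only produces an importance ranking.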

## 28 Citations

Sensitivity based Neural Networks Explanations

- Computer Science, ArXiv
- 2018

A way to assess the relative input features importance of a neural network based on the sensitivity of the model output with respect to its input is presented and implemented into an open-source Python package that allows its users to easily generate and visualize explanations for their neural networks.
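The sensitivity idea described above can be sketched for a hand-coded one-hidden-layer network: average the absolute gradient of the output with respect to each input over a sample of points. The weights and data below are random placeholders, not the package's actual API.

```python
# Hypothetical sketch of sensitivity-based feature importance:
# the mean absolute gradient of a one-hidden-layer network's
# output with respect to each input feature.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))   # hidden-layer weights (4 units, 3 inputs)
b1 = rng.normal(size=4)
w2 = rng.normal(size=4)        # output-layer weights

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return w2 @ h

def input_gradient(x):
    # d(output)/dx via the chain rule: W1^T (w2 * tanh'(W1 x + b1))
    h = np.tanh(W1 @ x + b1)
    return (w2 * (1 - h ** 2)) @ W1

X = rng.normal(size=(200, 3))
importance = np.mean([np.abs(input_gradient(x)) for x in X], axis=0)
print(importance)  # one sensitivity score per input feature
```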

An Interpretable Neural Network for Parameter Inference

- Computer Science, ArXiv
- 2021

An application to an asset pricing problem demonstrates how the PENN can be used to explore nonlinear risk dynamics in financial markets, and to compare empirical nonlinear effects to behavior posited by financial theory.

Asset Pricing with Neural Networks: A Variable Significant Test

- Computer Science
- 2020

The proposed test permits one to assess the statistical significance of the input variables in an MLP regression model and is applied to identify the most significant predictors in measuring asset risk premiums.

Asset Pricing with Neural Networks: A Variable Significant Test

- Computer Science
- 2020

The main results show the superiority of NN relative to the linear regression for forecasting excess returns benchmarking against naive zero forecasts and the most significant predictors are inflation, percent equity issuing, and default return spread.

Consistent Feature Selection for Analytic Deep Neural Networks

- Computer Science, NeurIPS
- 2020

It is proved that for a wide class of networks, including deep feed-forward neural networks, convolutional neural networks, and a major sub-class of residual neural networks, the Adaptive Group Lasso selection procedure with Group Lasso as the base estimator is selection-consistent.

DeepVix: Explaining Long Short-Term Memory Network With High Dimensional Time Series Data

- Computer Science
- 2020

This paper aims to combine the strengths of both data science fields into a unified system, called DeepVix, which focuses on the visual explainability of the multivariate time-series predictions using neural networks.

Fundamental Issues Regarding Uncertainties in Artificial Neural Networks

- Computer Science, ArXiv
- 2020

This work provides a discussion of the standard interpretations of this problem and shows how a quantitative approach based upon long-standing methods can be practically applied to the task of early diagnosis of dementing diseases using magnetic resonance imaging.

Neural Networks and Value at Risk

- Computer Science, ArXiv
- 2020

This design feature enables the balanced incentive recurrent neural network (RNN) to outperform the single incentive RNN as well as any other neural network or established approach by statistically and economically significant levels.

Computationally Efficient Feature Significance and Importance for Machine Learning Models

- Computer Science, ArXiv
- 2019

A simple and computationally efficient significance test for the features of a machine learning model that identifies the statistically significant features as well as feature interactions of any order in a hierarchical manner, and generates a model-free notion of feature importance.

## References

Showing 1–10 of 64 references

Testing for neglected nonlinearity in time series models: A comparison of neural network methods and alternative tests

- Computer Science
- 1993

Learning in Artificial Neural Networks: A Statistical Perspective

- Computer Science, Neural Computation
- 1989

Concepts and analytical results from the literatures of mathematical statistics, econometrics, systems identification, and optimization theory relevant to the analysis of learning in artificial neural networks are reviewed.

Illuminating the “black box”: a randomization approach for understanding variable contributions in artificial neural networks

- Computer Science
- 2002

Neural model identification, variable selection and model adequacy

- Mathematics
- 1999

In recent years an impressive array of publications has appeared claiming considerable successes of neural networks in modelling financial data but sceptical practitioners and statisticians are still…

Connectionist nonparametric regression: Multilayer feedforward networks can learn arbitrary mappings

- Computer Science, Neural Networks
- 1990

Learning Important Features Through Propagating Activation Differences

- Computer Science, ICML
- 2017

DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input, is presented.
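DeepLIFT's actual Rescale and RevealCancel propagation rules are more involved; as a toy illustration of its core "contributions relative to a reference input" idea only, note that for a purely linear model the gradient times (input minus reference) attribution decomposes the output difference exactly. The weights below are made up for the example.

```python
# Toy illustration of reference-based attribution (NOT DeepLIFT's
# actual propagation rules): for a linear model, gradient times
# (input - reference) exactly decomposes the output difference.
import numpy as np

w = np.array([0.8, -1.2, 0.3])
b = 0.5

def f(x):
    return w @ x + b  # linear model, so the gradient is just w

x = np.array([1.0, -2.0, 0.5])
x_ref = np.zeros(3)         # the "reference" input to compare against
contrib = w * (x - x_ref)   # per-feature contribution scores
print(contrib.sum(), f(x) - f(x_ref))  # the two values match exactly
```

For nonlinear networks this exact decomposition breaks down, which is precisely what DeepLIFT's rules are designed to handle.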

A comparison of neural networks and linear scoring models in the credit union environment

- Computer Science
- 1996

Some Asymptotic Results for Learning in Single Hidden-Layer Feedforward Network Models

- Mathematics
- 1989

We investigate the properties of a recursive estimation procedure (the method of “back-propagation”) for a class of nonlinear regression models (single hidden-layer feedforward network…

Understanding the difficulty of training deep feedforward neural networks

- Computer Science, AISTATS
- 2010

The objective is to understand why standard gradient descent from random initialization performs so poorly with deep neural networks, to shed light on recent relative successes, and to help design better algorithms in the future.
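The initialization scheme proposed in that paper, now known as Xavier/Glorot initialization, scales weights by sqrt(2 / (fan_in + fan_out)) so that activation variance stays roughly constant across layers. A minimal numerical sketch of the effect:

```python
# Xavier/Glorot initialization: scaling weights by
# sqrt(2 / (fan_in + fan_out)) keeps the output variance of a
# linear layer near the input scale, unlike unit-variance weights.
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 256, 256
x = rng.normal(size=(1000, fan_in))  # unit-variance inputs

W_naive = rng.normal(size=(fan_in, fan_out))
W_glorot = rng.normal(size=(fan_in, fan_out)) * np.sqrt(2.0 / (fan_in + fan_out))

print(np.std(x @ W_naive))   # inflated by roughly sqrt(fan_in) = 16
print(np.std(x @ W_glorot))  # stays near the input scale of 1
```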

Nonparametric Inferences for Additive Models

- Mathematics, Computer Science
- 2005

The generalized likelihood ratio (GLR) tests are extended to additive models, using the backfitting estimator, and it is proved that the GLR tests are asymptotically optimal in terms of rates of convergence for nonparametric hypothesis testing.