Training Neural Networks is ∃R-complete
@article{Abrahamsen2021TrainingNN,
  title   = {Training Neural Networks is ER-complete},
  author  = {M. Abrahamsen and L. Kleist and Tillmann Miltzow},
  journal = {ArXiv},
  volume  = {abs/2102.09798},
  year    = {2021}
}
Given a neural network, training data, and a threshold, it was known to be NP-hard to find weights for the neural network such that the total error is below the threshold. We determine the algorithmic complexity of this fundamental problem precisely by showing that it is ∃R-complete. This means that the problem is equivalent, up to polynomial-time reductions, to deciding whether a system of polynomial equations and inequalities with integer coefficients and real unknowns has a solution…
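For concreteness, the following is a small, hypothetical instance (not taken from the paper) of the kind of question that characterizes ∃R: decide whether a system of polynomial equations and inequalities with integer coefficients has a real solution.

\[
  \exists\, x, y \in \mathbb{R} :\quad x^{2} + y^{2} = 1 \;\wedge\; 2xy \ge 1 \;\wedge\; x > 0
\]

This particular system happens to be satisfiable (take x = y = 1/√2). ∃R-completeness of network training means that, up to polynomial-time reductions, deciding whether such a system has a real solution is exactly as hard as deciding whether weights achieving total error below the threshold exist.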