# ReLU Neural Networks of Polynomial Size for Exact Maximum Flow Computation

@inproceedings{Hertrich2021ReLUNN, title={ReLU Neural Networks of Polynomial Size for Exact Maximum Flow Computation}, author={Christoph Hertrich and Leon Sering}, year={2021} }

This paper studies the expressive power of artificial neural networks with rectified linear units. In order to study them as a model of real-valued computation, we introduce the concept of Max-Affine Arithmetic Programs and show equivalence between them and neural networks concerning natural complexity measures. We then use this result to show that two fundamental combinatorial optimization problems can be solved with polynomial-size neural networks. First, we show that for any undirected graph…
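As a quick illustration of the connection the abstract describes, the basic gadget that lets ReLU networks emulate max-affine computations is the identity max(a, b) = a + relu(b − a). A minimal sketch (not the paper's construction; the function names here are illustrative):

```python
def relu(x: float) -> float:
    """Rectified linear unit: x -> max(0, x)."""
    return max(0.0, x)

def max_via_relu(a: float, b: float) -> float:
    """max(a, b) expressed through a single ReLU: max(a, b) = a + relu(b - a)."""
    return a + relu(b - a)

def min_via_relu(a: float, b: float) -> float:
    """min(a, b) = -max(-a, -b), so it reduces to the same gadget."""
    return -max_via_relu(-a, -b)

# max(3, 5) = 3 + relu(2) = 5; min(3, 5) = 3
assert max_via_relu(3.0, 5.0) == 5.0
assert min_via_relu(3.0, 5.0) == 3.0
```

Chaining this gadget yields maxima and minima of arbitrarily many affine forms, which is why max-affine programs and ReLU networks track each other under natural complexity measures.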

## One Citation

### Training Fully Connected Neural Networks is ∃R-Complete

- Computer Science, ArXiv
- 2022

The algorithmic problem of finding the optimal weights and biases for a two-layer fully connected neural network to a given set of data points is considered and it is shown that even very simple networks are difficult to train.

## References

Showing 1–10 of 66 references

### Training Fully Connected Neural Networks is ∃R-Complete

- Computer Science, ArXiv
- 2022

The algorithmic problem of finding the optimal weights and biases for a two-layer fully connected neural network to a given set of data points is considered and it is shown that even very simple networks are difficult to train.

### Neural networks with linear threshold activations: structure and algorithms

- Computer Science, Mathematics, IPCO
- 2022

This article precisely characterizes the class of functions representable by such neural networks, shows that two hidden layers are necessary and sufficient to represent any function in the class, and proposes a new class of neural networks called shortcut linear threshold networks.

### Lower bounds over Boolean inputs for deep neural networks with ReLU gates

- Computer Science, Electron. Colloquium Comput. Complex.
- 2017

A study of high-depth networks using ReLU gates, which implement the function x ↦ max{0, x}, to understand the role of depth by showing size lower bounds against such network architectures in parameter regimes hitherto unexplored.

### Tight Hardness Results for Training Depth-2 ReLU Networks

- Computer Science, ITCS
- 2021

It is proved that, under reasonable hardness assumptions, any proper learning algorithm for finding the best-fitting ReLU must run in time exponential in $1/\epsilon^2$, which implies the first separation between proper and improper algorithms for learning a ReLU.

### The Computational Complexity of ReLU Network Training Parameterized by Data Dimensionality

- Computer Science, J. Artif. Intell. Res.
- 2022

This work provides running-time lower bounds in terms of W[1]-hardness for the parameter d and proves that known brute-force strategies are essentially optimal (assuming the Exponential Time Hypothesis).

### Bounding and Counting Linear Regions of Deep Neural Networks

- Computer Science, ICML
- 2018

The results indicate that a deep rectifier network can only have more linear regions than every shallow counterpart with the same number of neurons if that number exceeds the dimension of the input.

### On the Number of Linear Regions of Deep Neural Networks

- Computer Science, NIPS
- 2014

We study the complexity of functions computable by deep feedforward neural networks with piecewise linear activations in terms of the symmetries and the number of linear regions that they have. Deep…
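The linear-region counts studied in the two entries above are easy to see in one dimension: each hidden ReLU unit contributes at most one breakpoint, so a one-hidden-layer network on the real line has at most (number of units) + 1 linear regions. A small sketch counting potential breakpoints (an upper bound, since output weights could cancel pieces; the function name is illustrative):

```python
def linear_regions_1d(weights: list[float], biases: list[float]) -> int:
    """Upper bound on the linear regions of x -> sum_i v_i * relu(w_i*x + b_i)
    over the real line. Each hidden unit with w_i != 0 switches on/off at the
    breakpoint x = -b_i / w_i; the regions are the intervals between the
    distinct breakpoints."""
    breakpoints = {-b / w for w, b in zip(weights, biases) if w != 0}
    return len(breakpoints) + 1

# Three units with distinct breakpoints (0, 1, -0.5) give four regions;
# a unit with zero input weight adds no breakpoint.
assert linear_regions_1d([1.0, 1.0, 2.0], [0.0, -1.0, 1.0]) == 4
assert linear_regions_1d([1.0, 0.0], [0.0, 5.0]) == 2
```

The depth results cited above show that composing such layers can multiply region counts, which a single shallow layer of the same width cannot match once the input dimension is exceeded.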

### A neural network approach to the maximum flow problem

- Computer Science, IEEE Global Telecommunications Conference (GLOBECOM '91): Countdown to the New Millennium, Conference Record
- 1991

An attempt is made to show how to apply neural network optimization techniques to solve the resulting linear programming problem in real time, using an extended version of the linear programming network proposed by L.O. Chua and G.N. Lin (1984, 1985).

### Arithmetic Circuits: A survey of recent results and open questions

- Computer Science, Mathematics, Found. Trends Theor. Comput. Sci.
- 2010

The goal of this monograph is to survey the field of arithmetic circuit complexity, focusing mainly on what it finds to be the most interesting and accessible research directions, with an emphasis on works from the last two decades.

### Understanding Deep Neural Networks with Rectified Linear Units

- Computer Science, Mathematics, Electron. Colloquium Comput. Complex.
- 2017

The gap theorems hold for smoothly parametrized families of "hard" functions, in contrast to the countable, discrete families known in the literature, and a new lower bound on the number of affine pieces is shown that is larger than previous constructions in certain regimes of the network architecture.