Corpus ID: 219252996

A simple geometric proof for the benefit of depth in ReLU networks

@article{Amrami2019ASG,
  title={A simple geometric proof for the benefit of depth in ReLU networks},
  author={Asaf Amrami and Yoav Goldberg},
  journal={ArXiv},
  year={2019},
  volume={abs/2101.07126}
}
We present a simple proof for the benefit of depth in multi-layer feedforward networks with rectified activation (“depth separation”). Specifically, we present a sequence of classification problems indexed by m such that (a) for any fixed-depth rectified network, there exists an m above which classifying problem m correctly requires an exponential number of parameters (in m); and (b) for any problem in the sequence, we present a concrete neural network with linear depth (in m) and small constant width… 
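The sketch below is not this paper's construction; it is a minimal numerical illustration (in Python, with arbitrary illustrative parameters) of the standard sawtooth argument behind such depth separations: composing a width-2 ReLU "tent" block m times yields a function on [0, 1] with 2^m linear pieces, whereas a one-hidden-layer ReLU network with k units realizes at most k + 1 pieces on the line, so matching the deep network would require exponentially many units.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def deep_sawtooth(x, m):
    """Depth-m, width-2 ReLU network computing the m-fold tent-map composition.

    Returns the outputs and the activation pattern of every hidden ReLU;
    the pattern identifies the linear region an input falls into.
    """
    bits = []
    for _ in range(m):
        a = relu(x)            # unit 1, kink at 0
        b = relu(x - 0.5)      # unit 2, kink at 1/2
        bits.append(x > 0.0)
        bits.append(x > 0.5)
        x = 2.0 * a - 4.0 * b  # tent map: 2x on [0, 1/2], 2 - 2x on [1/2, 1]
    return x, np.stack(bits, axis=-1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = rng.uniform(0.0, 1.0, 200_000)  # random samples avoid exact kink points
    for m in (1, 3, 6, 8):
        _, patterns = deep_sawtooth(xs, m)
        regions = len({row.tobytes() for row in patterns.astype(np.uint8)})
        print(f"depth {m}, width 2: {regions} activation regions (2^{m} = {2 ** m})")

Running this prints a region count doubling with each added layer, which is the qualitative picture the depth-separation results formalize.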

References

Showing 1-10 of 15 references

The Power of Depth for Feedforward Neural Networks

It is shown that there is a simple (approximately radial) function on $\mathbb{R}^d$, expressible by a small 3-layer feedforward neural network, which cannot be approximated by any 2-layer network unless its width is exponential in the dimension.

On the Number of Linear Regions of Deep Neural Networks

We study the complexity of functions computable by deep feedforward neural networks with piecewise linear activations in terms of the symmetries and the number of linear regions that they have.

Understanding Deep Neural Networks with Rectified Linear Units

The gap theorems hold for smoothly parametrized families of "hard" functions, in contrast to the countable, discrete families known in the literature, and a new lower bound on the number of affine pieces is shown that is larger than previous constructions in certain regimes of the network architecture.

Bounding and Counting Linear Regions of Deep Neural Networks

The results indicate that a deep rectifier network can only have more linear regions than every shallow counterpart with the same number of neurons if that number exceeds the dimension of the input.
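As a hedged illustration of the counting side of this line of work (not code from any of the cited papers; the architecture, dimensions, and sampling below are arbitrary choices): sample inputs, record each layer's ReLU activation pattern, and take the number of distinct patterns as a lower bound on the number of linear regions the samples touch.

import numpy as np

def count_activation_patterns(weights, biases, xs):
    """Lower-bound the number of linear regions hit by the inputs xs."""
    patterns = []
    h = xs
    for W, b in zip(weights, biases):
        z = h @ W + b
        patterns.append(z > 0.0)     # one bit per hidden unit
        h = np.maximum(z, 0.0)       # ReLU
    bits = np.concatenate(patterns, axis=1)
    return len({row.tobytes() for row in bits.astype(np.uint8)})

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, width, depth, n = 2, 8, 3, 50_000
    weights = [rng.normal(size=(d if i == 0 else width, width)) for i in range(depth)]
    biases = [rng.normal(size=width) for _ in range(depth)]
    xs = rng.uniform(-1.0, 1.0, size=(n, d))
    print("distinct activation patterns:", count_activation_patterns(weights, biases, xs))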

Complexity of Linear Regions in Deep Networks

The theory suggests that, even after training, the number of linear regions is far below exponential, an intuition that matches empirical observations; it concludes that the practical expressivity of neural networks is likely far below the theoretical maximum and that this gap can be quantified.

On the number of response regions of deep feed forward networks with piece-wise linear activations

This paper offers a framework, based on computational geometry, for comparing deep and shallow models that belong to the family of piecewise linear functions; it looks at a deep rectifier multi-layer perceptron with linear output units and compares it with a single-layer version of the model.

Benefits of Depth in Neural Networks

This result is proved here for a class of nodes termed "semi-algebraic gates", which includes the common choices of ReLU, maximum, indicator, and piecewise polynomial functions, therefore establishing benefits of depth against not just standard networks with ReLU gates, but also convolutional networks with ReLU and maximization gates, sum-product networks, and boosted decision trees.

On the Expressive Power of Deep Neural Networks

We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute.

On the complexity of shallow and deep neural network classifiers

Upper and lower bounds on network complexity are established, based on the number of hidden units and on their activation functions, showing that deep architectures are able, with the same number of resources, to address more difficult classification problems.

Representation Benefits of Deep Feedforward Networks

This note provides a family of classification problems, indexed by a positive integer $k$, where all shallow networks with fewer than exponentially (in $k$) many nodes exhibit error at least $1/6$, whereas a deep network with 2 nodes in each of $2k$ layers achieves zero error.