# Trainability and Data-dependent Initialization of Over-parameterized ReLU Neural Networks

    @article{Shin2019TrainabilityAD,
      title   = {Trainability and Data-dependent Initialization of Over-parameterized ReLU Neural Networks},
      author  = {Yeonjong Shin and George Em Karniadakis},
      journal = {ArXiv},
      year    = {2019},
      volume  = {abs/1907.09696}
    }

A neural network is said to be over-specified if its representational power is more than needed, and over-parameterized if the number of parameters is larger than the number of training data. In both cases, the number of neurons is larger than necessary. In many applications, over-specified or over-parameterized neural networks are successfully employed and shown to be trained effectively. In this paper, we study the trainability of ReLU networks, a necessary condition…
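The over-parameterization condition above is a simple count: the network has more trainable parameters than there are training samples. A minimal sketch (not from the paper; the network shape and helper names are illustrative) for a one-hidden-layer scalar-output ReLU net f(x) = cᵀ relu(Wx + b) + c₀:

```python
# Illustrative sketch: count the parameters of a width-m, one-hidden-layer
# ReLU network with d-dimensional input and scalar output, and check the
# over-parameterization condition (#parameters > #training samples).

def num_params(d, m):
    """Parameter count of f(x) = c^T relu(W x + b) + c0."""
    hidden = m * d + m   # W is (m x d), b is (m,)
    output = m + 1       # c is (m,), plus scalar bias c0
    return hidden + output

def is_over_parameterized(d, m, n):
    """True if the network has more parameters than training samples n."""
    return num_params(d, m) > n

# Example: 10-dimensional inputs, hidden width 100, 500 training samples.
print(num_params(10, 100))                  # 1201
print(is_over_parameterized(10, 100, 500))  # True
```

Even a modest width can make a network over-parameterized in this sense; the paper's question is whether such networks remain trainable under standard initializations.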

