Deep Residual Networks with Exponential Linear Unit

@article{Shah2016DeepRN,
  title={Deep Residual Networks with Exponential Linear Unit},
  author={Anish Shah and Eashan Kadam and Hena Shah and S. Shinde},
  journal={ArXiv},
  year={2016},
  volume={abs/1604.04112}
}
The depth of convolutional neural networks is a crucial ingredient for reducing test error on benchmarks like ImageNet and COCO. However, training a neural network becomes harder as depth increases: problems such as vanishing gradients and diminishing feature reuse are prevalent in very deep convolutional networks. The notable recent contributions towards solving these problems and simplifying the training of very deep models are Residual and Highway Networks. These networks…
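
The core idea behind the paper is to use the exponential linear unit, ELU(x) = x for x > 0 and alpha*(exp(x) - 1) otherwise, as the activation inside residual blocks in place of ReLU. Below is a minimal PyTorch sketch of such a block with an identity skip connection; the layer ordering, the ELUResidualBlock name, and the default alpha = 1.0 are illustrative assumptions, not the authors' exact architecture.

    # Illustrative sketch only: a residual block whose activations are ELU
    # instead of ReLU. Not necessarily the authors' exact layer ordering.
    import torch
    import torch.nn as nn

    class ELUResidualBlock(nn.Module):
        def __init__(self, channels, alpha=1.0):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            # ELU(x) = x if x > 0 else alpha * (exp(x) - 1)
            self.elu = nn.ELU(alpha=alpha)

        def forward(self, x):
            out = self.elu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            # The identity shortcut lets gradients bypass the conv stack,
            # which counters vanishing gradients in very deep networks.
            return self.elu(out + x)

    # Quick shape check on a CIFAR-sized feature map.
    block = ELUResidualBlock(16)
    x = torch.randn(1, 16, 32, 32)
    print(block(x).shape)  # torch.Size([1, 16, 32, 32])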

    Citations

    Weighted residuals for very deep networks
    Parametric Exponential Linear Unit for Deep Convolutional Neural Networks
    Compressing Deep Neural Networks via Layer Fusion
    High Performance SqueezeNext for CIFAR-10

    References

    Deep Residual Learning for Image Recognition. K. He, X. Zhang, S. Ren, J. Sun. CVPR 2016.
    Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. S. Ioffe, C. Szegedy. ICML 2015.
    Understanding the difficulty of training deep feedforward neural networks. X. Glorot, Y. Bengio. AISTATS 2010.
    Highway Networks. R. K. Srivastava, K. Greff, J. Schmidhuber. arXiv 2015.