Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

@article{He2015DelvingDI,
  title={Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification},
  author={Kaiming He and X. Zhang and Shaoqing Ren and Jian Sun},
  journal={2015 IEEE International Conference on Computer Vision (ICCV)},
  year={2015},
  pages={1026-1034}
}
  • Kaiming He, X. Zhang, Shaoqing Ren, Jian Sun
  • Published 6 February 2015
  • Computer Science
  • 2015 IEEE International Conference on Computer Vision (ICCV)
Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced…
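
A minimal PyTorch sketch of the two ideas named in the abstract, PReLU and the rectifier-aware ("He") initialization; the class and function below are illustrative re-implementations, not the authors' reference code:

```python
import torch
import torch.nn as nn

class PReLU(nn.Module):
    """Parametric ReLU: f(x) = x if x > 0 else a * x, with a learned."""
    def __init__(self, num_channels: int, init_a: float = 0.25):
        super().__init__()
        # One learnable slope per channel, initialized to 0.25 as in the paper.
        self.a = nn.Parameter(torch.full((num_channels,), init_a))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast the per-channel slope over (N, C, H, W) inputs.
        a = self.a.view(1, -1, 1, 1)
        return torch.where(x > 0, x, a * x)

def he_init_(conv: nn.Conv2d) -> None:
    """Initialize weights with std = sqrt(2 / fan_in), which preserves
    activation variance under ReLU; the paper generalizes this to
    sqrt(2 / ((1 + a^2) * fan_in)) for PReLU."""
    fan_in = conv.in_channels * conv.kernel_size[0] * conv.kernel_size[1]
    conv.weight.data.normal_(0.0, (2.0 / fan_in) ** 0.5)
    if conv.bias is not None:
        conv.bias.data.zero_()
```

Equivalent built-ins exist as torch.nn.PReLU and torch.nn.init.kaiming_normal_; the explicit forms are shown only to make the math visible.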

A Deep Convolutional Neural Network with Selection Units for Super-Resolution

  • Jae-Seok Choi, Munchurl Kim
  • Computer Science
    2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2017
The proposed deep network with SUs, called SelNet, ranked fifth in the NTIRE2017 Challenge while having much lower computational complexity than the top four entries; experiment results show that SelNet outperforms the authors' ReLU-only baseline as well as other state-of-the-art deep-learning-based SR methods.

Empirical Evaluation of Rectified Activations in Convolutional Network

The experiments suggest that incorporating a non-zero slope for the negative part of rectified activation units consistently improves results, and they argue against the common belief that sparsity is the key to ReLU's good performance.
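
A compact sketch of the simplest such unit, leaky ReLU, where the negative part gets a small fixed slope (the value 0.01 is illustrative; the paper also evaluates learned and randomized slopes):

```python
import torch

def leaky_relu(x: torch.Tensor, slope: float = 0.01) -> torch.Tensor:
    # Identical to ReLU for x > 0; a small non-zero slope for x <= 0,
    # so negative inputs still propagate a gradient.
    return torch.where(x > 0, x, slope * x)
```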

FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks

  • Suo Qiu, Bolun Cai
  • Computer Science
    2018 24th International Conference on Pattern Recognition (ICPR)
  • 2018
Experimental results show that FReLU achieves fast convergence and competitive performance on both plain and residual networks; it is designed to be simple and effective, avoiding exponential functions to keep computation cheap.
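
A sketch of a rectifier with a learnable shift in the spirit of FReLU; the parameterization below (relu(x) + b with one learnable bias per channel) is our reading of the paper and should be treated as an assumption, not the authors' exact definition:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FReLU(nn.Module):
    """Rectifier with a learnable per-channel bias on the output."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.b = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No exponentials anywhere, keeping cost close to plain ReLU.
        return F.relu(x) + self.b.view(1, -1, 1, 1)
```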

Overcoming Overfitting and Large Weight Update Problem in Linear Rectifiers: Thresholded Exponential Rectified Linear Units

The thresholded exponential rectified linear unit (TERELU) activation function alleviates overfitting and the large-weight-update problem, while providing a good amount of non-linearity compared to other linear rectifiers.

Beyond ImageNet: Deep Learning in Industrial Practice

This chapter focuses on convolutional neural networks, which since the seminal work of Krizhevsky et al. have revolutionized image classification and begun to surpass human performance on some benchmark data sets; they can also be applied successfully to other areas and problems with some local structure in the data.

Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers

The method, which introduces negligible extra computational cost, achieves validation accuracies with deep vanilla networks that are competitive with ResNets (of the same width/depth), and significantly higher than those obtained with the Edge of Chaos (EOC) method.

Deep Transfer Learning for Art Classification Problems

This paper shows how DCNNs that have been fine-tuned on a large artistic collection outperform the same architectures pre-trained only on ImageNet when classifying heritage objects from a different dataset.
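
A common recipe matching the setup described, sketched in PyTorch; the backbone choice and class count below are placeholders, not details from the paper:

```python
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained weights (weights enum requires a recent torchvision).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Replace the 1000-way ImageNet head with one sized for the target collection
# (num_heritage_classes is a placeholder for the art dataset's label count).
num_heritage_classes = 42
model.fc = nn.Linear(model.fc.in_features, num_heritage_classes)

# Fine-tune all layers at a small learning rate; alternatively, freeze the
# backbone first and train only the new head.
for p in model.parameters():
    p.requires_grad = True
```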

Shallow and wide fractional max-pooling network for image classification

This work empirically investigates the architectures of popular ConvNet models and widens the network at a fixed depth to achieve similar or better performance.
...

References

Showing 1-10 of 45 references

Deeply-Supervised Nets

The proposed deeply-supervised nets (DSN) method simultaneously minimizes classification error while making the learning process of hidden layers direct and transparent, and extends techniques from stochastic gradient methods to analyze the algorithm.
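
The key mechanism is a "companion" objective attached to each hidden layer in addition to the output loss; a minimal sketch, where the auxiliary weight is an illustrative constant (the paper itself schedules these terms more carefully):

```python
import torch
import torch.nn.functional as F

def dsn_loss(main_logits, aux_logits_list, targets, aux_weight=0.3):
    """Output loss plus weighted companion losses from hidden-layer classifiers."""
    loss = F.cross_entropy(main_logits, targets)
    for aux_logits in aux_logits_list:
        # Each hidden layer gets its own classifier and supervision signal.
        loss = loss + aux_weight * F.cross_entropy(aux_logits, targets)
    return loss
```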

ImageNet classification with deep convolutional neural networks

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
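
The "dropout" regularization mentioned above zeroes each activation with probability p during training; a minimal sketch of the inverted-dropout form most frameworks use, which rescales at training time (the original paper instead scaled activations at test time):

```python
import torch

def dropout(x: torch.Tensor, p: float = 0.5, training: bool = True) -> torch.Tensor:
    if not training or p == 0.0:
        return x
    # Keep each unit with probability 1 - p, scaling survivors so the
    # expected activation is unchanged between train and test.
    mask = (torch.rand_like(x) > p).float()
    return x * mask / (1.0 - p)
```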

Convolutional neural networks at constrained time cost

  • Kaiming He, Jian Sun
  • Computer Science
    2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2015
This paper investigates the accuracy of CNNs under constrained time cost, and presents an architecture that achieves very competitive accuracy on the ImageNet dataset, yet is 20% faster than “AlexNet” [14] (16.0% top-5 error, 10-view test).

Return of the Devil in the Details: Delving Deep into Convolutional Nets

It is shown that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost, and it is identified that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance.

Going deeper with convolutions

We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
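
The normalization step itself is compact; a sketch for a batch of fully-connected activations (running statistics for inference and the per-channel convolutional variant are omitted for brevity):

```python
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift.

    x: (N, D) activations; gamma, beta: (D,) learnable parameters.
    """
    mean = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma * x_hat + beta
```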

DeepFace: Closing the Gap to Human-Level Performance in Face Verification

This work revisits both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network.

Some Improvements on Deep Convolutional Neural Network Based Image Classification

This paper summarizes the authors' entry in the ImageNet Large Scale Visual Recognition Challenge 2013, which achieved a top-5 classification error rate that was over a 20% relative improvement on the previous year's winner.

On rectified linear units for speech processing

This work shows that it can improve generalization and make training of deep networks faster and simpler by substituting the logistic units with rectified linear units.