Flexible, High Performance Convolutional Neural Networks for Image Classification

@inproceedings{Ciresan2011FlexibleHP,
  title={Flexible, High Performance Convolutional Neural Networks for Image Classification},
  author={Dan C. Ciresan and Ueli Meier and Jonathan Masci and Luca Maria Gambardella and J{\"u}rgen Schmidhuber},
  booktitle={IJCAI},
  year={2011}
}
We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. […] Test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.
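The implementation the abstract describes is GPU-based and fully parameterizable; as a hedged CPU-side illustration of the two building blocks such an implementation parameterizes (convolution and max-pooling), here is a minimal NumPy sketch. Filter sizes, the ReLU-style nonlinearity, and all variable names are illustrative assumptions, not details from the paper.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' 2-D convolution (really cross-correlation,
    as in most CNN implementations)."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.empty((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling."""
    H, W = fmap.shape
    H, W = H - H % size, W - W % size
    return fmap[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.standard_normal((28, 28))              # MNIST-sized input
kern = rng.standard_normal((5, 5))               # one 5x5 filter (illustrative size)
fmap = np.maximum(conv2d_valid(img, kern), 0.0)  # rectifying nonlinearity (assumption)
pooled = max_pool(fmap, 2)
print(pooled.shape)  # (12, 12): 28-5+1 = 24, then 24 // 2 = 12
```

A GPU implementation like the paper's performs the same arithmetic, but maps the per-pixel loops onto thousands of parallel threads.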

Citations

Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm
TLDR
Close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
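The ELM algorithm referenced above trains a single-hidden-layer network by fixing random hidden-layer weights and solving for the output weights in closed form via a pseudoinverse. A minimal NumPy sketch on a toy two-class problem; the data, hidden-layer size, and threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable problem: label = sign of x0 + x1.
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# 1. Random, untrained hidden layer (the core ELM idea: never tuned).
n_hidden = 50
W = rng.standard_normal((2, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)            # hidden activations

# 2. Output weights by least squares (Moore-Penrose pseudoinverse).
beta = np.linalg.pinv(H) @ y

# 3. Predict by thresholding the regression output.
pred = (H @ beta > 0.5).astype(float)
accuracy = (pred == y).mean()
print(accuracy)
```

Because only step 2 involves any fitting, training reduces to one linear solve, which is what makes the method attractive as a fast final classification stage.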
Convolutional Neural Network Committees for Handwritten Character Classification
TLDR
This work applies the same architecture to NIST SD 19, a more challenging dataset including lower and upper case letters, and obtains the best results published so far for both NIST digits and NIST letters.
Multi-column deep neural networks for image classification
TLDR
On the very competitive MNIST handwriting benchmark, this method is the first to achieve near-human performance and improves the state-of-the-art on a plethora of common image classification benchmarks.
Enhanced image classification with a fast-learning shallow convolutional neural network
TLDR
A neural network architecture and training method designed to enable very rapid training and low implementation complexity and has strong potential for applications requiring frequent retraining or online training.
An Improved Convolutional Neural Network Architecture for Image Classification
TLDR
The design and implementation of an improved convolutional neural network for image classification which was carefully crafted to avoid overfitting is presented, and a comparison versus the well-known Alexnet architecture is presented.
Deep learning for image classification on very small datasets using transfer learning
TLDR
The goal of this work is to show that a proper modified very deep model pre-trained on ImageNet for image classification can be used to fit very small dataset without severe overfitting.
Multi-column deep neural network for traffic sign classification
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
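The result above rests on the observation that stacking small filters grows the receptive field: n stacked 3×3 convolutions with stride 1 see the same input region as a single (2n+1)×(2n+1) filter, with fewer parameters and more nonlinearities. A quick sketch of that arithmetic (the helper function is mine, not from the paper):

```python
def receptive_field(kernel_sizes, strides=None):
    """Receptive field of a stack of conv layers (stride 1 unless given)."""
    strides = strides or [1] * len(kernel_sizes)
    rf, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump   # each layer widens the field by (k-1) input steps
        jump *= s              # stride compounds across layers
    return rf

print(receptive_field([3, 3]))     # 5: two 3x3 convs cover a 5x5 region
print(receptive_field([3, 3, 3]))  # 7: three 3x3 convs cover a 7x7 region
```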
Object Recognition with Multi-Scale Pyramidal Pooling Networks
TLDR
A Multi-Scale Pyramidal Pooling Network, featuring a novel pyramidal pooling layer at multiple scales and a novel encoding layer, which improves generalisation performance in comparison to similar neural network architectures, especially when training data is scarce.
ELMAENet: A Simple, Effective and Fast Deep Architecture for Image Classification
TLDR
A simple, effective and fast deep architecture called ELMAENet, which uses extreme learning machines auto-encoder (ELM-AE) to get the filters of convolutional layer, which no longer need parameter tuning but still has a good performance for image classification.
...
...

References

Showing 1-10 of 42 references
Performance and Scalability of GPU-Based Convolutional Neural Networks
TLDR
This paper presents the implementation of a framework for accelerating training and classification of arbitrary Convolutional Neural Networks (CNNs) on the GPU and describes the basic parts of a CNN and demonstrates the performance and scalability improvement that can be achieved by shifting the computation-intensive tasks of aCNN to the GPU.
A committee of neural networks for traffic sign classification
We describe the approach that won the preliminary phase of the German traffic sign recognition benchmark with a better-than-human recognition rate of 98.98%. We obtain an even better recognition rate …
Large-scale object recognition with CUDA-accelerated hierarchical neural networks
  • Rafael Uetz, Sven Behnke; 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, 2009
TLDR
This work presents a hierarchical, locally-connected neural network model that is well-suited for large-scale, high-performance object recognition and creates a massively parallel implementation of the model which is executed on a state-of-the-art graphics card.
Evaluation of Pooling Operations in Convolutional Architectures for Object Recognition
TLDR
The aim is to gain insight into different functions by directly comparing them on a fixed architecture for several common object recognition tasks, and empirical results show that a maximum pooling operation significantly outperforms subsampling operations.
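The comparison above (max-pooling versus averaging subsampling) is easy to state concretely: on the same feature map, a max operator keeps the strongest local response while an average operator dilutes it. A toy NumPy example with an illustrative, hand-picked feature map:

```python
import numpy as np

def pool(fmap, size, op):
    """Non-overlapping pooling with a reduction op (np.max or np.mean)."""
    H, W = fmap.shape
    blocks = fmap.reshape(H // size, size, W // size, size)
    return op(blocks, axis=(1, 3))

fmap = np.array([[1., 0., 0., 2.],
                 [0., 9., 0., 0.],
                 [3., 0., 0., 0.],
                 [0., 0., 0., 8.]])

print(pool(fmap, 2, np.max))   # [[9. 2.] [3. 8.]]   keeps the strongest response
print(pool(fmap, 2, np.mean))  # [[2.5 0.5] [0.75 2.]]   averages it away
```

The 9 in the top-left 2×2 block survives max-pooling intact but shrinks to 2.5 under averaging, which is the kind of effect the empirical comparison measures.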
An Analysis of Single-Layer Networks in Unsupervised Feature Learning
TLDR
The results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, they achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features.
Deep, Big, Simple Neural Nets for Handwritten Digit Recognition
Good old online backpropagation for plain multilayer perceptrons yields a very low 0.35% error rate on the MNIST handwritten digits benchmark. All we need to achieve this best result so far are many …
3D Object Recognition with Deep Belief Nets
TLDR
A new type of top-level model for Deep Belief Nets is introduced, a third-order Boltzmann machine, trained using a hybrid algorithm that combines both generative and discriminative gradients that substantially outperforms shallow models such as SVMs.
Learning Multiple Layers of Features from Tiny Images
TLDR
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Gradient-based learning applied to document recognition
TLDR
This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task, and Convolutional neural networks are shown to outperform all other techniques.
Neocognitron for handwritten digit recognition
...
...