The relative performance of ensemble methods with deep convolutional neural networks for image classification

@article{Ju2018TheRP,
  title={The relative performance of ensemble methods with deep convolutional neural networks for image classification},
  author={Cheng Ju and Aur{\'e}lien F. Bibaut and Mark J. van der Laan},
  journal={Journal of Applied Statistics},
  year={2018},
  volume={45},
  pages={2800--2818}
}
Artificial neural networks have been successfully applied to a variety of machine learning tasks, including image recognition, semantic segmentation, and machine translation. However, few studies have fully investigated ensembles of artificial neural networks. In this work, we investigated multiple widely used ensemble methods, including unweighted averaging, majority voting, the Bayes Optimal Classifier, and the (discrete) Super Learner, for image recognition tasks, with deep neural networks as… 
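The first two combiners named in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the paper; `probs` is a hypothetical array of per-model softmax outputs.

```python
import numpy as np

def unweighted_average(probs):
    """Average the class probabilities of all ensemble members.

    probs: array of shape (n_models, n_samples, n_classes).
    Returns predicted class labels of shape (n_samples,).
    """
    return probs.mean(axis=0).argmax(axis=1)

def majority_vote(probs):
    """Each model casts one vote for its top class; ties go to the lowest label."""
    votes = probs.argmax(axis=2)                      # (n_models, n_samples)
    n_classes = probs.shape[2]
    # Count votes per class for every sample: result is (n_classes, n_samples).
    counts = np.apply_along_axis(np.bincount, 0, votes, None, n_classes)
    return counts.argmax(axis=0)
```

The two rules can disagree: averaging uses the full probability vectors, while voting discards everything but each model's top class.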
Open-Source Neural Architecture Search with Ensemble and Pre-trained Networks
TLDR
The results show that a stacked generalization ensemble of heterogeneous models is the most effective approach to image classification within OpenNAS.
Enhanced Neural Architecture Search Using Super Learner and Ensemble Approaches
TLDR
The results show that a stacked generalization ensemble of heterogeneous models is the most effective approach to image classification within OpenNAS.
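Stacked generalization trains a meta-learner on held-out base-model predictions rather than fixing the combination rule in advance. A minimal sketch, assuming a simple least-squares meta-learner as a stand-in for whatever learner OpenNAS actually uses:

```python
import numpy as np

def fit_stack_weights(holdout_probs, labels):
    """Fit linear stacking weights on held-out base-model predictions.

    holdout_probs: (n_models, n_samples, n_classes) base-model outputs.
    labels: (n_samples,) integer targets.
    Least-squares meta-learner: find per-model weights w minimizing
    ||sum_m w_m * probs_m - onehot(labels)||^2.
    """
    n_models, n_samples, n_classes = holdout_probs.shape
    onehot = np.eye(n_classes)[labels]            # (n_samples, n_classes)
    A = holdout_probs.reshape(n_models, -1).T     # (n_samples*n_classes, n_models)
    w, *_ = np.linalg.lstsq(A, onehot.ravel(), rcond=None)
    return w

def stack_predict(probs, w):
    """Combine base-model probabilities with the learned weights and predict."""
    return np.tensordot(w, probs, axes=1).argmax(axis=1)
```

Because the weights are fit on held-out data, an uninformative base model is driven toward weight zero instead of diluting the ensemble, which is the main advantage over unweighted averaging.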
Convolutional Neural Network for Satellite Image Classification
TLDR
Effective deep-learning methods for satellite image classification are produced, using convolutional neural networks for feature extraction with the AlexNet, VGG19, GoogLeNet and ResNet50 pretrained models.
Fusion of Deep Learning Models for Improving Classification Accuracy of Remote Sensing Images
  • P. Deepan
  • Environmental Science, Computer Science
    Journal of Mechanics of Continua and Mathematical Sciences
  • 2019
TLDR
The intent of this paper is to study the effect of an ensemble classifier constructed by combining three deep convolutional neural networks, namely the CNN, VGG-16 and Res-Inception models, using average feature fusion techniques.
Intra-Ensemble in Neural Networks
TLDR
This work proposes Intra-Ensemble, an end-to-end ensemble strategy with stochastic channel recombination operations to train several sub-networks simultaneously within one neural network, which enhances ensemble performance.
An Analysis on Ensemble Learning Optimized Medical Image Classification With Deep Convolutional Neural Networks
TLDR
It is concluded that the integration of ensemble learning techniques is a powerful method for any medical image classification pipeline to improve robustness and boost performance.
Rapid Training of Very Large Ensembles of Diverse Neural Networks
TLDR
This work captures the structural similarity between members of a neural network ensemble and trains it only once; thereafter, this knowledge is transferred to all members of the ensemble using function-preserving transformations, so that these ensemble networks converge significantly faster than networks trained from scratch.
Efficient Regularization Framework for Histopathological Image Classification Using Convolutional Neural Networks
TLDR
The experimental study on the lymphomas histopathological dataset highlights the efficiency of the MobileNet2 network combined with the stochastic gradient descent (SGD) optimizer in terms of generalization.
Snapshot boosting: a fast ensemble framework for deep neural networks
TLDR
The results show that snapshot boosting can get a more balanced trade-off between training expenses and ensemble accuracy than other well-known ensemble methods.
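Snapshot methods keep training costs low by harvesting several models from a single run: a cyclic learning-rate schedule repeatedly drives the network into a minimum, a snapshot is saved, and the rate restarts. The cosine schedule below follows the original snapshot-ensembling formulation that this line of work builds on; it is a sketch, not the paper's code.

```python
import math

def snapshot_lr(t, total_iters, n_cycles, base_lr):
    """Cyclic cosine-annealing learning rate.

    The rate restarts to base_lr every total_iters / n_cycles iterations
    and anneals to ~0 at the end of each cycle, where a snapshot is saved.
    """
    cycle_len = total_iters / n_cycles
    return base_lr / 2 * (math.cos(math.pi * (t % cycle_len) / cycle_len) + 1)
```

At each restart the large learning rate kicks the network out of the previous minimum, which is what makes the saved snapshots diverse enough to ensemble.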
All Together Now! The Benefits of Adaptively Fusing Pre-trained Deep Representations
TLDR
This work studies the Mixture of Experts (MoE) approach for adaptively fusing multiple pre-trained models for each individual input image, finding that both performance and agreement between models vary across datasets.
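An MoE fusion weights each expert's prediction by an input-dependent gate, so different images can lean on different pre-trained models. A minimal sketch, assuming the gate scores come from some learned gating network not shown here:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_fuse(expert_probs, gate_logits):
    """Fuse per-expert predictions with input-dependent gate weights.

    expert_probs: (n_experts, n_samples, n_classes) expert outputs.
    gate_logits: (n_samples, n_experts) scores from a learned gating net.
    Returns fused probabilities of shape (n_samples, n_classes).
    """
    gates = softmax(gate_logits, axis=1)          # (n_samples, n_experts)
    return np.einsum('se,esc->sc', gates, expert_probs)
```

Unlike unweighted averaging, the mixing weights here differ per sample, which is the "adaptive" part of the fusion.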

References

Showing 1–10 of 60 references
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Network In Network
TLDR
With enhanced local modeling via the micro network, the proposed deep network structure NIN is able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers.
Visualizing and Understanding Convolutional Networks
TLDR
A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models; used in a diagnostic role, it finds model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
TLDR
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
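The batch-normalization transform is compact enough to state directly. A NumPy sketch of the training-mode computation, omitting the running statistics that the method maintains for use at test time:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift:
    y = gamma * (x - mean) / sqrt(var + eps) + beta.

    x: (batch, features); gamma, beta: (features,) learned parameters.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

After normalization each feature has roughly zero mean and unit variance across the batch, and the learned `gamma`/`beta` let the layer recover any scale and shift the network needs.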
Residual Networks are Exponential Ensembles of Relatively Shallow Networks
TLDR
This work introduces a novel interpretation of residual networks showing they are exponential ensembles, and suggests that in addition to describing neural networks in terms of width and depth, there is a third dimension: multiplicity, the size of the implicit ensemble.
Going deeper with convolutions
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
TLDR
This work proposes a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length, and reveals one of the key characteristics that seem to enable the training of very deep networks: residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
Experiments with a New Boosting Algorithm
TLDR
This paper describes experiments carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems, and compares boosting to Breiman's "bagging" method when used to aggregate various classifiers.
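One AdaBoost round reweights the training examples so the next weak learner focuses on the current learner's mistakes. A minimal sketch of the reweighting step, in the simple right/wrong formulation rather than the pseudo-loss variant the paper also evaluates:

```python
import math

def adaboost_round(weights, correct):
    """One AdaBoost reweighting step.

    weights: current example weights (sum to 1).
    correct: booleans, whether the weak learner got each example right.
    Returns (new_weights, alpha), where alpha is the learner's vote weight.
    """
    # Weighted error of the weak learner on the current distribution.
    eps = sum(w for w, c in zip(weights, correct) if not c)
    alpha = 0.5 * math.log((1 - eps) / eps)
    # Down-weight correct examples, up-weight mistakes, then renormalize.
    new = [w * math.exp(-alpha if c else alpha) for w, c in zip(weights, correct)]
    z = sum(new)
    return [w / z for w in new], alpha
```

A useful sanity check on this update: after renormalization, the misclassified examples always carry exactly half of the total weight, so the previous weak learner looks like random guessing on the new distribution.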
node2vec: Scalable Feature Learning for Networks
TLDR
In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.