RMDL: Random Multimodel Deep Learning for Classification

@inproceedings{Kowsari2018RMDLRM,
  title={RMDL: Random Multimodel Deep Learning for Classification},
  author={Kamran Kowsari and Mojtaba Heidarysafa and Donald E. Brown and K. Meimandi and Laura E. Barnes},
  booktitle={International Conference on Information System and Data Mining},
  year={2018}
}
The continually increasing number of complex datasets each year necessitates ever-improving machine learning methods for robust and accurate categorization of these data. This paper introduces Random Multimodel Deep Learning (RMDL): a new ensemble, deep learning approach for classification. Deep learning models have achieved state-of-the-art results across many domains. RMDL solves the problem of finding the best deep learning structure and architecture while simultaneously improving robustness…
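
As a rough illustration of the recipe the abstract describes (many deep models with randomly generated structures whose predictions are combined by majority vote), the sketch below uses scikit-learn's MLPClassifier as a stand-in for the paper's DNN/CNN/RNN learners; the ensemble size and the depth/width ranges are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch of random multimodel ensembling: train several MLPs with
# randomly chosen depths/widths and combine their predictions by majority vote.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = []
for _ in range(5):                                   # ensemble size (assumed)
    depth = int(rng.integers(1, 4))                  # random number of hidden layers
    widths = tuple(int(w) for w in rng.integers(32, 129, size=depth))
    clf = MLPClassifier(hidden_layer_sizes=widths, max_iter=400, random_state=0)
    models.append(clf.fit(X_train, y_train))

# Majority vote across the individual models' predictions.
votes = np.stack([m.predict(X_test) for m in models])        # (n_models, n_samples)
ensemble = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
print("ensemble accuracy:", (ensemble == y_test).mean())
```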

System Fusion with Deep Ensembles

This paper introduces a set of deep learning techniques for ensemble learning with dense, attention, and convolutional neural network layers that outperforms the existing state-of-the-art algorithms by a large margin.

Building Deep Random Ferns Without Backpropagation

A deep random ferns (d-RFs) model is proposed, in which extremely randomized ferns are connected in multiple layers, allowing high classification performance together with a lightweight and fast structure.

Deep Associative Classifier

A Deep Associative Classifier (DAC) is proposed: an ensemble of associative classifiers that transforms features into a deep model representation and outperforms various state-of-the-art classifiers not only in accuracy but also in memory requirements, while having fewer hyper-parameters to tune.

A Multi-blocked Image Classifier for Deep Learning

The proposed multi-blocked model is designed to use a minimal number of parameters so that it can run on a Graphics Processing Unit (GPU) while requiring less power.

An Efficient Data Classification Decision Based on Multimodel Deep Learning

The final experimental results show that the proposed model-fusion method not only improves classification accuracy but also performs well on a variety of datasets.

Layer-Wise Relevance Propagation Based Sample Condensation for Kernel Machines

This paper proposes a novel approach to sample condensation for kernel machines that uses the neural-network interpretation of kernel machines, with the aim of condensing samples without impairing classification performance.

An Ensemble of Simple Convolutional Neural Network Models for MNIST Digit Recognition

It is reported that very high accuracy on the MNIST test set, among the state-of-the-art results, can be achieved using an ensemble of simple convolutional neural network (CNN) models.

A Novel Framework for Neural Architecture Search in the Hill Climbing Domain

A new framework for neural architecture search is proposed, based on a hill-climbing procedure that uses network morphism operators together with a novel gradient update scheme; it can explore a broader search space and yields competitive results.
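
As a hedged illustration of the general hill-climbing idea (not the paper's exact morphism operators or gradient update scheme), the loop below starts from a small network, applies a random widen-or-deepen move, and keeps the child architecture only if a cheap validation proxy improves; the move set and the proxy evaluation are assumptions for illustration.

```python
# Sketch of hill-climbing architecture search with simple "morphism" moves.
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def evaluate(arch):
    """Cheap proxy: briefly train an MLP with the given hidden-layer widths."""
    clf = MLPClassifier(hidden_layer_sizes=tuple(arch), max_iter=200, random_state=0)
    return clf.fit(X_tr, y_tr).score(X_val, y_val)

def morph(arch):
    """Randomly widen an existing layer or append a new one."""
    child = list(arch)
    if random.random() < 0.5:
        child[random.randrange(len(child))] *= 2     # widen
    else:
        child.append(32)                             # deepen
    return child

random.seed(0)
arch = [32]
best = evaluate(arch)
for _ in range(10):                                  # hill-climbing steps
    child = morph(arch)
    score = evaluate(child)
    if score > best:                                 # accept only improving moves
        arch, best = child, score
print("best architecture:", arch, "val accuracy:", round(best, 3))
```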

Leaf Recognition Based on Joint Learning Multiloss of Multimodel Convolutional Neural Networks: A Testing for Vietnamese Herb

A new modification of multi-CNN ensemble training for leaf image recognition is investigated: a multimodel approach that combines the loss functions of state-of-the-art deep CNN architectures, EfficientNet and MobileNet, into a generalized multiloss function.
...

References

Deep Learning using Linear Support Vector Machines

The results show that simply replacing softmax with a linear SVM (L2-SVM) top layer gives significant gains on popular deep learning datasets: MNIST, CIFAR-10, and the ICML 2013 Representation Learning Workshop's facial expression recognition challenge.
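
The change described above amounts to replacing the softmax cross-entropy at the top layer with a one-vs-rest squared hinge (L2-SVM) objective; a minimal numpy sketch of that loss and its gradient, with illustrative shapes and names, is given below.

```python
# Squared hinge (L2-SVM) loss over raw class scores, as a drop-in for softmax.
import numpy as np

def l2_svm_loss_and_grad(scores, labels, n_classes):
    """scores: (batch, n_classes) raw outputs; labels: (batch,) integer classes."""
    t = -np.ones((scores.shape[0], n_classes))       # one-vs-rest targets in {-1, +1}
    t[np.arange(scores.shape[0]), labels] = 1.0
    margins = np.maximum(0.0, 1.0 - t * scores)
    loss = np.mean(np.sum(margins ** 2, axis=1))     # mean over the batch
    grad = -2.0 * t * margins / scores.shape[0]      # d loss / d scores, to backprop
    return loss, grad

scores = np.array([[2.0, -1.0, 0.5], [0.1, 0.3, -0.2]])
print(l2_svm_loss_and_grad(scores, np.array([0, 2]), n_classes=3))
```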

Deep Forest: Towards An Alternative to Deep Neural Networks

In this paper, we propose gcForest, a decision tree ensemble approach with performance highly competitive to deep neural networks in a broad range of tasks. In contrast to deep neural networks, which…

HDLTex: Hierarchical Deep Learning for Text Classification

Hierarchical Deep Learning for Text classification employs stacks of deep learning architectures to provide specialized understanding at each level of the document hierarchy.
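
A hedged sketch of this hierarchical routing: one model predicts the top-level category and a per-category specialist predicts the fine label. Logistic regression and the synthetic labels below stand in for the paper's deep architectures and document hierarchy.

```python
# Two-level hierarchical classification: a level-1 router plus per-category experts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
parent = (X[:, 0] > 0).astype(int)                   # level-1 label (e.g. domain)
child = parent * 2 + (X[:, 1] > 0).astype(int)       # level-2 label within each domain

level1 = LogisticRegression().fit(X, parent)         # one model over all documents
level2 = {p: LogisticRegression().fit(X[parent == p], child[parent == p])
          for p in np.unique(parent)}                # one specialist per category

def predict(x):
    p = int(level1.predict(x[None, :])[0])           # route to the right specialist
    return level2[p].predict(x[None, :])[0]

print(predict(X[0]), child[0])
```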

ImageNet classification with deep convolutional neural networks

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
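
For reference, the dropout regularizer mentioned above can be sketched in a few lines; this shows the now-common "inverted" variant that rescales at training time, which differs slightly from the original formulation's test-time scaling.

```python
# Inverted dropout: zero each activation with probability p and rescale survivors.
import numpy as np

def dropout(activations, p=0.5, training=True, rng=np.random.default_rng(0)):
    if not training or p == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p        # keep with probability 1 - p
    return activations * mask / (1.0 - p)            # rescale to preserve the expectation

h = np.ones((2, 4))
print(dropout(h, p=0.5))            # training: roughly half the units zeroed
print(dropout(h, training=False))   # inference: unchanged
```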

Deep Learning

Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.

Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations

The convolutional deep belief network is presented, a hierarchical generative model which scales to realistic image sizes and is translation-invariant and supports efficient bottom-up and top-down probabilistic inference.

Maxout Networks

A simple new model called maxout is defined, designed both to facilitate optimization by dropout and to improve the accuracy of dropout's fast approximate model-averaging technique.
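
A maxout unit takes the elementwise maximum over k learned affine "pieces", giving a piecewise-linear activation; a minimal numpy sketch with illustrative dimensions follows.

```python
# Maxout layer: k affine transforms per output unit, combined by an elementwise max.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, k = 8, 4, 3                  # input dim, output dim, pieces per unit
W = rng.normal(size=(k, d_in, d_out))     # one weight matrix per piece
b = rng.normal(size=(k, d_out))

def maxout(x):
    pieces = np.einsum('bi,kio->bko', x, W) + b      # (batch, k, d_out)
    return pieces.max(axis=1)                        # max over the k pieces

x = rng.normal(size=(2, d_in))
print(maxout(x).shape)                    # (2, 4)
```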

Multi-column deep neural network for traffic sign classification

Recurrent Convolutional Neural Networks for Text Classification

A recurrent convolutional neural network is introduced for text classification without human-designed features; it captures contextual information as far as possible when learning word representations, which may introduce considerably less noise than traditional window-based neural networks.

BinaryConnect: Training Deep Neural Networks with binary weights during propagations

BinaryConnect is introduced, a method that trains a DNN with binary weights during the forward and backward propagations while retaining the precision of the stored weights in which gradients are accumulated; near state-of-the-art results with BinaryConnect are obtained on permutation-invariant MNIST, CIFAR-10, and SVHN.
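
A hedged sketch of one BinaryConnect-style training step on a toy linear layer: the forward and backward passes use sign-binarized weights, while the gradient is accumulated into (and clipped within) the stored full-precision weights. The toy squared-error objective and learning rate are illustrative assumptions.

```python
# BinaryConnect-style update: propagate with binary weights, update real-valued ones.
import numpy as np

rng = np.random.default_rng(0)
W_real = rng.normal(scale=0.1, size=(4, 1))          # stored full-precision weights
x = rng.normal(size=(8, 4))
y = x @ np.array([[0.5], [-0.3], [0.2], [0.1]])      # toy regression target
lr = 0.1

for _ in range(20):
    W_bin = np.where(W_real >= 0, 1.0, -1.0)         # binarize for propagation
    err = x @ W_bin - y                              # forward with binary weights
    grad = x.T @ err / len(x)                        # gradient w.r.t. the binary weights
    W_real = np.clip(W_real - lr * grad, -1, 1)      # update and clip the real weights

print("final binary weights:", np.where(W_real >= 0, 1, -1).ravel())
```
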
...