In this brief, the optimally pruned extreme learning machine (OP-ELM) methodology is presented. It is based on the original extreme learning machine (ELM) algorithm with additional steps to make it more robust and generic. The whole methodology is presented in detail and then applied to several regression and classification problems. Results for both …
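For concreteness, here is a minimal sketch of the ELM step that OP-ELM builds on: a random, untrained hidden layer followed by a least-squares fit of the output weights only. The function names, the sigmoid activation, and the default sizes are illustrative, not taken from the brief.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # only the output layer is fitted
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```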
In this paper, an improvement of the optimally pruned extreme learning machine (OP-ELM) is proposed, in the form of an L2 regularization penalty applied within the OP-ELM. The original OP-ELM is a wrapper methodology around the extreme learning machine (ELM), meant to reduce the sensitivity of the ELM to irrelevant variables and obtain more …
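An L2 penalty on the output weights turns the least-squares solve into a ridge regression with a closed-form solution. A minimal sketch, assuming H is the hidden-layer activation matrix and lam a hypothetical penalty weight (not values from the paper):

```python
import numpy as np

def ridge_output_weights(H, y, lam=1e-2):
    """Solve min_beta ||H beta - y||^2 + lam * ||beta||^2 in closed form:
    beta = (H^T H + lam * I)^{-1} H^T y."""
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)
```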
This paper proposes a methodology named OP-ELM, based on a recent development, the Extreme Learning Machine (ELM), which drastically reduces the training time of networks. Variable selection is performed beforehand on the original dataset so that OP-ELM yields proper results: the network is first created using the ELM process, followed by selection of the most relevant nodes …
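The node-selection step in OP-ELM is driven by a leave-one-out criterion, which for a linear output layer can be computed exactly via the PRESS statistic instead of refitting n times. A sketch under that assumption (function name illustrative):

```python
import numpy as np

def loo_press_mse(H, y):
    """Exact leave-one-out MSE of the linear fit y ~ H beta via PRESS;
    lets candidate node subsets be scored without n refits."""
    HtH_inv = np.linalg.pinv(H.T @ H)
    beta = HtH_inv @ H.T @ y
    residuals = y - H @ beta
    leverage = np.einsum('ij,jk,ik->i', H, HtH_inv, H)  # diag of the hat matrix
    return np.mean((residuals / (1.0 - leverage)) ** 2)
```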
The paper presents an approach for performing regression on large data sets in reasonable time, using an ensemble of extreme learning machines (ELMs). The main purpose and contribution of this paper are to explore how the evaluation of this ensemble of ELMs can be accelerated in three distinct ways: (1) training and model structure selection of the …
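One reason such an ensemble parallelizes well: members differ only in their random hidden layers, so they can be trained independently and their predictions averaged. A minimal sketch of that idea, not the paper's exact acceleration scheme:

```python
import numpy as np

def train_elm_ensemble(X, y, n_models=10, n_hidden=50):
    """Train independent ELMs on the same data; each member differs only in
    its random hidden layer, so training could run in parallel."""
    models = []
    for seed in range(n_models):
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        models.append((W, b, beta))
    return models

def ensemble_predict(X, models):
    """Average the members' predictions."""
    return np.mean([np.tanh(X @ W + b) @ beta for W, b, beta in models], axis=0)
```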
In time series prediction, one often does not know the properties of the underlying system generating the time series. For example, is it a closed system generating the time series, or are there external factors influencing it? As a result, one often does not know beforehand whether a time series is stationary or nonstationary, and …
Computational intelligence techniques, especially neural networks, have attracted the attention of a large number of researchers over the past three decades. It is well known that conventional learning methods for neural networks have apparent drawbacks and limitations, including (1) slow learning speed, (2) parameters that require tedious human tuning, and (3) …
This paper presents a methodology named Optimally Pruned K-Nearest Neighbors (OP-KNN), which has the advantage of competing with state-of-the-art methods while remaining fast. It builds a single-hidden-layer feedforward neural network, using K-Nearest Neighbors as kernels, to perform regression. Multiresponse Sparse Regression (MRSR) is used to rank …
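A sketch of the OP-KNN building block as I read it: each "hidden node" outputs the training target of a point's k-th nearest neighbour, and these columns are combined by least squares; the MRSR ranking and leave-one-out pruning steps are omitted here. The names and the self-neighbour handling are illustrative assumptions:

```python
import numpy as np

def knn_feature_matrix(X_train, y_train, X, k=10, skip_self=False):
    """Column j holds each query point's (j+1)-th nearest training target;
    these columns act as the hidden-layer 'kernels'. Set skip_self=True when
    X is the training set itself, so a point is not its own neighbour."""
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    idx = order[:, 1:k + 1] if skip_self else order[:, :k]
    return y_train[idx]

rng = np.random.default_rng(0)
X_tr = rng.standard_normal((200, 3))
y_tr = np.sin(X_tr[:, 0]) + 0.1 * rng.standard_normal(200)
H = knn_feature_matrix(X_tr, y_tr, X_tr, k=10, skip_self=True)
beta, *_ = np.linalg.lstsq(H, y_tr, rcond=None)  # linear output weights
```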