Nima Reyhani

In this paper, a global methodology for the long-term prediction of time series is proposed. This methodology combines the direct prediction strategy with sophisticated input selection criteria: the k-nearest neighbors approximation method (k-NN), mutual information (MI), and nonparametric noise estimation (NNE). A global input selection strategy that combines …
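The mutual-information criterion mentioned here can be illustrated with a simple histogram (plug-in) estimator. This is a generic sketch, not the estimator used in the paper:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in (histogram) estimate of the mutual information
    between two 1-D samples, in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

An input strongly related to the output scores high, while an irrelevant one scores near zero, which is what makes MI usable as an input selection criterion.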
This paper presents a comparison between direct and recursive prediction strategies. To perform input selection, an approach based on mutual information is used: the mutual information is computed between all possible input sets and the outputs. Least Squares Support Vector Machines are used as nonlinear models to avoid local minima …
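The two strategies being compared can be sketched generically; `model_fn` and `model_fns` stand for any trained one-step and per-horizon models (the names are illustrative, not from the paper):

```python
import numpy as np

def recursive_forecast(model_fn, history, lags, horizon):
    """Recursive strategy: a single one-step model is iterated,
    feeding its own predictions back in as inputs."""
    h = list(history)
    for _ in range(horizon):
        x = np.asarray(h[-lags:])
        h.append(float(model_fn(x)))
    return h[len(history):]

def direct_forecast(model_fns, history, lags):
    """Direct strategy: one dedicated model per horizon step,
    each predicting y_{t+h} directly from the observed inputs."""
    x = np.asarray(history[-lags:])
    return [float(f(x)) for f in model_fns]
```

The direct strategy avoids the error accumulation of the recursive one, at the cost of training a separate model for each horizon step.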
This paper presents a new method for selecting the two hyperparameters of Least Squares Support Vector Machine (LS-SVM) approximators with Gaussian kernels. The two hyperparameters are the width σ of the Gaussian kernels and the regularization parameter λ. For different values of σ, a Nonparametric Noise Estimator (NNE) is introduced to estimate the …
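A minimal LS-SVM regressor with a Gaussian kernel can be sketched as follows (the bias term is omitted for brevity, and the paper's σ/λ selection procedure is not reproduced here):

```python
import numpy as np

def gaussian_kernel(X, Z, sigma):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, sigma, lam):
    """LS-SVM training reduces to a linear system: (K + lam*I) alpha = y."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def lssvm_predict(X_train, alpha, X_new, sigma):
    """Predictions are kernel expansions over the training points."""
    return gaussian_kernel(X_new, X_train, sigma) @ alpha
```

In the paper, λ is tied for each candidate σ to a nonparametric estimate of the noise variance; in this sketch both hyperparameters would simply be chosen by the user or by cross-validation.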
We applied a multiple kernel learning (MKL) method based on information-theoretic optimization to speaker recognition. Most kernel methods applied to speaker recognition systems require a suitable kernel function and its parameters to be determined for a given data set. In contrast, MKL eliminates the need for strict determination of the kernel …
This paper presents k-NN as an approximator for time series prediction problems. The main advantage of this approximator is its simplicity. Despite this simplicity, k-NN can be used to perform input selection for nonlinear models, and it also provides accurate approximations. Three model structure selection methods are presented: leave-one-out, bootstrap, and …
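Leave-one-out selection of k for a k-NN approximator, the first of the three model structure selection methods listed, might look like this sketch:

```python
import numpy as np

def knn_loo_mse(X, y, k):
    """Leave-one-out MSE of a k-NN regressor: each point is predicted
    as the mean output of its k nearest *other* points."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)          # exclude the point itself
    idx = np.argsort(d2, axis=1)[:, :k]   # k nearest neighbours per row
    pred = y[idx].mean(axis=1)
    return float(np.mean((y - pred) ** 2))

def select_k(X, y, k_max=10):
    """Pick the k with the lowest leave-one-out error."""
    return min(range(1, k_max + 1), key=lambda k: knn_loo_mse(X, y, k))
```

Because k-NN has no training phase, the leave-one-out error is cheap to compute exactly, which is part of the simplicity the abstract refers to.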
In this paper, input selection is performed using two different approaches. The first approach is based on the Gamma test, which estimates the mean squared error (MSE) that can be achieved without overfitting. The best set of inputs is the one that minimizes the result of the Gamma test. The second method estimates the mutual information between a set of …
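The first approach, exhaustive search over input subsets guided by a nonparametric noise estimate, can be sketched as follows. For brevity this uses the Delta test, the first-nearest-neighbour special case of the Gamma test, rather than the full Gamma test:

```python
from itertools import combinations
import numpy as np

def delta_test(X, y):
    """Noise-variance estimate: half the mean squared difference between
    each output and the output of its nearest neighbour in input space."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    nn = d2.argmin(axis=1)
    return 0.5 * float(np.mean((y - y[nn]) ** 2))

def best_input_set(X, y):
    """Exhaustive search: keep the input subset with the lowest noise estimate."""
    d = X.shape[1]
    candidates = (cols for r in range(1, d + 1)
                  for cols in combinations(range(d), r))
    return min(candidates, key=lambda cols: delta_test(X[:, cols], y))
```

Irrelevant inputs scramble the nearest-neighbour structure and inflate the estimate, so the subset containing only the informative inputs wins.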
In this paper, the problem of an optimal transformation of the input space for function approximation problems is addressed. The transformation is defined by determining the Mahalanobis matrix that minimizes the variance of the noise. To compute the variance of the noise, a nonparametric estimator called the Delta test is used. The proposed approach is …
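The paper optimizes a full Mahalanobis matrix; the much cruder sketch below searches only diagonal scalings of the inputs over a small grid, using the Delta test as the criterion (the grid values are arbitrary):

```python
from itertools import product
import numpy as np

def delta_test(X, y):
    """Nonparametric noise-variance estimate (Delta test): half the mean squared
    difference between each output and its nearest input-space neighbour's output."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    nn = d2.argmin(axis=1)
    return 0.5 * float(np.mean((y - y[nn]) ** 2))

def best_diagonal_scaling(X, y, grid=(0.0, 0.5, 1.0, 2.0)):
    """Coarse stand-in for the paper's optimization: try per-input scalings
    (a diagonal Mahalanobis matrix) and keep the one minimizing the Delta test."""
    weights = product(grid, repeat=X.shape[1])
    return min(weights, key=lambda w: delta_test(X * np.asarray(w), y))
```

A weight of zero deletes an input entirely, so this search subsumes input selection; the full Mahalanobis formulation additionally allows rotations of the input space.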
Kernel methods are known to be effective for nonlinear multivariate analysis. One of the main issues in their practical use is the selection of the kernel, and there have been many studies on kernel selection and kernel learning. Multiple kernel learning (MKL) is one of the most promising kernel optimization approaches. Kernel methods are applied to …
Kernel methods have been used successfully in many practical machine learning problems, but choosing a suitable kernel is left to the practitioner. A common route to automatic selection of optimal kernels is to learn a linear combination of element kernels. In this paper, a novel multiple kernel learning framework is proposed based on conditional entropy …
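Learning a linear combination of element kernels can be illustrated with a simple heuristic. Note that the paper's criterion is conditional entropy; the sketch below instead uses kernel-target alignment, a different and much simpler criterion, purely to show the shape of the problem:

```python
import numpy as np

def alignment(K, y):
    """Kernel-target alignment: <K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
    yy = np.outer(y, y)
    return float((K * yy).sum() / (np.linalg.norm(K) * np.linalg.norm(yy)))

def alignment_weights(Ks, y):
    """Weight each element kernel by its (clipped, normalized) alignment
    with the target labels."""
    a = np.maximum([alignment(K, y) for K in Ks], 0.0)
    return a / a.sum()

def combine_kernels(Ks, weights):
    """The learned kernel is the weighted sum of the element kernels."""
    return sum(w * K for w, K in zip(weights, Ks))
```

Whatever the criterion, the output is a single combined kernel matrix that any standard kernel method can consume unchanged, which is what makes MKL a drop-in replacement for manual kernel selection.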