In this paper, we use a unified loss function, called the soft insensitive loss function, for Bayesian support vector regression. We follow standard Gaussian processes for regression to set up the Bayesian framework, in which the unified loss function is used in the likelihood evaluation. Under this framework, the maximum a posteriori estimate of the …
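To make the MAP step concrete, here is a minimal sketch (not the paper's implementation) of maximum a posteriori estimation of the latent function values under a Gaussian process prior with a generic loss-based likelihood. The RBF kernel, its hyperparameters, and the placeholder Laplacian loss are illustrative assumptions.

```python
# Hedged sketch: MAP estimate of latent values f under a GP prior with
# covariance K and likelihood p(y_i | f_i) proportional to exp(-loss(y_i - f_i)),
# i.e. minimize  sum_i loss(y_i - f_i) + 0.5 * f^T K^{-1} f  over f.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, lengthscale=1.0, variance=1.0, jitter=1e-8):
    # Illustrative squared-exponential kernel with a small jitter for stability.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2) + jitter * np.eye(len(X))

def map_latent(X, y, loss=lambda r: np.abs(r)):
    """Return the MAP latent vector f for the given loss (Laplacian by default)."""
    K_inv = np.linalg.inv(rbf_kernel(X))
    objective = lambda f: np.sum(loss(y - f)) + 0.5 * f @ K_inv @ f
    return minimize(objective, x0=np.zeros(len(y)), method="L-BFGS-B").x

# Toy usage on synthetic 1-D data.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)
f_map = map_latent(X, y)
```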
Sequential minimal optimization (SMO) is a popular algorithm for training the support vector machine (SVM), but it still requires a large amount of computation time to solve large-scale problems. This paper proposes a parallel implementation of SMO for training SVMs. The parallel SMO is developed using the Message Passing Interface (MPI). Specifically, the …
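As a rough illustration of the data-parallel pattern typically used when parallelizing SMO's working-set selection with MPI, the sketch below shards the per-example scores across ranks and combines the local candidates globally. It assumes mpi4py is available, and the scoring function is a toy stand-in, not the paper's exact selection rule.

```python
# Hedged sketch: each rank scans its shard for the most "violating" example,
# then all ranks exchange their local candidates and agree on the global one.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a shard of (toy) per-example violation scores.
rng = np.random.default_rng(seed=rank)
local_scores = rng.normal(size=1000)
local_offset = rank * 1000                     # global index of the shard's first example

# Local candidate: the most violating example on this shard.
local_arg = int(np.argmax(local_scores))
candidate = (float(local_scores[local_arg]), local_offset + local_arg)

# Combine: every rank learns all candidates and picks the global winner.
all_candidates = comm.allgather(candidate)
global_score, global_index = max(all_candidates)
if rank == 0:
    print("global working-set candidate:", global_index, global_score)
```

Run with, e.g., `mpirun -n 4 python parallel_selection.py`; the communication cost per iteration is one small all-gather, which is the usual trade made when distributing the kernel and gradient computations.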
In this paper, we apply popular Bayesian techniques to the support vector classifier. We propose a novel differentiable loss function, called the trigonometric loss function, with the desirable characteristic of natural normalization in the likelihood function, and then follow standard Gaussian process techniques to set up a Bayesian framework. In this framework, …
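For reference, here is a small sketch of the trigonometric loss as it is commonly written in this line of work, with margin variable δ = y·f(x) and y ∈ {−1, +1}; the exact expression should be read as an assumption rather than a quotation from the paper.

```python
# Hedged sketch of the trigonometric loss: 0 for delta >= 1,
# 2*ln(sec(pi*(1-delta)/4)) for |delta| < 1, and +inf for delta <= -1.
import numpy as np

def trigonometric_loss(delta):
    delta = np.atleast_1d(np.asarray(delta, dtype=float))
    loss = np.full_like(delta, np.inf)            # delta <= -1: infinite loss
    loss[delta >= 1.0] = 0.0                      # delta >= +1: zero loss
    mid = np.abs(delta) < 1.0                     # smooth transition region
    loss[mid] = -2.0 * np.log(np.cos(np.pi * (1.0 - delta[mid]) / 4.0))
    return loss

# "Natural normalization": exp(-loss(d)) + exp(-loss(-d)) == 1, so exp(-loss)
# can serve directly as a class-conditional likelihood.
d = np.linspace(-0.99, 0.99, 5)
print(np.exp(-trigonometric_loss(d)) + np.exp(-trigonometric_loss(-d)))  # ~1.0
```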
In this paper, we give an efficient method for computing the leave-one-out (LOO) error quite accurately for support vector machines (SVMs) with Gaussian kernels. It is particularly suitable for iterative decomposition methods for solving SVMs. The importance of the various steps of the method is illustrated in detail by showing its performance on six benchmark …
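As context, the sketch below computes the exact, brute-force LOO error that such a method aims to approximate cheaply. This is not the paper's algorithm; it only makes the target quantity concrete, assuming scikit-learn's SVC with a Gaussian (RBF) kernel and illustrative hyperparameters.

```python
# Hedged baseline sketch: exact LOO error by retraining once per held-out point.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=10, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma=0.1)        # hyperparameters are illustrative
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("brute-force LOO error:", 1.0 - scores.mean())
```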
[Fig. 7. Representation of a deterministic grid.] … for the constrained optimization problem, which is known to be more efficient than the method using a penalty function. For various types of Stewart platforms, m = 45 is sufficient to guarantee the global maximum in simulations. GDIF and GDIM can be obtained as 1.0 and 1.1547 [m⁻¹], respectively. From …
This paper describes an improved algorithm for the numerical solution of the support vector machine (SVM) classification problem for all values of the regularization parameter C. The algorithm is motivated by the work of Hastie and follows the main idea of tracking the optimality conditions of the SVM solution for ascending values of C. It differs from …
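To show what such a path algorithm replaces, here is a naive grid loop that simply retrains an SVM at several values of C and records the solution size and accuracy; a path-tracking method instead updates one solution continuously as C grows. The data, kernel, and C grid are illustrative assumptions, using scikit-learn.

```python
# Hedged sketch: brute-force sweep over C, the baseline that regularization-path
# tracking is meant to avoid.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for C in np.logspace(-2, 3, 6):
    clf = SVC(kernel="linear", C=C).fit(X_tr, y_tr)
    print(f"C={C:8.2f}  n_support={clf.n_support_.sum():3d}  "
          f"test acc={clf.score(X_te, y_te):.3f}")
```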
Feature selection is an important aspect of solving data-mining and machine-learning problems. This paper proposes a feature-selection method for Support Vector Machine (SVM) learning. Like most feature-selection methods, the proposed method ranks all features in decreasing order of importance so that the more relevant features can be identified. It uses a …
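As one standard way to obtain such an importance ranking for an SVM, the sketch below uses recursive feature elimination (RFE) on a linear SVM. RFE is a related, generic technique chosen purely for illustration; the paper's own ranking criterion is cut off in the snippet above and is not reproduced here.

```python
# Hedged sketch: rank features for a linear SVM with recursive feature elimination.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=1, step=1)
selector.fit(X, y)

# ranking_[i] == 1 means feature i survived longest, i.e. was ranked most relevant.
order = sorted(range(X.shape[1]), key=lambda i: selector.ranking_[i])
print("features in decreasing order of importance:", order)
```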
In this paper, we propose a unified non-quadratic loss function for regression, known as the soft insensitive loss function (SILF). SILF is a flexible model and possesses most of the desirable characteristics of popular non-quadratic loss functions such as the Laplacian, Huber's, and Vapnik's ε-insensitive loss functions. We describe the properties of SILF and …
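The sketch below writes out SILF as it is usually defined, with insensitivity width ε > 0 and smoothing parameter 0 < β ≤ 1; the exact constants should be read as an assumption rather than a verbatim quotation.

```python
# Hedged sketch of SILF for a residual r: zero inside (1-beta)*eps, a quadratic
# transition band up to (1+beta)*eps, and linear (Laplacian-like) tails beyond.
import numpy as np

def silf(r, eps=0.1, beta=0.3):
    a = np.abs(np.atleast_1d(np.asarray(r, dtype=float)))
    lo, hi = (1.0 - beta) * eps, (1.0 + beta) * eps
    loss = np.zeros_like(a)
    band = (a >= lo) & (a <= hi)                  # smooth quadratic transition
    loss[band] = (a[band] - lo) ** 2 / (4.0 * beta * eps)
    tail = a > hi                                 # linear tails, continuous at hi
    loss[tail] = a[tail] - eps
    return loss

# Limiting cases as usually noted for SILF:
#   beta -> 0 recovers Vapnik's eps-insensitive loss,
#   beta  = 1 gives a Huber-like loss,
#   eps  -> 0 recovers the Laplacian (absolute) loss.
r = np.linspace(-0.5, 0.5, 5)
print(silf(r))
```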