In this paper, we use a unified loss function, called the soft insensitive loss function, for Bayesian support vector regression. We follow standard Gaussian processes for regression to set up the Bayesian framework, in which the unified loss function is used in the likelihood evaluation. Under this framework, the maximum a posteriori estimate of the …
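The abstract is cut off before the loss is defined. As a sketch, here is my reconstruction of the commonly cited piecewise form of the soft insensitive loss (an assumption to verify against the paper, not the authors' code), with insensitivity parameter eps and smoothing parameter beta: zero inside a shrunken eps-tube, quadratic on a transition band, and linear outside, so the loss stays C1-continuous and recovers the eps-insensitive loss as beta goes to 0.

```python
import numpy as np

def soft_insensitive_loss(delta, eps=0.1, beta=0.3):
    """Soft insensitive loss, piecewise form (assumed reconstruction):
    zero for |delta| <= (1-beta)*eps, quadratic on the transition band
    (1-beta)*eps < |delta| <= (1+beta)*eps, linear (|delta| - eps) beyond."""
    a = np.abs(np.asarray(delta, dtype=float))
    lo, hi = (1 - beta) * eps, (1 + beta) * eps
    quad = (a - lo) ** 2 / (4 * beta * eps)
    return np.where(a <= lo, 0.0, np.where(a <= hi, quad, a - eps))

# Sanity check: the piecewise definition is continuous at both knots.
eps, beta = 0.1, 0.3
for knot in ((1 - beta) * eps, (1 + beta) * eps):
    left, right = soft_insensitive_loss([knot - 1e-9, knot + 1e-9], eps, beta)
    assert abs(left - right) < 1e-6
```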
Sequential minimal optimization (SMO) is a popular algorithm for training support vector machines (SVMs), but it still requires a large amount of computation time on large problems. This paper proposes a parallel implementation of SMO for training SVMs, developed using the message passing interface (MPI). Specifically, the …
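A hedged sketch of data-parallel SMO in the spirit of the approach described above (not the authors' implementation; all names and parameter values are illustrative). The O(n) gradient update per iteration, which dominates the cost, is partitioned across MPI processes; working-pair selection gathers one local candidate pair per process. Data, labels, and the dual variables are replicated, and only the gradient slice is distributed.

```python
# Run with e.g.: mpirun -n 4 python parallel_smo_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(0)
n, d, C, gamma, tol = 600, 4, 1.0, 0.5, 1e-3
X = rng.normal(size=(n, d))
y = np.where(X[:, 0] + 0.2 * rng.normal(size=n) > 0, 1.0, -1.0)

def rbf_row(A, b):
    """RBF kernel values between each row of A and the single point b."""
    return np.exp(-gamma * ((A - b) ** 2).sum(axis=1))

lo, hi = rank * n // size, (rank + 1) * n // size  # this rank's slice
ys = y[lo:hi]
alpha = np.zeros(n)       # replicated dual variables
G = -np.ones(hi - lo)     # gradient of the dual objective, local slice only

for _ in range(5000):
    yg = -ys * G  # -y_t * grad_t: large on KKT violators
    in_up = ((ys > 0) & (alpha[lo:hi] < C)) | ((ys < 0) & (alpha[lo:hi] > 0))
    in_low = ((ys > 0) & (alpha[lo:hi] > 0)) | ((ys < 0) & (alpha[lo:hi] < C))
    up_vals = np.where(in_up, yg, -np.inf)
    low_vals = np.where(in_low, yg, np.inf)
    li, lj = int(up_vals.argmax()), int(low_vals.argmin())
    cand = comm.allgather((up_vals[li], lo + li, low_vals[lj], lo + lj))
    m_up, i = max((c[0], c[1]) for c in cand)    # most violating "up" index
    m_low, j = min((c[2], c[3]) for c in cand)   # most violating "low" index
    if m_up - m_low < tol:  # KKT conditions met within tolerance
        break
    # Analytic step for the pair (i, j), clipped to the box [0, C].
    k_ij = float(rbf_row(X[i:i + 1], X[j])[0])
    eta = max(2.0 - 2.0 * k_ij, 1e-12)  # K_ii + K_jj - 2*K_ij (RBF diagonal = 1)
    t = min((m_up - m_low) / eta,
            C - alpha[i] if y[i] > 0 else alpha[i],
            alpha[j] if y[j] > 0 else C - alpha[j])
    da_i, da_j = y[i] * t, -y[j] * t
    alpha[i] += da_i
    alpha[j] += da_j
    # Parallel part: each rank updates only its own slice of the gradient.
    G += ys * (y[i] * da_i * rbf_row(X[lo:hi], X[i])
               + y[j] * da_j * rbf_row(X[lo:hi], X[j]))

if rank == 0:
    print(f"support vectors: {int((alpha > 1e-8).sum())} of {n}")
```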
In this paper, we apply popular Bayesian techniques to the support vector classifier. We propose a novel differentiable loss function, called the trigonometric loss function, with the desirable characteristic of natural normalization in the likelihood function, and then follow standard Gaussian-process techniques to set up a Bayesian framework. In this framework, …
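The abstract truncates before the loss is stated. As I recall its form (a hedged reconstruction worth checking against the paper), the trigonometric loss of the margin z = y·f(x) is infinite for z ≤ −1, equals 2·ln sec(π/4·(1 − z)) on (−1, 1), and is zero for z ≥ 1; the "natural normalization" property then follows from a trigonometric identity, which the sketch below verifies numerically.

```python
import numpy as np

def trigonometric_loss(z):
    """Trigonometric loss of the margin z = y * f(x) (assumed form):
    +inf for z <= -1, 2*ln(sec(pi/4*(1 - z))) on (-1, 1), 0 for z >= 1."""
    z = np.asarray(z, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        mid = 2.0 * np.log(1.0 / np.cos(np.pi / 4.0 * (1.0 - z)))
    return np.where(z >= 1.0, 0.0, np.where(z > -1.0, mid, np.inf))

# Natural normalization: p(y=+1|f) + p(y=-1|f) = exp(-loss(f)) + exp(-loss(-f))
# sums to one, since cos^2(pi/4*(1-f)) + cos^2(pi/4*(1+f)) = 1.
f = np.linspace(-0.99, 0.99, 7)
total = np.exp(-trigonometric_loss(f)) + np.exp(-trigonometric_loss(-f))
assert np.allclose(total, 1.0)
```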
[Fig. 7: Representation of a deterministic grid.]
… for the constrained optimization problem, which is known to be more efficient than the method using a penalty function. For various types of Stewart platforms, m = 45 is sufficient to guarantee the global maximum in simulations. GDIF and GDIM can be obtained as 1.0 and 1.1547 [m⁻¹], respectively. From …
This paper describes an improved algorithm for the numerical solution of the support vector machine (SVM) classification problem for all values of the regularization parameter C. The algorithm is motivated by the work of Hastie et al. and follows the main idea of tracking the optimality conditions of the SVM solution for ascending values of C. It …
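Not the paper's path-following method, which tracks the optimality conditions analytically between breakpoints: the brute-force sketch below refits an SVM on a geometric grid of C and watches the quantity the exact algorithm tracks, namely how points move between the margin set (0 < α < C) and the bounded set (α = C) as C ascends. Dataset and parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
prev_margin = None
for C in np.logspace(-2, 2, 9):
    clf = SVC(kernel="rbf", C=C).fit(X, y)
    a = np.abs(clf.dual_coef_).ravel()          # |alpha_i| for support vectors
    margin = set(clf.support_[a < C - 1e-8])    # 0 < alpha < C
    bounded = set(clf.support_[a >= C - 1e-8])  # alpha = C
    moved = "-" if prev_margin is None else len(margin ^ prev_margin)
    print(f"C={C:8.3f}  margin SVs={len(margin):3d}  "
          f"bounded={len(bounded):3d}  changed={moved}")
    prev_margin = margin
```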
This paper presents a new wrapper-based feature selection method for support vector regression (SVR) using its probabilistic predictions. The method computes the importance of a feature by aggregating, over the feature space, the difference between the conditional density functions of the SVR prediction with and without the feature. As the exact computation of …
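The abstract truncates before the computational details. The sketch below substitutes a Gaussian process for the probabilistic SVR (an assumption, since scikit-learn's SVR is not probabilistic) and permutation for feature removal, scoring a feature by the average KL divergence between the predictive densities with the feature intact and with it scrambled.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.gaussian_process import GaussianProcessRegressor

def gauss_kl(mu1, s1, mu2, s2):
    """KL divergence between univariate Gaussians N(mu1, s1^2) and N(mu2, s2^2)."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2) ** 2) / (2 * s2**2) - 0.5

X, y = make_friedman1(n_samples=150, n_features=6, random_state=0)
gp = GaussianProcessRegressor(alpha=1e-2).fit(X, y)
mu, sd = gp.predict(X, return_std=True)   # predictive density with all features

rng = np.random.default_rng(0)
for k in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, k] = rng.permutation(Xp[:, k])  # crude stand-in for "without feature k"
    mu_p, sd_p = gp.predict(Xp, return_std=True)
    score = gauss_kl(mu_p, sd_p, mu, sd).mean()
    print(f"feature {k}: importance ~ {score:.4f}")
```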
In this paper, we give an efficient method for accurately computing the leave-one-out (LOO) error for support vector machines (SVMs) with Gaussian kernels. It is particularly suitable for iterative decomposition methods for solving SVMs. The importance of the various steps of the method is illustrated in detail by showing the performance on six benchmark …
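The paper's own approximation is truncated above; the sketch below shows only the one exact shortcut that is safe to state: leaving out a non-support vector cannot change the SVM solution, so only the support vectors ever need retraining when computing LOO error.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=4, random_state=1)
base = SVC(kernel="rbf", C=1.0, gamma=0.5).fit(X, y)
sv = set(base.support_)

errors = 0
for i in range(len(y)):
    if i not in sv:
        # Removing a non-support vector leaves the solution unchanged,
        # so the base model's prediction is already the LOO prediction.
        pred = base.predict(X[i:i + 1])[0]
    else:
        mask = np.arange(len(y)) != i
        pred = (SVC(kernel="rbf", C=1.0, gamma=0.5)
                .fit(X[mask], y[mask]).predict(X[i:i + 1])[0])
    errors += int(pred != y[i])

print(f"LOO error: {errors / len(y):.3f} "
      f"({len(sv)} of {len(y)} points needed a refit)")
```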
Feature selection is an important aspect of solving data-mining and machine-learning problems. This paper proposes a feature-selection method for Support Vector Machine (SVM) learning. Like most feature-selection methods, the proposed method ranks all features in decreasing order of importance so that the more relevant features can be identified. It uses a …
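The abstract cuts off before naming the ranking criterion, so the sketch below is only a generic illustration of the task (not the paper's method): ranking features by the magnitude of a linear SVM's weights.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=8,
                           n_informative=3, random_state=0)
w = np.abs(LinearSVC(C=1.0, dual=False).fit(X, y).coef_).ravel()
ranking = np.argsort(w)[::-1]  # decreasing order of importance
print("features ranked by |w_k|:", ranking.tolist())
```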
This paper introduces the concepts of dwell-time invariance (DT-invariance) and the maximal constraint-admissible DT-invariant (CADT-invariant) set for discrete-time switching systems under dwell-time switching. The main contributions of this paper include a characterization of DT-invariance; a numerical computation of the maximal CADT-invariant set; an algorithm for the …
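As a simplified, single-mode analogue (not the paper's DT-invariant computation, which handles switching under a dwell-time constraint), the sketch below computes the classical maximal constraint-admissible invariant set for one stable discrete-time system x⁺ = Ax under Hx ≤ h, by stacking constraints H·Aᵏ·x ≤ h until the newest ones are redundant, checked by linear programming. Dynamics and constraints are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.9, 0.2], [0.0, 0.8]])  # stable example dynamics
H = np.vstack([np.eye(2), -np.eye(2)])  # constraint set |x_i| <= 1
h = np.ones(4)

rows, rhs = H.copy(), h.copy()
Ak = A.copy()
for k in range(1, 50):
    new_rows = H @ Ak
    # If every new constraint is implied by the current set, we are done.
    redundant = True
    for r, b in zip(new_rows, h):
        # Maximize r @ x subject to the current constraints (linprog minimizes).
        res = linprog(-r, A_ub=rows, b_ub=rhs, bounds=[(None, None)] * 2)
        if res.status == 0 and -res.fun > b + 1e-9:
            redundant = False
            break
    if redundant:
        print(f"invariant set determined after {k} steps, {len(rhs)} inequalities")
        break
    rows = np.vstack([rows, new_rows])
    rhs = np.concatenate([rhs, h])
    Ak = A @ Ak
```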