Multinomial logistic regression provides the standard penalised maximum-likelihood solution to multi-class pattern recognition problems. More recently, sparse multinomial logistic regression models have found application in text processing and microarray classification, where explicit identification of the most informative features is of…
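As an illustrative aside (not the authors' implementation), sparsity in a multinomial logistic regression model can be induced with an L1 penalty; the following scikit-learn sketch uses placeholder data and a hypothetical regularisation strength:

```python
# Illustrative sketch: L1-penalised (sparse) multinomial logistic regression.
# The L1 penalty drives many coefficients to exactly zero, so the surviving
# non-zero weights identify the most informative features.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.randn(200, 50)           # placeholder data: 200 samples, 50 features
y = np.random.randint(0, 3, size=200)  # placeholder labels for 3 classes

clf = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000)
clf.fit(X, y)

# Features with a non-zero weight in at least one class are "selected".
selected = np.flatnonzero(np.any(clf.coef_ != 0.0, axis=0))
print(f"{selected.size} of {X.shape[1]} features retained")
```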
While the model parameters of a kernel machine are typically given by the solution of a convex optimisation problem, with a single global optimum, the selection of good values for the regularisation and kernel parameters is much less straightforward. Fortunately, the leave-one-out cross-validation procedure can be performed, or at least approximated, very…
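As background (a standard result, not specific to this paper): for kernel machines whose predictions are a linear smooth of the targets, \( \hat{\mathbf{y}} = \mathbf{H}\mathbf{y} \), the leave-one-out residuals admit the well-known closed form

\[ y_i - \hat{y}_i^{(-i)} = \frac{y_i - \hat{y}_i}{1 - h_{ii}}, \]

so the full leave-one-out error can be read off from a single fit rather than \( \ell \) separate re-trainings. This identity underlies the efficient procedures described in several of the abstracts below.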
Mika et al. [1] apply the "kernel trick" to obtain a non-linear variant of Fisher's linear discriminant analysis method, demonstrating state-of-the-art performance on a range of benchmark datasets. We show that leave-one-out cross-validation of kernel Fisher discriminant classifiers can be implemented with a computational complexity of only O(ℓ³)…
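A minimal numpy sketch of the underlying idea, using the standard equivalence between the kernel Fisher discriminant and regularised least-squares classification on ±1 labels (the bias term and the authors' exact derivation are omitted; K, y and lam are assumed inputs):

```python
# Minimal sketch: exact leave-one-out residuals for a regularised
# least-squares / kernel Fisher discriminant style classifier.
# One O(l^3) factorisation replaces l separate re-trainings.
import numpy as np

def loo_residuals(K, y, lam):
    """K: (l, l) kernel matrix, y: (l,) labels in {-1, +1}, lam: regulariser."""
    l = K.shape[0]
    Ainv = np.linalg.inv(K + lam * np.eye(l))  # O(l^3), done once
    alpha = Ainv @ y                           # dual coefficients of the full fit
    H = K @ Ainv                               # hat matrix: predictions = H @ y
    resid = y - K @ alpha                      # training residuals
    return resid / (1.0 - np.diag(H))          # exact leave-one-out residuals
```

The leave-one-out prediction for pattern i is then simply y[i] minus the i-th returned residual.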
Model selection strategies for machine learning algorithms typically involve the numerical optimisation of an appropriate model selection criterion, often based on an estimator of generalisation performance, such as k-fold cross-validation. The error of such an estimator can be broken down into bias and variance components. While unbiasedness is often…
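For reference, the decomposition referred to here is the standard one: writing G for the true generalisation error and Ĝ for its estimator,

\[ \mathbb{E}\big[(\hat{G} - G)^2\big] = \underbrace{\big(\mathbb{E}[\hat{G}] - G\big)^2}_{\text{bias}^2} + \underbrace{\mathbb{E}\big[(\hat{G} - \mathbb{E}[\hat{G}])^2\big]}_{\text{variance}}. \]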
Leave-one-out cross-validation has been shown to give an almost unbiased estimator of the generalisation properties of statistical models, and therefore provides a sensible criterion for model selection and comparison. In this paper we show that exact leave-one-out cross-validation of sparse Least-Squares Support Vector Machines (LS-SVMs) can be implemented…
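As background (the standard LS-SVM formulation, not this paper's contribution): the LS-SVM dual parameters are given by a single linear system,

\[ \begin{bmatrix} \mathbf{K} + \gamma^{-1}\mathbf{I} & \mathbf{1} \\ \mathbf{1}^{\top} & 0 \end{bmatrix} \begin{bmatrix} \boldsymbol{\alpha} \\ b \end{bmatrix} = \begin{bmatrix} \mathbf{y} \\ 0 \end{bmatrix}, \]

which is what makes exact leave-one-out shortcuts of the kind sketched above possible.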
Mika et al. [3] introduce a non-linear formulation of Fisher's linear discriminant, based on the now familiar "kernel trick", demonstrating state-of-the-art performance on a wide range of real-world benchmark datasets. In this paper, we show that the usual regularisation parameter can be adjusted so as to minimise the leave-one-out cross-validation error…
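A hypothetical sketch of the resulting model selection loop, reusing the loo_residuals helper above (the eigendecomposition variant that makes each candidate λ cheap is omitted for brevity):

```python
# Hypothetical sketch: choose the regularisation parameter by minimising
# the sum of squared leave-one-out residuals over a logarithmic grid.
import numpy as np

def select_lambda(K, y, grid=np.logspace(-6, 3, 50)):
    scores = [np.sum(loo_residuals(K, y, lam) ** 2) for lam in grid]
    return grid[int(np.argmin(scores))]
```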
Motivation: Gene selection algorithms for cancer classification, based on the expression of a small number of biomarker genes, have been the subject of considerable research in recent years. Shevade and Keerthi (2003) propose a gene selection algorithm based on sparse logistic regression (SLogReg) incorporating a Laplace prior to promote sparsity in the…
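The connection exploited here is standard: a Laplace prior \( p(w_i) \propto \exp(-\lambda |w_i|) \) on each weight makes maximum a posteriori estimation equivalent to L1-penalised maximum likelihood,

\[ \hat{\mathbf{w}} = \arg\min_{\mathbf{w}} \Big\{ -\log \mathcal{L}(\mathbf{w}) + \lambda \sum_i |w_i| \Big\}, \]

so weights associated with uninformative genes are driven exactly to zero.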
Kernel logistic regression (KLR) is the kernel learning method best suited to binary pattern recognition problems where estimates of the a posteriori probability of class membership are required. Such problems occur frequently in practical applications, for instance because the operational prior class probabilities, or equivalently the relative misclassification…
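A minimal, self-contained sketch of regularised kernel logistic regression fitted by Newton/IRLS iterations (the bias term is omitted; K and binary y in {0, 1} are assumed inputs, and the model is the usual representer form p(y=1|x) = σ(Σᵢ αᵢ k(xᵢ, x))):

```python
# Minimal sketch: regularised kernel logistic regression fitted by
# iteratively re-weighted least squares (IRLS); each iteration solves a
# weighted kernel ridge regression on the "working responses" z.
import numpy as np

def fit_klr(K, y, lam, n_iter=25):
    """K: (l, l) kernel matrix, y: (l,) labels in {0, 1}, lam: regulariser."""
    alpha = np.zeros(K.shape[0])
    for _ in range(n_iter):
        f = K @ alpha                            # decision values
        p = 1.0 / (1.0 + np.exp(-f))             # a posteriori estimates
        W = np.maximum(p * (1.0 - p), 1e-10)     # IRLS weights, kept positive
        z = f + (y - p) / W                      # working responses
        alpha = np.linalg.solve(K + lam * np.diag(1.0 / W), z)
    return alpha
```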
Survival analysis is a branch of statistics concerned with the time elapsing before "failure," with diverse applications in medical statistics and the analysis of the reliability of electrical or mechanical components. We introduce a parametric accelerated life survival analysis model based on kernel learning methods that, at least in principle, is able to…
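For context (the generic accelerated life form, not necessarily this paper's exact specification), such models typically posit

\[ \log T = f(\mathbf{x}) + \sigma \varepsilon, \]

where T is the failure time, f here would be a kernel expansion \( f(\mathbf{x}) = \sum_i \alpha_i k(\mathbf{x}_i, \mathbf{x}) \), and ε follows a fixed parametric error distribution; covariates act by accelerating or decelerating the time scale.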