- Published 2008 in Math. Oper. Res.

One recently proposed criterion to separate two datasets in discriminant analysis is to use a hyperplane that minimises the sum of distances to it from all the misclassified data points. Here all distances are measured by some fixed norm, while misclassification means lying on the wrong side of the hyperplane, or rather in the wrong halfspace. In this paper we study the problem of determining such an optimal halfspace when points are distributed according to an arbitrary random vector X in R^d. In the unconstrained case in dimension d, we prove that any optimal separating halfspace always balances the misclassified points. Moreover, under polyhedrality assumptions on the support of X, there always exists an optimal separating halfspace passing through d affinely independent points. It follows that the problem is polynomially solvable in fixed dimension by an algorithm of complexity O(n^(d+1)) when the support of X consists of n points. All these results are strengthened in the one-dimensional case, yielding an algorithm with complexity linear in the cardinality of the support of X. If a different norm is used for each dataset in order to measure distances to the hyperplane, or if all distances are measured by a

∗ Facultad de Matemáticas, Universidad de Sevilla, Tarfia s/n, 41012 Sevilla, Spain, e-mail: ecarrizosa@us.es
† MOSI Department of Mathematics, Operational Research, Statistics and Information Systems for Management, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels, Belgium, e-mail: Frank.Plastria@vub.ac.be
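To illustrate the one-dimensional case, here is a minimal sketch (not the paper's algorithm; the function names and the brute-force candidate enumeration are my own). It relies on the abstract's structural result that, for finite supports, an optimum passes through a data point: the objective is piecewise linear in the threshold, so it suffices to evaluate the sum of distances of misclassified points at each data point, for both orientations of the halfline.

```python
def separation_cost(points_a, points_b, t):
    """Sum of distances to threshold t from misclassified points,
    with class A assigned to x <= t and class B to x >= t."""
    cost = sum(x - t for x in points_a if x > t)   # A points on B's side
    cost += sum(t - x for x in points_b if x < t)  # B points on A's side
    return cost

def best_threshold(points_a, points_b):
    """Enumerate data points as candidate thresholds and try both
    orientations; returns (cost, threshold, a_on_left)."""
    best = None
    for t in points_a + points_b:
        for left, right in ((points_a, points_b), (points_b, points_a)):
            c = separation_cost(left, right, t)
            if best is None or c < best[0]:
                best = (c, t, left is points_a)
    return best
```

This naive enumeration is quadratic; the paper's strengthened one-dimensional result gives an algorithm linear in the cardinality of the support, which a single sorted sweep with running prefix sums would achieve.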

@article{Carrizosa2008OptimalES,
title={Optimal Expected-Distance Separating Halfspace},
author={Emilio Carrizosa and Frank Plastria},
journal={Math. Oper. Res.},
year={2008},
volume={33},
pages={662-677}
}