Steven De Bruyne

We investigate the effects of dimensionality reduction, using different techniques and different target dimensions, on six two-class data sets with numerical attributes as pre-processing for two classification algorithms. Besides reducing the dimensionality using principal components and linear discriminants, we also introduce four new techniques. After …
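The preprocessing pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's experimental setup: the synthetic data set, the choice of k-nearest neighbours as the downstream classifier, and the candidate dimensions are all assumptions made here for demonstration; only PCA is used, since the four new techniques are not specified in the snippet.

```python
# Sketch: dimensionality reduction as pre-processing for a two-class
# classifier. Synthetic data and the KNN classifier are illustrative
# choices, not those of the original study.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# A two-class data set with numerical attributes.
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

# Evaluate the same classifier after reduction to different dimensions.
for k in (2, 5, 10):
    pipe = make_pipeline(PCA(n_components=k), KNeighborsClassifier())
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"PCA dim={k}: mean CV accuracy={score:.3f}")
```

Swapping `PCA` for `LinearDiscriminantAnalysis` (capped at one component for a two-class problem) gives the linear-discriminant variant of the same pipeline.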
We consider the linear classification method consisting of separating two sets of points in d-space by a hyperplane. We wish to determine the hyperplane which minimises the sum of distances from all misclassified points to the hyperplane. To this end two local descent methods are developed, one grid-based and one optimisation-theory-based, and are embedded …
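The objective above can be made concrete with a toy sketch: for a hyperplane w·x + b = 0, sum the Euclidean distances |w·x + b| / ||w|| over misclassified points only, and search for a minimiser. The exhaustive angle/offset grid below is our own simplification for 2-D, loosely echoing the grid-based method mentioned in the abstract, not the paper's actual descent procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two point clouds in 2-D (illustrative data, not from the paper).
A = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(50, 2))
B = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(50, 2))
X = np.vstack([A, B])
y = np.hstack([-np.ones(50), np.ones(50)])

def misclass_distance_sum(w, b):
    """Sum of distances from misclassified points to the hyperplane w.x + b = 0."""
    margins = y * (X @ w + b)                      # negative => misclassified
    dists = np.abs(X @ w + b) / np.linalg.norm(w)  # point-to-hyperplane distance
    return dists[margins < 0].sum()

# Toy grid search over hyperplane angle and offset.
best = None
for theta in np.linspace(0, np.pi, 180):
    w = np.array([np.cos(theta), np.sin(theta)])   # unit normal
    for b in np.linspace(-6, 6, 121):
        val = misclass_distance_sum(w, b)
        if best is None or val < best[0]:
            best = (val, w, b)
print("minimal sum of misclassification distances:", best[0])
```

Note that this objective differs from the SVM hinge loss: correctly classified points contribute nothing regardless of margin, so the problem is non-convex and local descent methods are a natural fit.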
In [3] it has been demonstrated that decision trees built in a feature space yielded by some eigen transformation can be competitive with industry standards. Unfortunately, the selection of such a transformation, and of the dimension of the feature space that should be retained, is not self-evident. These trees, however, have interesting properties that can be …
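A decision tree built in an eigen-transformed feature space can be sketched as a PCA-then-tree pipeline. The data set, the retained dimension of 5, and the comparison against a plain tree are illustrative assumptions here, not the setup of [3].

```python
# Sketch: a decision tree grown in an eigen (PCA) feature space,
# compared against a tree grown on the raw attributes.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # a two-class numerical data set

plain = DecisionTreeClassifier(random_state=0)
eigen = make_pipeline(PCA(n_components=5),   # retained dimension is a guess
                      DecisionTreeClassifier(random_state=0))

plain_acc = cross_val_score(plain, X, y, cv=5).mean()
eigen_acc = cross_val_score(eigen, X, y, cv=5).mean()
print(f"plain tree: {plain_acc:.3f}")
print(f"eigen tree: {eigen_acc:.3f}")
```

The splits of the eigen tree are axis-parallel in the transformed space, hence oblique in the original attribute space, which is one source of the properties the abstract alludes to.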