MOTIVATION: One problem with discriminant analysis of DNA microarray data is that each sample is represented by quite a large number of genes, many of which are irrelevant, insignificant or redundant to the discriminant problem at hand. Methods for selecting important genes are, therefore, of much significance in microarray data analysis. In the present …
For classification applications, the role of the hidden-layer neurons of a radial basis function (RBF) neural network can be interpreted as a function that maps input patterns from a nonlinearly separable space to a linearly separable space. In the new space, the responses of the hidden-layer neurons form new feature vectors. The discriminative power is then …
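The hidden-layer mapping described above can be sketched as follows. This is a generic Gaussian-RBF feature map, not the paper's exact construction; the centers and width below are placeholder choices.

```python
import numpy as np

def rbf_hidden_features(X, centers, width):
    """Map input patterns to the responses of Gaussian RBF hidden units.

    Each row of the result is the new feature vector formed by the
    hidden-layer responses; `centers` and `width` are illustrative.
    """
    # squared Euclidean distance between every sample and every center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# two 2-D samples mapped through two hidden units
X = np.array([[0.0, 0.0], [1.0, 1.0]])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
phi = rbf_hidden_features(X, centers, width=1.0)
```

A sample sitting exactly on a center responds with 1.0 to that unit; responses decay with distance, which is what makes the new space easier to separate linearly.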
Neural networks with threshold activation functions are highly desirable because of the ease of hardware implementation. However, popular gradient-based learning algorithms cannot be used directly to train these networks, because threshold functions are nondifferentiable. Methods available in the literature mainly focus on approximating the threshold …
  • Kezhi Mao
  • IEEE Trans. Systems, Man, and Cybernetics, Part B
  • 2004
In many pattern classification applications, data are represented by high-dimensional feature vectors, which induce high computational cost and reduce classification speed in the context of support vector machines (SVMs). To reduce the dimensionality of pattern representation, we develop a discriminative function pruning analysis (DFPA) feature subset …
Feature selection often aims to select a compact feature subset with which to build a pattern classifier of reduced complexity, so as to achieve improved classification performance. From the perspective of pattern analysis, producing a stable or robust solution is also a desired property of a feature selection algorithm. However, the issue of robustness is often …
Support vector machines (SVMs) have been used extensively. However, SVMs are known to face difficulty in solving large complex problems because of the intensive computation involved in their training algorithms, which are at least quadratic with respect to the number of training examples. This paper proposes a new, simple, and efficient network architecture …
The Mahalanobis class separability measure provides an effective evaluation of the discriminative power of a feature subset and is widely used in feature selection. However, this measure is computationally intensive, or even prohibitive, when applied to gene expression data. In this study, a recursive approach to evaluating the Mahalanobis measure is proposed …
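For reference, a minimal two-class form of the Mahalanobis separability measure (pooled within-class covariance, textbook formulation — the paper's recursive evaluation scheme is not reproduced here):

```python
import numpy as np

def mahalanobis_separability(X1, X2):
    """Two-class Mahalanobis separability of a feature subset.

    Computes (m1 - m2)^T S^-1 (m1 - m2), where S is the pooled
    within-class covariance of the two sample matrices.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # pooled within-class covariance
    S = (np.cov(X1, rowvar=False) * (len(X1) - 1)
         + np.cov(X2, rowvar=False) * (len(X2) - 1)) / (len(X1) + len(X2) - 2)
    d = m1 - m2
    return float(d @ np.linalg.solve(S, d))

# toy example: the second class is the first shifted by 3 in both features
X1 = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
X2 = X1 + 3.0
J = mahalanobis_separability(X1, X2)
```

The cost bottleneck the abstract refers to is visible here: every candidate subset requires forming and inverting (or solving against) its own covariance matrix, which becomes expensive when thousands of gene subsets must be scored.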
Feature selection is an important issue in pattern classification. In the present study, we develop a fast orthogonal forward selection (FOFS) algorithm for feature subset selection. The FOFS algorithm employs an orthogonal transform to decompose correlations among candidate features, but performs the orthogonal decomposition in an implicit way. …
  • Kezhi Mao
  • IEEE Trans. Systems, Man, and…
  • 2004
Sequential forward selection (SFS) and sequential backward elimination (SBE) are two commonly used search methods in feature subset selection. In the present study, we derive orthogonal forward selection (OFS) and orthogonal backward elimination (OBE) algorithms for feature subset selection by incorporating Gram–Schmidt and Givens orthogonal …
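The baseline SFS search that these orthogonal variants build on can be sketched as below. This is plain greedy SFS with a generic subset criterion, not the OFS/OBE algorithms with Gram–Schmidt or Givens transforms; the `fisher_like` score is an illustrative stand-in.

```python
import numpy as np

def sfs(X, y, score, k):
    """Sequential forward selection: greedily add the feature that most
    improves score(X_subset, y) until k features are selected."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best = max(remaining, key=lambda f: score(X[:, selected + [f]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

def fisher_like(Xs, y):
    # ratio of between-class mean separation to pooled within-class
    # variance (a simple illustrative criterion, two classes 0/1)
    m0, m1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    v = Xs[y == 0].var(axis=0).sum() + Xs[y == 1].var(axis=0).sum()
    return float(((m0 - m1) ** 2).sum() / (v + 1e-12))

# synthetic data: feature 1 carries the class signal, 0 and 2 are noise
rng = np.random.default_rng(0)
y = np.array([0] * 20 + [1] * 20)
X = rng.normal(size=(40, 3))
X[:, 1] += 5 * y
chosen = sfs(X, y, fisher_like, k=2)
```

Because features are scored jointly with those already chosen, SFS re-evaluates correlated candidates at every step; the orthogonal decompositions mentioned in the abstract are one way to remove that redundancy from the evaluation.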