MOTIVATION: One problem with discriminant analysis of DNA microarray data is that each sample is represented by a very large number of genes, many of which are irrelevant, insignificant, or redundant to the discrimination problem at hand. Methods for selecting important genes are therefore of considerable significance in microarray data analysis. In the present…
Neural networks with threshold activation functions are highly desirable because of their ease of hardware implementation. However, popular gradient-based learning algorithms cannot be used directly to train these networks, as threshold functions are nondifferentiable. Methods available in the literature mainly focus on approximating the threshold…
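The snippet is cut off before this paper's own approach, so the following is only a minimal numpy sketch of the problem it describes: a unit-step activation has zero derivative almost everywhere, so backpropagation receives no learning signal. The surrogate-gradient workaround shown (differentiating a steep sigmoid in place of the step) is one common remedy from the literature, not necessarily this paper's method; `beta` and the function names are illustrative.

```python
import numpy as np

def step(z):
    """Hard threshold activation: easy in hardware, nondifferentiable."""
    return (z >= 0.0).astype(float)

def sigmoid(z, beta=5.0):
    """Smooth surrogate; larger beta approximates the step more closely."""
    return 1.0 / (1.0 + np.exp(-beta * z))

rng = np.random.default_rng(0)
x, w = rng.normal(size=3), rng.normal(size=3)
z = w @ x

# The true gradient of step(z) w.r.t. w is 0 everywhere (and undefined at
# z == 0), so backprop through step() can never update w.
grad_true = np.zeros_like(w)

# Surrogate-gradient trick: use step() in the forward pass, but differentiate
# the sigmoid approximation in the backward pass.
beta = 5.0
s = sigmoid(z, beta)
grad_surrogate = beta * s * (1.0 - s) * x

print("forward output:", step(z))
print("true gradient:", grad_true)
print("surrogate gradient:", grad_surrogate)
```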
For classification applications, the role of the hidden-layer neurons of a radial basis function (RBF) neural network can be interpreted as a function that maps input patterns from a nonlinearly separable space to a linearly separable space. In the new space, the responses of the hidden-layer neurons form new feature vectors. The discriminative power is then…
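To make the hidden-layer mapping concrete, here is a minimal sketch, assuming Gaussian hidden units with centers placed at the training points (a common simple choice; `gamma` and the helper name are illustrative): XOR is not linearly separable in the input space, but a plain linear classifier separates it perfectly in the space of hidden-unit responses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# XOR: not linearly separable in the original input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

def rbf_features(X, centers, gamma=2.0):
    """Hidden-unit responses: phi_j(x) = exp(-gamma * ||x - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Use the training points themselves as the RBF centers.
Phi = rbf_features(X, centers=X)

# A plain linear classifier separates the mapped patterns perfectly.
clf = LogisticRegression().fit(Phi, y)
print("accuracy in RBF feature space:", clf.score(Phi, y))  # expected: 1.0
```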
  • Kezhi Mao
  • 2004
In many pattern classification applications, data are represented by high-dimensional feature vectors, which incur high computational cost and reduce classification speed in the context of support vector machines (SVMs). To reduce the dimensionality of the pattern representation, we develop a discriminative function pruning analysis (DFPA) feature subset…
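The abstract is cut off before DFPA itself is defined, so the sketch below is not the paper's algorithm; it only illustrates the general idea of pruning features by their contribution to an SVM discriminant function, here approximated by ranking features on the magnitude of a linear SVM's weights. All dataset parameters are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Synthetic high-dimensional data: 200 features, only 10 informative.
X, y = make_classification(n_samples=300, n_features=200, n_informative=10,
                           n_redundant=20, random_state=0)

# Rank features by the magnitude of the linear SVM weight vector; features
# with small |w_j| contribute little to the discriminant function w.x + b.
svm = LinearSVC(dual=False, max_iter=5000).fit(X, y)
ranking = np.argsort(-np.abs(svm.coef_.ravel()))

for k in (200, 50, 20):
    Xk = X[:, ranking[:k]]
    acc = cross_val_score(LinearSVC(dual=False, max_iter=5000), Xk, y, cv=5).mean()
    print(f"top {k:3d} features: cv accuracy = {acc:.3f}")
```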
Feature selection is an important issue in pattern classification. In the present study, we develop a fast orthogonal forward selection (FOFS) algorithm for feature subset selection. The FOFS algorithm employs an orthogonal transform to decompose correlations among candidate features, but it performs the orthogonal decomposition in an implicit way.
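The implicit decomposition that makes FOFS fast is not described in the snippet, so the sketch below shows only the classical explicit form of orthogonal forward selection for reference: candidate features are deflated by Gram-Schmidt against those already selected, and the candidate whose orthogonal component correlates most strongly with the target is picked next. The function and test data are illustrative, not the paper's FOFS.

```python
import numpy as np

def orthogonal_forward_selection(X, y, k):
    """Greedy feature selection with explicit Gram-Schmidt orthogonalization.

    At each step, every remaining candidate is scored by the squared
    correlation between its component orthogonal to the already-selected
    features and the target y; the best-scoring candidate is selected,
    and all remaining candidates are deflated against it.
    """
    X = X - X.mean(axis=0)      # center features
    y = y - y.mean()            # center target
    residual = X.copy()         # candidate columns, orthogonal to selections
    selected = []
    remaining = list(range(X.shape[1]))
    for _ in range(k):
        scores = {}
        for j in remaining:
            v = residual[:, j]
            n2 = v @ v
            scores[j] = 0.0 if n2 < 1e-12 else (v @ y) ** 2 / n2
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
        # Deflate all remaining candidates against the selected direction.
        q = residual[:, best] / np.linalg.norm(residual[:, best])
        for j in remaining:
            residual[:, j] -= (q @ residual[:, j]) * q
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
y = 2.0 * X[:, 3] - X[:, 7] + 0.1 * rng.normal(size=100)
print(orthogonal_forward_selection(X, y, k=2))  # expected: [3, 7]
```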
Support vector machines (SVMs) have been used extensively. However, SVMs are known to have difficulty with large, complex problems due to the intensive computation involved in their training algorithms, whose cost is at least quadratic in the number of training examples. This paper proposes a new, simple, and efficient network architecture…
  • Kezhi Mao
  • 2005
Principal components analysis (PCA) is probably the best-known approach to unsupervised dimensionality reduction. However, the axes of the lower-dimensional space, i.e., the principal components (PCs), are a set of new variables carrying no clear physical meaning. Thus, interpretation of results obtained in the lower-dimensional PCA space and data acquisition for…
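As background for the interpretability complaint (the paper's own proposal is truncated above), a minimal numpy sketch of standard PCA: each principal component is a dense linear combination of all original variables, which is exactly why individual PCs carry no clear physical meaning. Data dimensions and the seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))      # 100 samples, 5 original variables
Xc = X - X.mean(axis=0)            # PCA operates on centered data

# Principal components via SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T             # data expressed in the first two PCs
print("projected data shape:", scores.shape)

# Each PC mixes ALL original variables with dense weights, so no single PC
# corresponds to any one measured quantity.
print("loadings of PC1 on the 5 original variables:", np.round(Vt[0], 3))
```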
Random Projection (RP) is a popular technique for dimensionality reduction because of its high computational efficiency. However, RP may not yield a highly discriminative low-dimensional space, and thus may not produce the best pattern classification performance, since the random transformation matrix of RP is independent of the data. In this paper, we propose a Semi-Random…
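A minimal sketch of plain Gaussian RP, to show the property the paper targets: the projection matrix is drawn without looking at the data, so pairwise geometry is roughly preserved (Johnson-Lindenstrauss) but nothing pushes the low-dimensional space to be discriminative. The Semi-Random Projection itself is truncated above and is not reproduced here; dimensions and the seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 1000, 50               # samples, original dim, reduced dim
X = rng.normal(size=(n, d))

# Plain RP: the projection matrix is drawn independently of the data.
R = rng.normal(size=(d, k)) / np.sqrt(k)
Z = X @ R

# Pairwise distances are approximately preserved, but class labels played
# no role in choosing R, so discriminability is not optimized.
i, j = 0, 1
print("original distance:", np.linalg.norm(X[i] - X[j]))
print("projected distance:", np.linalg.norm(Z[i] - Z[j]))
```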