Various prototype reduction schemes have been reported in the literature. Foremost among these are the Prototypes for Nearest Neighbor (PNN), Vector Quantization (VQ), and Support Vector Machine (SVM) methods. In this paper, we shall show that these schemes can be enhanced by the introduction of a post-processing phase that is related, but not …
Various Prototype Reduction Schemes (PRS) have been reported in the literature. Based on their operating characteristics, these schemes fall into two fairly distinct categories: those which are of a creative sort, and those which are essentially selective. The norms for evaluating these methods are typically the reduction rate and the classification …
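The abstract's truncation elides which schemes fall into each category, but a classic example of the *selective* sort is Hart's Condensed Nearest Neighbor (CNN), which keeps a subset of the original samples rather than synthesizing new ones. The sketch below is an illustrative NumPy implementation of CNN, not any scheme from the paper itself; the function name and structure are my own.

```python
import numpy as np

def condensed_nn(X, y):
    """Hart's Condensed Nearest Neighbor: a classic *selective* PRS.

    Greedily grows a "store" of prototypes until every training point
    is correctly classified by the 1-NN rule over the store alone.
    """
    store_idx = [0]  # seed the store with the first sample
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in store_idx:
                continue
            # 1-NN prediction for X[i] using the current store
            S = X[store_idx]
            dists = np.linalg.norm(S - X[i], axis=1)
            nearest = store_idx[int(np.argmin(dists))]
            if y[nearest] != y[i]:  # misclassified -> absorb into store
                store_idx.append(i)
                changed = True
    return np.array(store_idx)
```

The two evaluation norms the abstract names fall out directly: the reduction rate is `1 - len(store_idx) / len(X)`, and the classification accuracy is that of a 1-NN classifier restricted to the retained prototypes.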
In Kernel-based Nonlinear Subspace (KNS) methods, the subspace dimension has a strong influence on the performance of the subspace classifier. To obtain high classification accuracy, a large dimension is generally required. However, if the chosen subspace dimension is too large, it leads to low performance due to the overlapping of the resultant …
Fisher's Linear Discriminant Analysis (LDA) is a traditional dimensionality reduction method that has proven successful for decades. To enhance LDA's power for high-dimensional pattern classification, such as face recognition, numerous LDA extensions have been proposed in the literature. This paper proposes a new method that …
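For context on what the extensions build upon, the classical two-class Fisher discriminant has the closed form w ∝ Sw⁻¹(m₁ − m₀), where Sw is the within-class scatter and m₀, m₁ are the class means. A minimal sketch (my own illustration, not the paper's proposed method):

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher LDA: the direction w maximizing the ratio of
    between-class to within-class scatter, w = Sw^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter = sum of (unnormalized) class covariances
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)
```

Note that in the high-dimensional, small-sample settings the abstract mentions (e.g. face images), Sw is typically singular and `solve` fails outright; working around exactly this degeneracy is what motivates the LDA-extension literature.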
Most of the Prototype Reduction Schemes (PRS) reported in the literature process the data in its entirety to yield a subset of prototypes that are useful in nearest-neighbor-like classification. Foremost among these are the Prototypes for Nearest Neighbor (PNN) classifiers, the Vector Quantization (VQ) technique, and Support Vector Machines (SVM). …
In Kernel-based Nonlinear Subspace (KNS) methods, the length of the projections onto the principal component directions in the feature space is computed using a kernel matrix, K, whose dimension is equal to the number of sample data points. Clearly this is problematic, especially for large data sets. In this paper, we solve this problem by …
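To make the bottleneck concrete: with n training samples, K is n × n, so both storage and the eigendecomposition behind the projection lengths scale badly in n. The paper's actual remedy is elided by the truncation; the sketch below (an illustration with an assumed RBF kernel, not the paper's algorithm) merely shows how evaluating against a reduced set of m prototypes shrinks the matrix from n × n to n × m.

```python
import numpy as np

def rbf_kernel_matrix(X, Z=None, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2).

    With Z = X this is the full n x n kernel matrix whose size is the
    bottleneck; passing a reduced prototype set Z of m points instead
    yields an n x m matrix.
    """
    Z = X if Z is None else Z
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq)
```

A natural pairing, suggested by the authors' other work listed here, is to obtain Z via a prototype reduction scheme before the kernel machinery is applied.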
The aim of this paper is to present a strategy by which a new philosophy for pattern classification, namely that pertaining to dissimilarity-based classifiers (DBCs), can be efficiently implemented. This methodology, proposed by Duin and his co-authors (see Refs. [Experiments with a featureless approach to pattern recognition, Prototype selection for …
The subspace method of pattern recognition is a classification technique in which pattern classes are specified in terms of linear subspaces spanned by their respective class-based basis vectors. To overcome the limitations of the linear methods, Kernel-based Nonlinear Subspace (KNS) methods have recently been proposed in the literature. In KNS, the kernel …
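The linear subspace method that the abstract starts from can be sketched in a few lines: each class gets an orthonormal basis for its principal subspace (as in the classical CLAFIC scheme), and a test vector is assigned to the class whose subspace captures the largest squared projection length. This is an illustrative baseline with names of my own choosing, not the kernelized method of the paper.

```python
import numpy as np

def class_basis(X, dim):
    """Orthonormal basis of the `dim`-dimensional principal subspace
    of one class (one such basis is built per class)."""
    # SVD of the (uncentered, as in classical CLAFIC) data matrix
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:dim].T  # columns span the class subspace

def classify(x, bases):
    """Assign x to the class whose subspace yields the largest
    squared projection length ||B^T x||^2."""
    scores = [np.sum((B.T @ x) ** 2) for B in bases]
    return int(np.argmax(scores))
```

The limitation the kernel variants address is visible here: the bases are linear, so classes that are not well approximated by linear subspaces of the input space are handled by computing the same projection lengths implicitly in a kernel-induced feature space.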
The aim of this paper is to present a dissimilarity measure strategy by which a new philosophy for pattern classification, pertaining to dissimilarity-based classifiers (DBCs), can be efficiently implemented. In DBCs, classifiers are not based on the feature measurements of individual patterns, but rather on a suitable dissimilarity measure among the …
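The core DBC construction can be illustrated concretely: each object is re-represented by the vector of its dissimilarities to a fixed representation set R of prototypes, and ordinary classifiers are then trained on these vectors instead of the raw feature measurements. The sketch below assumes Euclidean distance as the dissimilarity measure purely for illustration; the measure the paper proposes is elided by the truncation.

```python
import numpy as np

def dissimilarity_representation(X, prototypes):
    """Map each object to the vector of its dissimilarities to a fixed
    representation set R: row i is [d(x_i, r_1), ..., d(x_i, r_m)].
    Euclidean distance stands in here for the dissimilarity measure."""
    return np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
```

The resulting matrix has one column per prototype, so the choice and size of R governs both the dimensionality of the dissimilarity space and the cost of computing it, which is where prototype selection (the subject of the Duin references above) enters.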