Dimensionality reduction is widely used in artificial intelligence and machine learning. Linear projection of features is of particular interest for dimensionality reduction, since it is simple to compute and amenable to analytical study. In this paper, we propose an essentially linear projection technique, called locality-preserved maximum …
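To illustrate why linear projections are attractive for dimensionality reduction, here is a minimal sketch of a generic linear projection (standard PCA) — not the locality-preserved method proposed in the abstract above, which is truncated here. The point is that the learned map is a single matrix multiplication:

```python
import numpy as np

def pca_projection(X, k):
    """Project n x d data X onto its top-k principal directions.

    A generic linear dimensionality-reduction sketch (PCA), shown only
    to illustrate that a linear projection is simple to compute: once
    the d x k matrix W is learned, the map is just X @ W.
    """
    Xc = X - X.mean(axis=0)                  # center the data
    cov = Xc.T @ Xc / (len(X) - 1)           # d x d sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :k]              # top-k eigenvectors as columns
    return Xc @ W, W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y, W = pca_projection(X, 2)
print(Y.shape)  # (200, 2)
```

The projection matrix W is orthonormal, so the reduced features are decorrelated linear combinations of the original ones.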
Fisher linear discriminant analysis (LDA) is a classical subspace-learning technique for extracting discriminative features in pattern recognition problems. The formulation of the Fisher criterion is based on the L2-norm, which makes LDA prone to being affected by outliers. In this paper, we propose a new method, termed LDA-L1, by maximizing …
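For reference, the classical L2-norm Fisher criterion that the abstract above contrasts with can be sketched for the two-class case, where the optimal direction is w = Sw⁻¹(m1 − m2). This is the standard LDA baseline, not the paper's LDA-L1 method (which replaces the L2-norm and is not implemented here):

```python
import numpy as np

def fisher_lda_2class(X1, X2):
    """Classical two-class Fisher LDA direction w = Sw^{-1} (m1 - m2).

    Illustrates the standard L2-norm Fisher criterion; because scatter
    is measured by squared distances, outliers can dominate Sw, which
    is the sensitivity the L1-based variant aims to reduce.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # within-class scatter matrix (sum of squared deviations)
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
X1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(100, 2))
X2 = rng.normal(loc=[4.0, 4.0], scale=1.0, size=(100, 2))
w = fisher_lda_2class(X1, X2)
```

Projecting both classes onto w separates their means while keeping within-class spread small.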
In this paper, we propose a new approach, called local and weighted maximum margin discriminant analysis (LWMMDA), for object discrimination. LWMMDA is a subspace-learning method that identifies the underlying nonlinear manifold for discrimination. The goal of LWMMDA is to seek a transformation such that data points of different classes are …
This paper formulates a novel expectation-maximization (EM) algorithm for the mixture of multivariate t-distributions. By introducing a new kind of “missing” data, we show that the empirically improved iterative algorithm in the literature for the mixture of multivariate t-distributions is in fact a type of EM algorithm; thus a theoretical analysis is …
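A minimal sketch of EM for a mixture of multivariate t-distributions may help fix ideas. This is the textbook scheme with the degrees of freedom ν held fixed (a full EM would also update ν); it is not the paper's specific reformulation. The E-step computes responsibilities together with the latent precision weights u = (ν + d)/(ν + δ), where δ is the Mahalanobis distance, and the M-step is a weighted Gaussian update:

```python
import numpy as np
from scipy.special import gammaln

def t_logpdf(X, mu, Sigma, nu):
    """Return (log-density, Mahalanobis distance) for a multivariate t."""
    d = X.shape[1]
    L = np.linalg.cholesky(Sigma)
    z = np.linalg.solve(L, (X - mu).T)        # whitened residuals
    delta = np.sum(z ** 2, axis=0)            # squared Mahalanobis distance
    logdet = 2 * np.sum(np.log(np.diag(L)))
    lp = (gammaln((nu + d) / 2) - gammaln(nu / 2)
          - 0.5 * (d * np.log(nu * np.pi) + logdet)
          - 0.5 * (nu + d) * np.log1p(delta / nu))
    return lp, delta

def em_t_mixture(X, mus, Sigmas, pis, nu=5.0, iters=50):
    """EM for a K-component multivariate t mixture with fixed df nu."""
    n, d = X.shape
    K = len(pis)
    for _ in range(iters):
        # E-step: responsibilities tau and latent weights u = E[gamma | x]
        logp = np.empty((n, K))
        U = np.empty((n, K))
        for j in range(K):
            lp, delta = t_logpdf(X, mus[j], Sigmas[j], nu)
            logp[:, j] = np.log(pis[j]) + lp
            U[:, j] = (nu + d) / (nu + delta)
        logp -= logp.max(axis=1, keepdims=True)   # numerical stability
        tau = np.exp(logp)
        tau /= tau.sum(axis=1, keepdims=True)
        # M-step: weighted mean/scatter updates
        for j in range(K):
            w = tau[:, j] * U[:, j]
            mus[j] = w @ X / w.sum()
            R = X - mus[j]
            Sigmas[j] = (R * w[:, None]).T @ R / tau[:, j].sum()
            pis[j] = tau[:, j].mean()
    return mus, Sigmas, pis

rng = np.random.default_rng(3)
X = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(150, 2)),
               rng.normal([5.0, 5.0], 1.0, size=(150, 2))])
mus, Sigmas, pis = em_t_mixture(
    X,
    mus=[np.array([1.0, 1.0]), np.array([4.0, 4.0])],
    Sigmas=[np.eye(2), np.eye(2)],
    pis=[0.5, 0.5])
```

The downweighting of points with large Mahalanobis distance (small u) is what makes the t mixture more robust than a Gaussian mixture.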
Recently, two-dimensional principal component analysis (2DPCA), a novel eigenvector-based method, has proved to be an efficient technique for image feature extraction and representation. In this paper, by supposing a parametric Gaussian distribution over the image space (spanned by the row vectors of the 2D image matrices) and a spherical Gaussian noise model …
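The deterministic 2DPCA procedure the abstract builds on can be sketched as follows; the paper's probabilistic (Gaussian-model) interpretation is not implemented here. Unlike classical PCA, the images are not flattened into vectors: the image scatter matrix is formed over the row space, and each image is represented by a small feature matrix:

```python
import numpy as np

def twod_pca(images, k):
    """2DPCA: learn a d x k projection from n images of size m x d.

    The image scatter matrix G is built directly from the row vectors
    of the 2D image matrices (no vectorization), and each image A is
    represented by its m x k feature matrix A @ P.
    """
    Abar = images.mean(axis=0)                              # mean image, m x d
    # image covariance (scatter) matrix over the row space, d x d
    G = sum((A - Abar).T @ (A - Abar) for A in images) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)                    # ascending order
    P = eigvecs[:, ::-1][:, :k]                             # top-k projection axes
    feats = np.array([A @ P for A in images])               # n x m x k
    return feats, P

rng = np.random.default_rng(2)
images = rng.normal(size=(20, 8, 10))   # 20 synthetic 8 x 10 "images"
feats, P = twod_pca(images, 3)
print(feats.shape)  # (20, 8, 3)
```

Because G is only d x d (here 10 x 10), the eigendecomposition is far cheaper than flattening each image into an 80-dimensional vector and running classical PCA.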