Dimensionality reduction is a common task in artificial intelligence and machine learning. Linear projection of features is of particular interest for dimensionality reduction because it is simple to compute and to analyze analytically. In this paper, we propose an essentially linear projection technique, called locality-preserved maximum…
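Since the abstract is truncated before the method's full name, the sketch below only illustrates the generic linear-projection pattern such techniques build on, using PCA as a stand-in; the function names are placeholders, not the paper's algorithm.

```python
# Minimal sketch of linear feature projection for dimensionality reduction.
# PCA stands in for the (unnamed) locality-preserving projection of the paper.
import numpy as np

def fit_linear_projection(X, k):
    """Learn a d x k projection matrix W (here via PCA) from an n x d data matrix."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T

def apply_linear_projection(X, mu, W):
    """Map samples to the k-dimensional subspace: Z = (X - mu) W."""
    return (X - mu) @ W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    mu, W = fit_linear_projection(X, k=2)
    Z = apply_linear_projection(X, mu, W)
    print(Z.shape)  # (200, 2)
```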
In this paper, we propose a new approach, called local and weighted maximum margin discriminant analysis (LWMMDA), to object discrimination. LWMMDA is a subspace learning method that identifies the underlying nonlinear manifold for discrimination. The goal of LWMMDA is to seek a transformation such that data points of different classes are…
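For context only, the snippet below runs classical linear discriminant analysis, the standard baseline of the discriminant-analysis family that LWMMDA extends; LWMMDA itself (locally weighted, maximum-margin, nonlinear-manifold) is not implemented here.

```python
# Classical LDA as a discriminant-subspace baseline, not the paper's LWMMDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 5)), rng.normal(2, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

lda = LinearDiscriminantAnalysis(n_components=1)
Z = lda.fit_transform(X, y)      # project onto the 1-D discriminant subspace
print(Z.shape, lda.score(X, y))  # projection shape and training accuracy
```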
This paper concerns a greedy EM algorithm for t-mixture modeling, which is more robust than Gaussian mixture modeling when atypical points exist or the data have heavy tails. A local Kullback-Leibler divergence is used to determine how to insert a new component. The greedy algorithm obviates complicated initialization. The results are comparable to those of…
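A simplified sketch of greedy mixture growing is given below: start from one component, repeatedly insert a new component where the current model fits worst, then re-run EM. The paper uses Student-t components and a local Kullback-Leibler criterion; this stand-in uses Gaussian components and a lowest-log-likelihood insertion heuristic, purely for illustration.

```python
# Greedy component insertion for a mixture model (Gaussian stand-in).
import numpy as np
from sklearn.mixture import GaussianMixture

def greedy_gmm(X, max_components=5, seed=0):
    model = GaussianMixture(n_components=1, random_state=seed).fit(X)
    for k in range(2, max_components + 1):
        # candidate location: the sample the current mixture explains worst
        worst = X[np.argmin(model.score_samples(X))]
        means_init = np.vstack([model.means_, worst])
        model = GaussianMixture(n_components=k, means_init=means_init,
                                random_state=seed).fit(X)
    return model

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-3, 1, (150, 2)), rng.normal(3, 1, (150, 2))])
    gm = greedy_gmm(X, max_components=3)
    print(np.round(gm.weights_, 3))
```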
Lasso-type variable selection has found increasingly wide use in machine learning. In this paper, an uncorrelated Lasso is proposed for variable selection, in which variable de-correlation is considered simultaneously with variable selection, so that the selected variables are as uncorrelated as possible. An effective iterative algorithm, with the proof…
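The paper's exact objective and iterative algorithm are not shown in the truncated abstract. The sketch below assumes one plausible formulation, min_w 0.5·||y − Xw||² + λ₁||w||₁ + λ₂·wᵀ|C|w with |C| the element-wise absolute sample correlation of the columns of X, and solves it with plain proximal gradient (ISTA); both the penalty and the solver are assumptions for illustration.

```python
# Hedged sketch of a Lasso variant with a de-correlation penalty, via ISTA.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def uncorrelated_lasso(X, y, lam1=0.1, lam2=0.1, n_iter=500):
    C = np.abs(np.corrcoef(X, rowvar=False))   # d x d correlation magnitudes
    w = np.zeros(X.shape[1])
    # step size from a Lipschitz bound on the smooth part of the objective
    L = np.linalg.norm(X, 2) ** 2 + 2 * lam2 * np.linalg.norm(C, 2)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) + 2 * lam2 * (C @ w)
        w = soft_threshold(w - grad / L, lam1 / L)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))
    X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=100)  # two highly correlated columns
    y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=100)
    print(np.round(uncorrelated_lasso(X, y), 2))
```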
Heteroscedastic discriminant analysis (HDA) with two-dimensional (2D) constraints is proposed in this paper. HDA suffers from the small-sample-size problem and becomes unstable when training data are scarce or the feature dimension is high, and can be unstable even when the dimensionality is in a suitable range. Two-dimensional HDA is first proposed; then we show that 2D methods are…
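The snippet below only illustrates the generic bilinear projection shared by 2D subspace methods, in which an image is kept in matrix form and projected from both sides as B = Uᵀ A V instead of being vectorized; the specific objective 2D-HDA optimizes for U and V is not reproduced, and random orthonormal matrices stand in for the learned projections.

```python
# Generic two-sided (2D) projection used by 2D subspace methods.
import numpy as np

def project_2d(A, U, V):
    """Bilinear projection of an m x n image matrix to a p x q feature matrix."""
    return U.T @ A @ V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(32, 32))                  # one "image" sample
    U, _ = np.linalg.qr(rng.normal(size=(32, 4)))  # stand-in left projection
    V, _ = np.linalg.qr(rng.normal(size=(32, 4)))  # stand-in right projection
    print(project_2d(A, U, V).shape)               # (4, 4)
```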
Without constructing an adjacency graph over neighborhoods, we propose a method to learn the similarity among sample points on a manifold for Laplacian embedding (LE), based on adding linear-reconstruction constraints and a least absolute shrinkage and selection operator (Lasso) type minimization. Two algorithms and the corresponding analyses are presented to learn the similarity for…
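A minimal sketch of the underlying idea, graph-free similarity from sparse linear reconstruction, is given below: each sample is reconstructed from all other samples under an L1 (Lasso-type) penalty, and the magnitudes of the reconstruction coefficients serve as similarities for Laplacian embedding. This is the generic L1-graph construction, not necessarily the two exact algorithms of the paper.

```python
# Affinity matrix from Lasso-type linear reconstruction, no k-NN graph needed.
import numpy as np
from sklearn.linear_model import Lasso

def l1_similarity(X, alpha=0.05):
    n = X.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # reconstruct sample i as a sparse combination of the other samples
        reg = Lasso(alpha=alpha, max_iter=5000).fit(X[others].T, X[i])
        S[i, others] = np.abs(reg.coef_)
    return (S + S.T) / 2.0   # symmetrize for use as a graph weight matrix

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 5))
    W = l1_similarity(X)
    print(W.shape, float(W.max()))
```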