Linear and quadratic discriminant analysis are considered in the small-sample, high-dimensional setting. Alternatives to the usual maximum likelihood (plug-in) estimates for the covariance matrices …
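A minimal sketch of the idea behind such alternatives, assuming a simple shrinkage-toward-identity estimator; the function name, the mixing parameter `lam`, and the toy data are illustrative, not the paper's exact proposal:

```python
import numpy as np

def shrunk_covariance(X, lam=0.5):
    """Blend the maximum likelihood (plug-in) covariance estimate with a
    scaled identity target: (1 - lam) * S + lam * (tr(S)/p) * I.
    lam=0 gives the plug-in estimate; lam=1 gives a spherical estimate."""
    S = np.cov(X, rowvar=False, bias=True)   # maximum likelihood estimate
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)
    return (1.0 - lam) * S + lam * target

# Small-sample, relatively high-dimensional toy data: 5 observations, 4 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
S_ml = np.cov(X, rowvar=False, bias=True)
S_reg = shrunk_covariance(X, lam=0.5)
```

Shrinking toward a multiple of the identity pulls the extreme sample eigenvalues toward their mean, so `S_reg` is better conditioned (and invertible) even when the number of observations barely exceeds the dimension, which is what a plug-in discriminant rule needs.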

Gradient boosting constructs additive regression models by sequentially fitting a simple parameterized function (base learner) to current "pseudo"-residuals by least-squares at each iteration. The …
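The loop described above can be sketched for squared-error loss, where the pseudo-residuals are simply `y - F(x)`; the stump base learner, the shrinkage parameter `nu`, and the toy data are assumptions for illustration:

```python
import numpy as np

def fit_stump(x, r):
    """Least-squares fit of a single-split base learner (stump) to residuals r."""
    best_err, best = np.inf, None
    for s in np.unique(x)[:-1]:
        left = x <= s
        cl, cr = r[left].mean(), r[~left].mean()
        err = ((r[left] - cl) ** 2).sum() + ((r[~left] - cr) ** 2).sum()
        if err < best_err:
            best_err, best = err, (s, cl, cr)
    return best

def predict_stump(stump, x):
    s, cl, cr = stump
    return np.where(x <= s, cl, cr)

def gradient_boost(x, y, n_iter=100, nu=0.1):
    """Each round fits the base learner to the current pseudo-residuals,
    which for squared-error loss are y - F(x), then takes a damped step."""
    f0 = float(y.mean())
    F = np.full(len(y), f0)
    stumps = []
    for _ in range(n_iter):
        resid = y - F                       # current pseudo-residuals
        stump = fit_stump(x, resid)         # least-squares fit of base learner
        F = F + nu * predict_stump(stump, x)
        stumps.append(stump)
    return f0, nu, stumps

def boost_predict(model, x):
    f0, nu, stumps = model
    return f0 + nu * sum(predict_stump(s, x) for s in stumps)

# Toy 1-D regression: a step function plus noise.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = (x > 0.5).astype(float) + rng.normal(0, 0.05, 200)
model = gradient_boost(x, y)
mse = ((boost_predict(model, x) - y) ** 2).mean()
```

The training error drops well below that of the constant fit `y.mean()`, since each stump removes a little more of the residual structure.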

An algorithm and data structure are presented for searching a file containing N records, each described by k real-valued keys, for the m closest matches or nearest neighbors to a given query record. …
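A minimal sketch of the kind of structure involved, assuming a kd-tree with median splits and branch-and-bound search for the single nearest neighbor (the paper's algorithm handles m neighbors and is more refined):

```python
def build_kdtree(points, depth=0):
    """Recursively split on the median along cycling coordinate axes."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    """Branch-and-bound descent: prune a subtree whenever the splitting
    plane lies farther away than the best distance found so far."""
    if node is None:
        return best
    d = sum((a - b) ** 2 for a, b in zip(node["point"], query))
    if best is None or d < best[0]:
        best = (d, node["point"])
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff <= 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    if diff ** 2 < best[0]:   # could the far side hold a closer point?
        best = nearest(far, query, best)
    return best

pts = [(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0), (8.0, 1.0), (7.0, 2.0)]
tree = build_kdtree(pts)
dist2, point = nearest(tree, (9.0, 2.0))   # squared distance and point
```

The pruning test is what gives the expected sublinear query time: whole subtrees are skipped once the current best ball cannot cross their splitting plane.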

An algorithm for the analysis of multivariate data is presented, and discussed in terms of specific examples. The algorithm seeks to find one- and two-dimensional linear projections of multivariate …
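A crude sketch of the search for an "interesting" one-dimensional projection. The index here (absolute excess kurtosis as a departure-from-normality measure) and the random-direction optimizer are stand-ins; the original projection-pursuit index combines a spread term with a local-density term and is optimized more carefully:

```python
import numpy as np

def projection_index(z):
    """Hypothetical interestingness index for a 1-D projection:
    absolute excess kurtosis, i.e. departure from normality."""
    z = (z - z.mean()) / z.std()
    return abs((z ** 4).mean() - 3.0)

def pursue(X, n_trials=2000, seed=0):
    """Crude optimizer: evaluate the index over random unit directions
    and keep the best one-dimensional projection found."""
    rng = np.random.default_rng(seed)
    best_score, best_dir = -np.inf, None
    for _ in range(n_trials):
        a = rng.normal(size=X.shape[1])
        a /= np.linalg.norm(a)
        score = projection_index(X @ a)
        if score > best_score:
            best_score, best_dir = score, a
    return best_dir, best_score

# Toy data: one bimodal coordinate hidden among Gaussian noise coordinates.
rng = np.random.default_rng(1)
n = 500
bimodal = np.where(rng.random(n) < 0.5, -2.0, 2.0) + rng.normal(0, 0.2, n)
X = np.column_stack([bimodal, rng.normal(size=n), rng.normal(size=n)])
direction, score = pursue(X)
```

The recovered direction concentrates on the first coordinate, since a bimodal projection departs from normality far more than any mixture of the Gaussian noise coordinates.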

In regression analysis the response variable Y and the predictor variables X1, ..., Xp are often replaced by functions θ(Y) and φ1(X1), ..., φp(Xp). We discuss a procedure for estimating those functions …
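A stripped-down sketch of alternating conditional expectations for a single discrete predictor, where each conditional expectation is estimated by a group mean; the full procedure uses data smoothers for continuous variables, and the function name and toy data are assumptions:

```python
import numpy as np

def ace(x, y, n_iter=20):
    """Alternate between phi(x) = E[theta(Y) | X = x] and
    theta(y) = E[phi(X) | Y = y], rescaling theta to unit variance.
    Conditional expectations are estimated by per-value group means."""
    theta = (y - y.mean()) / y.std()        # initial transform of Y
    phi = np.zeros_like(theta)
    for _ in range(n_iter):
        # per-sample value of phi at that sample's x category
        phi = np.array([theta[x == v].mean() for v in x])
        # per-sample value of theta at that sample's y category
        theta = np.array([phi[y == v].mean() for v in y])
        theta = (theta - theta.mean()) / theta.std()
    return theta, phi

# Toy data: Y is a deterministic, non-monotone function of X (Y = X^2),
# so suitable transforms can make the relationship perfectly linear.
x = np.tile(np.array([-2, -1, 0, 1, 2]), 10)
y = (x ** 2).astype(float)
theta, phi = ace(x, y)
```

Because Y is a function of X here, the estimated transforms reach correlation essentially 1, which is the maximal-correlation criterion the alternating scheme is driving toward.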

The K-nearest-neighbor decision rule assigns an object of unknown class to the plurality class among the K labeled "training" objects that are closest to it. Closeness is usually defined in terms of …
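The rule as stated can be sketched directly, assuming Euclidean distance as the closeness measure and a small hypothetical two-cluster dataset:

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, query, k=3):
    """Assign the plurality class among the k training objects closest
    to the query; closeness here is Euclidean distance."""
    dist = np.linalg.norm(X_train - np.asarray(query), axis=1)
    neighbors = np.argsort(dist)[:k]
    votes = Counter(y_train[i] for i in neighbors)
    return votes.most_common(1)[0][0]

# Two well-separated clusters with labels "a" and "b".
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [3.0, 3.0], [3.1, 2.9], [2.9, 3.1]])
labels = ["a", "a", "a", "b", "b", "b"]
pred = knn_classify(X, labels, (0.3, 0.3), k=3)
```

A query near the first cluster picks up three "a" neighbors and is assigned class "a"; the choice of distance metric is exactly the issue the abstract goes on to discuss.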

Lazy learning algorithms, exemplified by nearest-neighbor algorithms, do not induce a concise hypothesis from a given training set; the inductive process is delayed until a test instance is given. …