Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds


The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation. Here we describe locally linear embedding (LLE), an unsupervised learning algorithm that computes low dimensional, neighborhood preserving embeddings of high dimensional data. The data, assumed to be sampled from an underlying manifold, are mapped into a single global coordinate system of lower dimensionality. The mapping is derived from the symmetries of locally linear reconstructions, and the actual computation of the embedding reduces to a sparse eigenvalue problem. Notably, the optimizations in LLE, though capable of generating highly nonlinear embeddings, are simple to implement and do not involve local minima. In this paper, we describe the implementation of the algorithm in detail and discuss several extensions that enhance its performance. We present results of the algorithm applied to data sampled from known manifolds, as well as to collections of images of faces, lips, and handwritten digits. These examples provide extensive illustrations of the algorithm's performance, both successes and failures, and relate the algorithm to previous and ongoing work in nonlinear dimensionality reduction.
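The two stages sketched in the abstract, solving for locally linear reconstruction weights and then finding the embedding via an eigenvalue problem, can be illustrated with a minimal NumPy implementation. This is a hedged sketch of the general LLE recipe, not the authors' reference code: the function name `lle`, the brute-force neighbor search, and the regularization constant `reg` are illustrative choices, and a practical implementation would use sparse matrices and a sparse eigensolver.

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal LLE sketch: reconstruct each point from its nearest
    neighbors, then find a low-dimensional embedding that preserves
    the same reconstruction weights."""
    n = X.shape[0]
    # Brute-force pairwise squared distances; column 0 of the argsort
    # is each point itself, so skip it when selecting neighbors.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    neighbors = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]

    # Stage 1: solve for reconstruction weights W, one row per point,
    # constrained so each row sums to 1 (invariance to translation).
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[neighbors[i]] - X[i]                    # center neighbors on x_i
        C = Z @ Z.T                                   # local Gram matrix
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize if singular
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, neighbors[i]] = w / w.sum()

    # Stage 2: the embedding is given by the bottom eigenvectors of
    # M = (I - W)^T (I - W); the eigenvector with eigenvalue ~0 is the
    # constant vector and is discarded.
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    _, vecs = np.linalg.eigh(M)                       # eigenvalues ascending
    return vecs[:, 1:n_components + 1]
```

For example, points sampled along a one-dimensional spiral in three dimensions can be unrolled to a single coordinate with `lle(X, n_neighbors=5, n_components=1)`. Because both stages reduce to linear algebra (a small linear solve per point, then one eigendecomposition), there are no local minima, which is the property the abstract highlights.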
