
- Sam T. Roweis, Lawrence K. Saul
- Science
- 2000

Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes… (More)

- Rob Fergus, Barun Singh, Aaron Hertzmann, Sam T. Roweis, William T. Freeman
- ACM Trans. Graph.
- 2006

Camera shake during exposure leads to objectionable image blur and ruins many photographs. Conventional blind deconvolution methods typically assume frequency-domain constraints on images, or overly simplified parametric forms for the motion path during camera shake. Real camera motions can follow convoluted paths, and a spatial domain prior can better… (More)

In this paper we propose a novel method for learning a Mahalanobis distance measure to be used in the KNN classification algorithm. The algorithm directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. It can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization and fast… (More)
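The objective this abstract describes — the expected leave-one-out KNN accuracy under a stochastic (softmax) neighbor rule — can be written down compactly. Below is a minimal sketch of that objective in NumPy; the function name `nca_score`, the use of a full linear map `A`, and all parameter choices are illustrative assumptions, not the paper's code, and in practice the score would be maximized over `A` with a gradient-based optimizer.

```python
import numpy as np

def nca_score(A, X, y):
    """Expected leave-one-out KNN accuracy under stochastic neighbor
    selection, i.e. the kind of objective the abstract describes.
    A: (k, d) linear transform; X: (n, d) data; y: (n,) labels."""
    Z = X @ A.T                                     # project into learned space
    d2 = np.sum((Z[:, None] - Z[None, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)                    # a point never picks itself
    P = np.exp(-d2)                                 # softmax neighbor probabilities
    P /= P.sum(axis=1, keepdims=True)
    same = (y[:, None] == y[None, :])               # same-class indicator
    return np.sum(P * same) / len(y)                # fraction expected correct
```

With well-separated classes even the identity transform scores near 1; the point of learning `A` is to push poorly separated data toward that regime.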

The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation. Here we describe locally linear embedding (LLE), an unsupervised learning algorithm that computes low dimensional, neighborhood preserving embeddings of… (More)
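The three-step recipe behind LLE (find neighbors, solve for locally linear reconstruction weights, take bottom eigenvectors of the resulting quadratic form) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the Euclidean neighbor rule, the trace-scaled regularizer, and all default values are my assumptions.

```python
import numpy as np

def lle(X, n_neighbors=5, n_components=2, reg=1e-3):
    """Minimal locally linear embedding sketch. X: (n, d) data array."""
    n = X.shape[0]
    # Step 1: k nearest neighbors by Euclidean distance (excluding self).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = np.argsort(d, axis=1)[:, :n_neighbors]
    # Step 2: weights that best reconstruct each point from its
    # neighbors, constrained to sum to one.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[knn[i]] - X[i]                        # center neighbors on x_i
        C = Z @ Z.T                                 # local covariance
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize for stability
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, knn[i]] = w / w.sum()
    # Step 3: embedding from the bottom eigenvectors of
    # M = (I - W)^T (I - W), discarding the constant eigenvector.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]
```

Because the embedding comes from a single sparse eigenproblem, there is no iterative optimization and no local minima, which is one of the algorithm's selling points.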

- Geoffrey E. Hinton, Sam T. Roweis
- NIPS
- 2002

We describe a probabilistic approach to the task of placing objects, described by high-dimensional vectors or by pairwise dissimilarities, in a low-dimensional space in a way that preserves neighbor identities. A Gaussian is centered on each object in the high-dimensional space and the densities under this Gaussian (or the given dissimilarities) are used to… (More)
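The approach described here (stochastic neighbor embedding) can be sketched as follows: Gaussian neighbor probabilities in the high-dimensional space, matching probabilities in the low-dimensional map, and gradient descent on the sum of KL divergences between them. The fixed bandwidth `sigma`, plain gradient descent, and all hyperparameters below are simplifying assumptions of mine; the paper adapts the bandwidths per point.

```python
import numpy as np

def sne(X, n_components=2, sigma=1.0, lr=0.1, n_iter=200, seed=0):
    """Minimal stochastic neighbor embedding sketch. X: (n, d) data."""
    n = X.shape[0]
    # High-dimensional conditional neighbor probabilities p_{j|i}.
    d2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    P = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    P /= P.sum(axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    Y = 1e-2 * rng.normal(size=(n, n_components))   # small random init
    for _ in range(n_iter):
        # Low-dimensional neighbor probabilities q_{j|i}.
        e2 = np.sum((Y[:, None] - Y[None, :]) ** 2, axis=-1)
        Q = np.exp(-e2)
        np.fill_diagonal(Q, 0.0)
        Q /= Q.sum(axis=1, keepdims=True)
        # Gradient of sum_i KL(P_i || Q_i):
        # dC/dy_i = 2 * sum_j (p_{j|i} - q_{j|i} + p_{i|j} - q_{i|j}) (y_i - y_j)
        PQ = (P - Q) + (P - Q).T
        grad = 2.0 * np.sum(PQ[:, :, None] * (Y[:, None] - Y[None, :]), axis=1)
        Y -= lr * grad
    return Y
```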

- Amir Globerson, Sam T. Roweis
- NIPS
- 2005

We present an algorithm for learning a quadratic Gaussian metric (Mahalanobis distance) for use in classification tasks. Our method relies on the simple geometric intuition that a good metric is one under which points in the same class are simultaneously near each other and far from points in the other classes. We construct a convex optimization problem… (More)

- Sam T. Roweis
- NIPS
- 1997

I present an expectation-maximization (EM) algorithm for principal component analysis (PCA). The algorithm allows a few eigenvectors and eigenvalues to be extracted from large collections of high dimensional data. It is computationally very efficient in space and time. It also naturally accommodates missing information. I also introduce a new variant of PC… (More)
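The two EM steps for PCA alternate between projecting the data onto the current subspace estimate and re-fitting the subspace to those projections, never forming the full covariance matrix. A minimal NumPy sketch, under my own assumptions (zero-mean columns, no missing data, dense solves; the paper's appeal is that these steps also scale to the sparse and missing-data cases):

```python
import numpy as np

def em_pca(X, k, n_iter=50, seed=0):
    """EM-for-PCA sketch. X: (d, n) with zero-mean columns; returns an
    orthonormal (d, k) basis for the leading principal subspace."""
    d, n = X.shape
    rng = np.random.default_rng(seed)
    C = rng.normal(size=(d, k))                     # random initial subspace
    for _ in range(n_iter):
        # E-step: latent coordinates of the data in the current subspace.
        Z = np.linalg.solve(C.T @ C, C.T @ X)
        # M-step: re-fit the subspace to those coordinates.
        C = (X @ Z.T) @ np.linalg.inv(Z @ Z.T)
    Q, _ = np.linalg.qr(C)                          # orthonormalize the basis
    return Q
```

Each iteration costs O(dnk) rather than the O(d^2 n) of building a covariance matrix, which is the source of the claimed efficiency when only a few components are needed.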

- Erik Winfree, John N. Abelson, +8 authors Laura Rodríguez
- 1998

How can molecules compute? In his early studies of reversible computation, Bennett imagined an enzymatic Turing Machine which modified a hetero-polymer (such as DNA) to perform computation with asymptotically low energy expenditures. Adleman’s recent experimental demonstration of a DNA computation, using an entirely different approach, has led to a wealth… (More)

- Sam T. Roweis, Zoubin J. C. Ghahramani
- Neural Computation
- 1999

Factor analysis, principal component analysis, mixtures of Gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and… (More)

This thesis is a detailed investigation into the following question: how much data must an agent collect in order to perform “reinforcement learning” successfully? This question is analogous to the classical issue of the sample complexity in supervised learning, but is harder because of the increased realism of the reinforcement learning setting. This… (More)