
- Xiaojin Zhu, Zoubin Ghahramani, John D. Lafferty
- ICML
- 2003

An approach to semi-supervised learning is proposed that is based on a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The learning problem is then formulated in terms of a Gaussian random field on this graph, where the mean of the field is…
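The Gaussian-field formulation above has a well-known closed form: the minimum-energy (harmonic) values on the unlabeled vertices are obtained by solving a linear system in the graph Laplacian. A minimal sketch on a made-up four-node graph (the weights and labels here are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical toy graph: nodes 0, 1 are labeled; nodes 2, 3 are unlabeled.
# W is a symmetric weight matrix encoding pairwise similarity.
W = np.array([
    [0.0, 0.1, 1.0, 0.1],
    [0.1, 0.0, 0.1, 1.0],
    [1.0, 0.1, 0.0, 0.5],
    [0.1, 1.0, 0.5, 0.0],
])
f_l = np.array([1.0, 0.0])          # known labels of nodes 0 and 1

D = np.diag(W.sum(axis=1))          # degree matrix
L = D - W                           # combinatorial graph Laplacian

# Partition L into labeled (l) and unlabeled (u) blocks; the harmonic
# field on the unlabeled nodes solves  L_uu f_u = -L_ul f_l.
L_uu = L[2:, 2:]
L_ul = L[2:, :2]
f_u = -np.linalg.solve(L_uu, L_ul @ f_l)
print(f_u)   # values lie in [0, 1]; node 2 leans toward node 0's label
```

Node 2, which is tied strongly to the labeled node 0, receives a value near that node's label; node 3 mirrors node 1.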

- Zoubin Ghahramani, Michael I. Jordan
- NIPS
- 1995

We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved…

- Thomas L. Griffiths, Zoubin Ghahramani
- NIPS
- 2005

We define a probability distribution over equivalence classes of binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in probabilistic models that represent objects using a potentially infinite array of features. We identify a simple generative process that results in the same…
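The simple generative process mentioned in the abstract is usually described as an "Indian buffet": customer i takes each previously sampled dish k with probability m_k / i (m_k being how many earlier customers took it), then tries a Poisson(α / i) number of new dishes. A hedged sketch of that process (function name and parameters are illustrative):

```python
import numpy as np

def sample_ibp(alpha, num_customers, rng):
    """Sample a binary feature (customer x dish) matrix from the
    Indian-buffet generative process."""
    counts = []          # counts[k] = number of customers who took dish k
    rows = []            # rows[i] = set of dish indices customer i took
    for i in range(1, num_customers + 1):
        # take each existing dish k with probability counts[k] / i
        taken = {k for k, m in enumerate(counts) if rng.random() < m / i}
        # then sample Poisson(alpha / i) brand-new dishes
        for _ in range(rng.poisson(alpha / i)):
            taken.add(len(counts))
            counts.append(0)
        for k in taken:
            counts[k] += 1
        rows.append(taken)
    # dense binary matrix: rows = customers, columns = dishes
    Z = np.zeros((num_customers, len(counts)), dtype=int)
    for i, taken in enumerate(rows):
        Z[i, list(taken)] = 1
    return Z

Z = sample_ibp(alpha=2.0, num_customers=10, rng=np.random.default_rng(0))
print(Z.shape[0])   # 10 customers; the column count is random
```

The number of rows is fixed while the number of columns (features) is unbounded, matching the distribution the abstract describes.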

- Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, Lawrence K. Saul
- Machine Learning
- 1999

This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields). We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in…

- Edward Snelson, Zoubin Ghahramani
- NIPS
- 2005

We present a new Gaussian process (GP) regression model whose covariance is parameterized by the locations of M pseudo-input points, which we learn by a gradient-based optimization. We take M ≪ N, where N is the number of real data points, and hence obtain a sparse regression method which has O(M²N) training cost and O(M²) prediction cost per test…

We investigate the use of unlabeled data to help labeled data in classification. We propose a simple iterative algorithm, label propagation, to propagate labels through the dataset along high density areas defined by unlabeled data. We analyze the algorithm, show its solution, and its connection to several other algorithms. We also show how to learn…
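The iterative algorithm described above can be sketched in a few lines: repeatedly diffuse the label matrix through a row-normalized similarity graph while clamping the labeled points. A minimal toy example (the 1-D data, RBF bandwidth, and iteration count are made up for illustration):

```python
import numpy as np

# Hypothetical 1-D data: two dense clusters, one labeled point in each.
X = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])
labels = np.full(6, -1)        # -1 = unlabeled
labels[0], labels[5] = 0, 1    # one seed label per cluster

# Row-normalized transition matrix from an RBF similarity graph.
W = np.exp(-(X[:, None] - X[None, :]) ** 2 / 0.05)
P = W / W.sum(axis=1, keepdims=True)

# Label matrix: one column per class; labeled rows are clamped each step.
F = np.zeros((6, 2))
F[labels == 0, 0] = 1.0
F[labels == 1, 1] = 1.0
clamp = F[labels >= 0].copy()

for _ in range(200):           # iterate F <- P F, re-clamping the seeds
    F = P @ F
    F[labels >= 0] = clamp

pred = F.argmax(axis=1)
print(pred)                    # -> [0 0 0 1 1 1]
```

Labels flow along the high-density regions: each cluster ends up with its seed's label, which is the behavior the abstract describes.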

- David A. Cohn, Zoubin Ghahramani, Michael I. Jordan
- NIPS
- 1994

For many types of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992; Cohn, 1994]. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally…

This thesis is a detailed investigation into the following question: how much data must an agent collect in order to perform "reinforcement learning" successfully? This question is analogous to the classical issue of the sample complexity in supervised learning, but is harder because of the increased realism of the reinforcement learning setting. This…

- Jure Leskovec, Deepayan Chakrabarti, Jon M. Kleinberg, Christos Faloutsos, Zoubin Ghahramani
- Journal of Machine Learning Research
- 2010

How can we generate realistic networks? In addition, how can we do so with a mathematically tractable model that allows for rigorous analysis of network properties? Real networks exhibit a long list of surprising properties: heavy tails for the in- and out-degree distributions, heavy tails for the eigenvalues and eigenvectors, small diameters, and…
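The tractable generator this line of work studies is the Kronecker graph: start from a small initiator matrix of edge probabilities, take its repeated Kronecker power, and sample each edge independently. A hedged sketch (the 2×2 initiator values and the power k = 4 are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2x2 initiator matrix of edge probabilities.
K1 = np.array([[0.9, 0.5],
               [0.5, 0.3]])

# The k-fold Kronecker power yields a 2^k x 2^k matrix of edge probabilities.
P = K1.copy()
for _ in range(3):             # k = 4 -> 16 nodes
    P = np.kron(P, K1)

# Sample each edge independently to get a 0/1 adjacency matrix.
A = (rng.random(P.shape) < P).astype(int)
print(A.shape)                 # -> (16, 16)
```

Because the probability matrix is a Kronecker power of a tiny initiator, properties such as the degree distribution can be analyzed rigorously while the graph itself remains cheap to sample.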

Factor analysis, a statistical method for modeling the covariance structure of high-dimensional data using a small number of latent variables, can be extended by allowing different local factor models in different regions of the input space. This results in a model which concurrently performs clustering and dimensionality reduction, and can be thought of as…
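The generative view of such a mixture of factor analyzers: pick a component j, then emit x = μ_j + Λ_j z + ε with a low-dimensional latent factor z. A minimal sampling sketch (all parameter values here are hypothetical, chosen only to show the model's structure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture of two factor analyzers in 3-D, one latent factor each.
mus = [np.zeros(3), np.full(3, 5.0)]                    # component means
lambdas = [np.array([[1.0], [0.5], [0.2]]),             # factor loadings
           np.array([[0.2], [0.5], [1.0]])]
psi = 0.1 * np.eye(3)        # diagonal noise covariance (shared for brevity)

def sample_mfa(n):
    xs = []
    for _ in range(n):
        j = rng.integers(2)                       # cluster assignment
        z = rng.standard_normal(1)                # latent factor, z ~ N(0, I)
        eps = rng.multivariate_normal(np.zeros(3), psi)
        xs.append(mus[j] + lambdas[j] @ z + eps)  # x = mu_j + Lambda_j z + eps
    return np.array(xs)

X = sample_mfa(500)
print(X.shape)    # -> (500, 3): clustered, locally low-dimensional data
```

Each component is a local low-dimensional subspace, so fitting this model clusters the data and reduces dimensionality at the same time, as the abstract notes.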