
- Xiaojin Zhu, Zoubin Ghahramani, John D. Lafferty
- ICML
- 2003

An approach to semi-supervised learning is proposed that is based on a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The learning problem is then formulated in terms of a Gaussian random field on this graph, where the mean of the field is…
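
A minimal sketch of the harmonic solution this abstract describes, on a toy one-dimensional dataset (the RBF edge weights, the data, and all variable names here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Toy data: two labeled clusters plus unlabeled points in between.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.3, 5), rng.normal(2, 0.3, 5),
                    rng.uniform(-3, 3, 20)])
y_l = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=float)  # first 10 are labeled
l = len(y_l)

W = np.exp(-(x[:, None] - x[None, :]) ** 2)   # edge weights (similarities)
D = np.diag(W.sum(axis=1))
P = np.linalg.solve(D, W)                     # row-normalized transition matrix

# Harmonic solution on the unlabeled block: f_u = (I - P_uu)^{-1} P_ul f_l
P_uu, P_ul = P[l:, l:], P[l:, :l]
f_u = np.linalg.solve(np.eye(len(x) - l) - P_uu, P_ul @ y_l)
pred = (f_u > 0.5).astype(int)                # threshold the field at 1/2
```

By the maximum principle, each `f_u` value is a convex combination of the labeled boundary values, so it stays in [0, 1].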

- Zoubin Ghahramani, Michael I. Jordan
- Machine Learning
- 1995

Hidden Markov models (HMMs) have proven to be one of the most widely used tools for learning probabilistic models of time series data. In an HMM, information about the past is conveyed through a single discrete variable—the hidden state. We discuss a generalization of HMMs in which this state is factored into multiple state variables and is therefore…
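
The combinatorial point — that M independent chains of K values each are equivalent to a flat HMM over K^M joint states, whose transition matrix is the Kronecker product of the per-chain transitions — can be sketched as follows (the random transition matrices are placeholders, not anything from the paper):

```python
import numpy as np

K, M = 3, 2                 # K values per chain, M chains
rng = np.random.default_rng(3)

# One row-stochastic transition matrix per hidden chain.
chains = []
for _ in range(M):
    T = rng.random((K, K))
    chains.append(T / T.sum(axis=1, keepdims=True))

# Independent chains => the joint transition over K**M states is a
# Kronecker product of the per-chain matrices.
T_joint = chains[0]
for T in chains[1:]:
    T_joint = np.kron(T_joint, T)     # shape (K**M, K**M)
```

The exponential size of `T_joint` is exactly why exact inference in such models becomes expensive and structured approximations are attractive.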

- Thomas L. Griffiths, Zoubin Ghahramani
- NIPS
- 2005

We define a probability distribution over equivalence classes of binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in probabilistic models that represent objects using a potentially infinite array of features. We identify a simple generative process that results in the same…
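
A hedged sketch of the "Indian buffet" style generative process such a distribution admits: customer i takes each existing dish k with probability m_k / i (m_k = how many earlier customers took it), then samples a Poisson(α / i) number of new dishes. All parameter values and names below are illustrative:

```python
import numpy as np

def sample_ibp(num_customers, alpha, rng):
    dishes = []                          # m_k: count of customers per dish
    rows = []
    for i in range(1, num_customers + 1):
        # Take each existing dish with probability m_k / i.
        row = [rng.random() < m / i for m in dishes]
        for k, taken in enumerate(row):
            if taken:
                dishes[k] += 1
        # Try a Poisson(alpha / i) number of brand-new dishes.
        new = rng.poisson(alpha / i)
        dishes.extend([1] * new)
        row.extend([True] * new)
        rows.append(row)
    # Pad ragged rows into a binary feature matrix Z.
    Z = np.zeros((num_customers, len(dishes)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(10, alpha=2.0, rng=np.random.default_rng(1))
```

The number of columns is unbounded a priori but finite for any finite number of rows, matching the abstract's description.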

- Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, Lawrence K. Saul
- Machine Learning
- 1999

This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields). We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in…

- Edward Snelson, Zoubin Ghahramani
- NIPS
- 2005

We present a new Gaussian process (GP) regression model whose covariance is parameterized by the locations of M pseudo-input points, which we learn by gradient-based optimization. We take M ≪ N, where N is the number of real data points, and hence obtain a sparse regression method which has O(M²N) training cost and O(M²) prediction cost per test…

- D. M. Wolpert, Z. Ghahramani, M. I. Jordan
- Science
- 1995

On the basis of computational studies it has been proposed that the central nervous system internally simulates the dynamic behavior of the motor system in planning, control, and learning; the existence and use of such an internal model is still under debate. A sensorimotor integration task was investigated in which participants estimated the location of…

- Jure Leskovec, Deepayan Chakrabarti, Jon M. Kleinberg, Christos Faloutsos, Zoubin Ghahramani
- Journal of Machine Learning Research
- 2010

How can we generate realistic networks? In addition, how can we do so with a mathematically tractable model that allows for rigorous analysis of network properties? Real networks exhibit a long list of surprising properties: heavy tails for the in- and out-degree distributions, heavy tails for the eigenvalues and eigenvectors, small diameters, and…
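
One tractable construction in this spirit is Kronecker-product growth: repeatedly taking the Kronecker power of a small initiator adjacency matrix yields a graph whose size grows exponentially while key spectral and degree properties remain analyzable. A minimal deterministic sketch (the 2×2 initiator is an arbitrary example, not a fitted model):

```python
import numpy as np

initiator = np.array([[1, 1],
                      [1, 0]])          # 2-node initiator adjacency matrix

# Three Kronecker products => initiator^{⊗4}, a 16-node graph.
A = initiator.copy()
for _ in range(3):
    A = np.kron(A, initiator)

degrees = A.sum(axis=1)                  # degree sequence of the grown graph
```

Because row sums multiply under the Kronecker product, the degrees are products of the initiator's row sums — here powers of two — which is how heavy-tailed degree distributions emerge from such recursion.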

- David A. Cohn, Zoubin Ghahramani, Michael I. Jordan
- NIPS
- 1994

For many types of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992; Cohn, 1994]. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally…

We investigate the use of unlabeled data to help labeled data in classification. We propose a simple iterative algorithm, label propagation, to propagate labels through the dataset along high density areas defined by unlabeled data. We analyze the algorithm, show its solution, and its connection to several other algorithms. We also show how to learn…
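
The iterative scheme described above can be sketched as follows: propagate label distributions through a row-normalized similarity matrix and clamp the labeled points after every step. The toy one-dimensional data and RBF similarity are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

# Two labeled points (classes 0 and 1) plus 30 unlabeled points.
rng = np.random.default_rng(2)
x = np.concatenate([[-2.0, 2.0], rng.uniform(-3, 3, 30)])
y = np.zeros((len(x), 2))
y[0, 0] = 1.0    # x = -2 labeled class 0
y[1, 1] = 1.0    # x = +2 labeled class 1

W = np.exp(-(x[:, None] - x[None, :]) ** 2)   # RBF similarities
P = W / W.sum(axis=1, keepdims=True)          # row-normalize

f = y.copy()
for _ in range(200):
    f = P @ f         # propagate label mass along the graph
    f[:2] = y[:2]     # clamp the labeled points
pred = f.argmax(axis=1)
```

The clamping step is what distinguishes this iteration from plain diffusion: the labeled points act as fixed sources, so the process converges to a labeling that follows the high-density regions of the unlabeled data.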

This thesis is a detailed investigation into the following question: how much data must an agent collect in order to perform "reinforcement learning" successfully? This question is analogous to the classical issue of sample complexity in supervised learning, but is harder because of the increased realism of the reinforcement learning setting. This…