# Michael I. Jordan


Latent Dirichlet Allocation

- D. Blei, A. Ng, Michael I. Jordan
- Computer Science
- J. Mach. Learn. Res.
- 3 January 2001

We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams [6], and…
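
The generative process the abstract refers to (draw a topic mixture per document, a topic per word, then a word from that topic's distribution) can be sketched as follows. The hyperparameter values and dimensions here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_topics, vocab_size, doc_len = 3, 10, 20
alpha = np.full(n_topics, 0.1)   # document-topic Dirichlet prior
eta = np.full(vocab_size, 0.01)  # topic-word Dirichlet prior

# Each topic is a distribution over the vocabulary.
topics = rng.dirichlet(eta, size=n_topics)

# Generate one document: draw its topic mixture, then each word's topic and word.
theta = rng.dirichlet(alpha)                     # per-document topic proportions
z = rng.choice(n_topics, size=doc_len, p=theta)  # topic assignment per word
words = np.array([rng.choice(vocab_size, p=topics[t]) for t in z])
```

Inference in the paper recovers the posterior over `theta` and `z` from observed `words`; this sketch shows only the forward model.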

On Spectral Clustering: Analysis and an algorithm

- A. Ng, Michael I. Jordan, Yair Weiss
- Computer Science
- NIPS
- 3 January 2001

Despite many empirical successes of spectral clustering methods—algorithms that cluster points using eigenvectors of matrices derived from the data—there are several unresolved issues. First, there…
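
The pipeline the abstract describes—build an affinity matrix from the data, normalize it, take its top eigenvectors, and cluster the renormalized rows—can be sketched as below. The Gaussian-kernel bandwidth and the farthest-point k-means initialization are illustrative choices, not details from the abstract:

```python
import numpy as np

def spectral_cluster(X, k, sigma=1.0, n_iter=50):
    """Sketch of eigenvector-based spectral clustering on points X (n x d)."""
    # Gaussian affinity matrix with zero diagonal.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    # Symmetric normalization: L = D^{-1/2} A D^{-1/2}.
    Dinv = 1.0 / np.sqrt(np.maximum(A.sum(1), 1e-12))
    L = Dinv[:, None] * A * Dinv[None, :]
    # Top-k eigenvectors, rows renormalized to unit length.
    _, V = np.linalg.eigh(L)
    Y = V[:, -k:]
    Y = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)
    # k-means on the embedded rows, with deterministic farthest-point init.
    centers = [Y[0]]
    for _ in range(1, k):
        dc = ((Y[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1).min(1)
        centers.append(Y[int(np.argmax(dc))])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Y[labels == j].mean(0)
    return labels
```

On well-separated groups the embedded rows collapse to near-distinct points, which is why plain k-means suffices in the final step.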

Hierarchical Dirichlet Processes

- Y. Teh, Michael I. Jordan, M. Beal, D. Blei
- Mathematics
- 1 December 2006

We consider problems involving groups of data where each observation within a group is a draw from a mixture model and where it is desirable to share mixture components between groups. We assume that…

Trust Region Policy Optimization

- John Schulman, S. Levine, P. Abbeel, Michael I. Jordan, P. Moritz
- Computer Science, Mathematics
- ICML
- 19 February 2015

In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a…

Graphical Models

Statistical applications in fields such as bioinformatics, information retrieval, speech processing, image processing and communications often involve large-scale models in which thousands or…

Learning Transferable Features with Deep Adaptation Networks

- Mingsheng Long, Y. Cao, J. Wang, Michael I. Jordan
- Computer Science, Mathematics
- ICML
- 9 February 2015

Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from…

Learning the Kernel Matrix with Semidefinite Programming

- G. Lanckriet, N. Cristianini, P. Bartlett, L. Ghaoui, Michael I. Jordan
- Mathematics, Computer Science
- J. Mach. Learn. Res.
- 8 July 2002

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by…

Distance Metric Learning with Application to Clustering with Side-Information

- E. Xing, A. Ng, Michael I. Jordan, S. Russell
- Computer Science
- NIPS
- 2002

Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many "plausible" ways, and if a clustering algorithm such as K-means…
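
The kind of metric this line of work learns is a Mahalanobis distance parameterized by a positive semidefinite matrix A. A minimal sketch of the effect, with hand-picked points and a hypothetical learned A (the paper learns A from side-information; it is hard-coded here only for illustration):

```python
import numpy as np

def mahalanobis(x, y, A):
    """Distance under a PSD matrix A: sqrt((x - y)^T A (x - y))."""
    d = x - y
    return float(np.sqrt(d @ A @ d))

x = np.array([0.0, 0.0])
y = np.array([2.0, 0.0])  # differs along the first feature
z = np.array([0.0, 1.0])  # differs along the second feature

I = np.eye(2)             # Euclidean metric: z is nearer to x than y is
A = np.diag([1.0, 10.0])  # hypothetical learned metric stressing feature 2:
                          # now y is nearer to x than z is
```

Reweighting the features this way is exactly what lets a fixed algorithm like K-means recover a different "plausible" clustering of the same data.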

High-Dimensional Continuous Control Using Generalized Advantage Estimation

- John Schulman, P. Moritz, S. Levine, Michael I. Jordan, P. Abbeel
- Computer Science, Mathematics
- ICLR
- 8 June 2015

Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function…
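
The generalized advantage estimator of this paper is an exponentially weighted sum of TD residuals, computable in one backward pass over a trajectory. A minimal sketch, assuming `values` carries a bootstrap value for the final state:

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """GAE(gamma, lambda): discounted sum of TD residuals.

    rewards has length T; values has length T + 1 (last entry bootstraps
    the value of the state after the final step).
    """
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual
        running = delta + gamma * lam * running
        adv[t] = running
    return adv
```

Setting `lam=0` recovers the one-step TD residual (low variance, high bias); `lam=1` recovers the discounted return minus the value baseline (high variance, low bias).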

Optimal feedback control as a theory of motor coordination

- E. Todorov, Michael I. Jordan
- Biology, Medicine
- Nature Neuroscience
- 1 November 2002

A central problem in motor control is understanding how the many biomechanical degrees of freedom are coordinated to achieve a common goal. An especially puzzling aspect of coordination is that…