
We describe an approach to domain adaptation that is appropriate exactly in the case when one has enough "target" data to do slightly better than just using only "source" data. Our approach is incredibly simple, easy to implement as a preprocessing step (10 lines of Perl!) and outperforms state-of-the-art approaches on a range of datasets. Moreover, it… (More)
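The preprocessing step this abstract alludes to is feature augmentation: every feature vector is copied into shared, source-only, and target-only blocks, so a standard classifier trained on the union of both domains can learn which features transfer. The sketch below is an illustrative Python rendition under that assumption (the paper's version is "10 lines of Perl"); the function name `augment` is my own.

```python
import numpy as np

def augment(X, domain):
    """Feature augmentation for domain adaptation (a sketch):
    each row becomes (shared, source-specific, target-specific) blocks."""
    zeros = np.zeros_like(X)
    if domain == "source":
        return np.hstack([X, X, zeros])   # shared copy + source-specific copy
    else:
        return np.hstack([X, zeros, X])   # shared copy + target-specific copy

X_src = np.ones((2, 3))
print(augment(X_src, "source").shape)  # (2, 9): feature dimension triples
```

After augmentation, source and target examples can simply be concatenated and fed to any off-the-shelf classifier, which is what makes the approach easy to bolt onto an existing pipeline.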

This paper presents a general multi-view feature extraction approach that we call Generalized Multiview Analysis or GMA. GMA has all the desirable properties required for cross-view classification and retrieval: it is supervised, it allows generalization to unseen classes, it is multi-view and kernelizable, it affords an efficient eigenvalue based solution… (More)

We present SEARN, an algorithm for integrating SEARch and lEARNing to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision. SEARN is a meta-algorithm that transforms these complex problems into simple classification problems to which any binary classifier may be applied. Unlike… (More)

We introduce a new Bayesian model for hierarchical clustering based on a prior over trees called Kingman's coalescent. We develop novel greedy and sequential Monte Carlo inferences which operate in a bottom-up agglomerative fashion. We show experimentally the superiority of our algorithms over the state-of-the-art, and demonstrate our approach in document… (More)

We propose a spectral clustering algorithm for the multi-view setting where we have access to multiple views of the data, each of which can be independently used for clustering. Our spectral clustering algorithm has a flavor of co-training, which is already a widely used idea in semi-supervised learning. We work on the assumption that the true underlying… (More)

Mappings to structured output spaces (strings, trees, partitions, etc.) are typically learned using extensions of classification algorithms to simple graphical structures (e.g., linear chains) in which search and parameter estimation can be performed exactly. Unfortunately, in many complex problems, it is rare that exact search or parameter estimation is… (More)

In many clustering problems, we have access to multiple views of the data, each of which could be individually used for clustering. Exploiting information from multiple views, one can hope to find a clustering that is more accurate than the ones obtained using the individual views. Often these different views admit the same underlying clustering of the data, so… (More)

- Jonathan David, Louis May, Erika Barragan-Nunez, Rahul Bhagat, Gully Burns, Hal Daumé +36 others
- 2010

Acknowledgments As I write these words I am overwhelmed that so many people have provided such a continuous force of encouragement, advice, and unrelenting positivity. Truly, I have been blessed to have them in my life. My advisor, Kevin Knight, was just about the perfect person to guide me along this path. He was ever tolerant of my irreverent, frequently… (More)

The most basic assumption used in statistical learning theory is that training data and test data are drawn from the same underlying distribution. Unfortunately, in many applications, the "in-domain" test data is drawn from a distribution that is related, but not identical, to the "out-of-domain" distribution of the training data. We consider the common… (More)

In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis… (More)
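The "linear combination of a finite number of underlying basis" tasks assumption amounts to factoring the matrix of task parameter vectors as W = LS, where the columns of L are shared basis tasks and each column of S holds one task's combination weights. The snippet below is a minimal illustrative sketch of that factorization only; the names (`L`, `S`) and dimensions are my own assumptions, not the paper's learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, T = 5, 2, 4                  # feature dim, number of bases, number of tasks

L = rng.standard_normal((d, k))    # shared basis tasks (one per column)
S = rng.standard_normal((k, T))    # per-task combination weights
W = L @ S                          # task parameter vectors, one per column

print(W.shape)  # (5, 4): one d-dimensional weight vector per task
```

Because every task vector lies in the k-dimensional span of the basis, the rank of W is at most k; learning L jointly is what lets tasks share information, while each task's own column of S keeps the sharing selective.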