We describe an approach to domain adaptation that is appropriate exactly in the case when one has enough "target" data to do slightly better than just using only "source" data. Our approach is incredibly simple, easy to implement as a preprocessing step (10 lines of Perl!) and outperforms state-of-the-art approaches on a range of datasets. Moreover, it…
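The preprocessing step this paper is known for is feature augmentation: every feature is copied into a shared block plus a source-specific or target-specific block, and the downstream classifier learns which copy to trust. Below is a minimal sketch of that idea in Python rather than the Perl the abstract mentions; the function name `augment` and the toy arrays are ours.

```python
import numpy as np

def augment(X, domain):
    """Map each row x to (x, x, 0) for source data or (x, 0, x) for
    target data: a shared block plus one domain-specific block, so the
    downstream learner can decide, feature by feature, what transfers."""
    zeros = np.zeros_like(X)
    if domain == "source":
        return np.hstack([X, X, zeros])
    if domain == "target":
        return np.hstack([X, zeros, X])
    raise ValueError("domain must be 'source' or 'target'")

# toy usage: stack augmented source and target rows, then train any classifier
Xs = np.array([[1.0, 0.0, 2.0], [0.5, 1.0, 0.0]])
Xt = np.array([[0.0, 1.0, 1.0]])
X_train = np.vstack([augment(Xs, "source"), augment(Xt, "target")])
print(X_train.shape)  # (3, 9): the original 3 features, tripled
```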
This paper presents a general multi-view feature extraction approach that we call Generalized Multiview Analysis or GMA. GMA has all the desirable properties required for cross-view classification and retrieval: it is supervised, it allows generalization to unseen classes, it is multi-view and kernelizable, it affords an efficient eigenvalue based solution…
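The "eigenvalue based solution" refers to solving a generalized eigenvalue problem. The sketch below shows only that machinery on the closely related (unsupervised) CCA special case, which GMA generalizes; GMA's actual supervised objective differs, and all variable names and the toy data here are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
# two "views" of the same 100 items (say, image and text features)
X = rng.standard_normal((100, 20))
Y = X @ rng.standard_normal((20, 15)) + 0.1 * rng.standard_normal((100, 15))
X -= X.mean(axis=0)
Y -= Y.mean(axis=0)

# between-view and (regularized) within-view covariances
Cxy = X.T @ Y / len(X)
Cxx = X.T @ X / len(X) + 1e-3 * np.eye(20)
Cyy = Y.T @ Y / len(Y) + 1e-3 * np.eye(15)

# generalized eigenproblem for the view-X projection:
#   Cxy Cyy^{-1} Cyx v = lambda Cxx v
A = Cxy @ np.linalg.solve(Cyy, Cxy.T)
vals, vecs = eigh(A, Cxx)   # eigenvalues in ascending order
W = vecs[:, ::-1][:, :5]    # top-5 projection directions for view X
Z = X @ W                   # 5-dimensional shared-space embedding
```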
We present SEARN, an algorithm for integrating SEARch and lEARNing to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision. SEARN is a meta-algorithm that transforms these complex problems into simple classification problems to which any binary classifier may be applied. Unlike…
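To make the reduction concrete, here is a toy SEARN-style loop for sequence labeling: roll in with the current policy, estimate each action's cost by rolling out, train a classifier on the induced examples, and stochastically interpolate it with the old policy. This is a simplified sketch under our own assumptions (cost-sensitive learning is approximated by picking the min-cost label; the task, features, and names are all ours), not the paper's exact algorithm.

```python
import random
import numpy as np
from sklearn.tree import DecisionTreeClassifier

random.seed(0)

# Toy structured task: label each character of a string with itself,
# so the oracle action at position t is simply y[t].
ALPHABET = list("abc")
DATA = [("abcab", "abcab"), ("bbcaa", "bbcaa"), ("cabcb", "cabcb")]

def features(x, t, prev):
    # tiny feature vector: id of current char and of previous prediction
    return [ALPHABET.index(x[t]), ALPHABET.index(prev) if prev else -1]

class Policy:
    """Stochastic mixture: with probability beta use the new classifier,
    otherwise defer to the previous policy (initially the oracle)."""
    def __init__(self, clf=None, old=None, beta=0.5):
        self.clf, self.old, self.beta = clf, old, beta

    def act(self, x, y, t, prev):
        if self.clf is None or (self.old and random.random() > self.beta):
            return self.old.act(x, y, t, prev) if self.old else y[t]
        return ALPHABET[self.clf.predict([features(x, t, prev)])[0]]

def rollout(policy, x, y, t, action, prefix):
    """Force `action` at position t, finish with the current policy,
    and return the Hamming loss of the completed sequence."""
    seq = prefix + [action]
    for u in range(t + 1, len(x)):
        seq.append(policy.act(x, y, u, seq[-1]))
    return sum(p != g for p, g in zip(seq, y))

policy = Policy()  # starts as the oracle
for _ in range(3):  # a few SEARN iterations
    feats, labels = [], []
    for x, y in DATA:
        prefix = []
        for t in range(len(x)):
            prev = prefix[-1] if prefix else None
            costs = [rollout(policy, x, y, t, a, prefix) for a in ALPHABET]
            feats.append(features(x, t, prev))
            labels.append(int(np.argmin(costs)))  # cost-sensitive -> multiclass
            prefix.append(policy.act(x, y, t, prev))  # roll in
    policy = Policy(clf=DecisionTreeClassifier().fit(feats, labels),
                    old=policy, beta=0.5)
```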
Mappings to structured output spaces (strings, trees, partitions, etc.) are typically learned using extensions of classification algorithms to simple graphical structures (e.g., linear chains) in which search and parameter estimation can be performed exactly. Unfortunately, in many complex problems, it is rare that exact search or parameter estimation is…
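For contrast with the approximate setting this abstract goes on to discuss, exact search in a linear chain is just Viterbi decoding. A standard sketch of that baseline (not code from the paper; scores here are random placeholders):

```python
import numpy as np

def viterbi(emissions, transitions):
    """Exact MAP decoding in a linear chain.

    emissions:   (T, K) score of each of K labels at each position
    transitions: (K, K) score of moving from label i to label j
    Returns the highest-scoring label sequence as a list of label ids.
    """
    T, K = emissions.shape
    score = emissions[0].copy()         # best score ending in each label
    back = np.zeros((T, K), dtype=int)  # backpointers
    for t in range(1, T):
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(1)
print(viterbi(rng.standard_normal((6, 3)), rng.standard_normal((3, 3))))
```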
Acknowledgments As I write these words I am overwhelmed that so many people have provided such a continuous force of encouragement, advice, and unrelenting positivity. Truly, I have been blessed to have them in my life. My advisor, Kevin Knight, was just about the perfect person to guide me along this path. He was ever tolerant of my irreverent, frequently…
The most basic assumption used in statistical learning theory is that training data and test data are drawn from the same underlying distribution. Unfortunately, in many applications, the "in-domain" test data is drawn from a distribution that is related, but not identical, to the "out-of-domain" distribution of the training data. We consider the common…
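A toy illustration of why that assumption matters: a classifier fit on "out-of-domain" samples loses accuracy on a related but shifted "in-domain" distribution. This is a generic demonstration with synthetic data, not the model proposed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift):
    # two Gaussian classes; `shift` translates the whole distribution,
    # mimicking a related-but-not-identical test distribution
    y = rng.integers(0, 2, n)
    X = rng.standard_normal((n, 2)) + np.where(y[:, None] == 1, 1.5, -1.5) + shift
    return X, y

X_out, y_out = sample(500, shift=0.0)  # "out-of-domain" training data
X_in, y_in = sample(500, shift=2.0)    # shifted "in-domain" test data

clf = LogisticRegression().fit(X_out, y_out)
print("accuracy on out-of-domain data:", clf.score(X_out, y_out))
print("accuracy on in-domain data:    ", clf.score(X_in, y_in))
```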
In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis…
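The parametrization in question is a factorization W = L S: stacking the task weight vectors as the columns of W, each column is L s_t for a small basis matrix L and a coefficient vector s_t. Below is a minimal alternating least-squares sketch of fitting that factorization on synthetic data; the paper's actual formulation additionally imposes sparsity on the coefficients, and the dimensions, ridge penalties, and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, T, n = 10, 3, 6, 50  # feature dim, #basis tasks, #tasks, examples/task

# ground truth: each task weight vector is a sparse combination of k bases
L_true = rng.standard_normal((d, k))
S_true = rng.standard_normal((k, T)) * (rng.random((k, T)) < 0.5)
X = [rng.standard_normal((n, d)) for _ in range(T)]
Y = [X[t] @ (L_true @ S_true[:, t]) + 0.01 * rng.standard_normal(n)
     for t in range(T)]

# alternating ridge updates: fix L, solve each s_t; then fix S, solve L
L = rng.standard_normal((d, k))
S = np.zeros((k, T))
for _ in range(20):
    for t in range(T):
        Z = X[t] @ L  # task t's design matrix in basis coordinates
        S[:, t] = np.linalg.solve(Z.T @ Z + 0.1 * np.eye(k), Z.T @ Y[t])
    # normal equations for vec(L): sum_t (s_t s_t^T) kron (X_t^T X_t)
    A = sum(np.kron(np.outer(S[:, t], S[:, t]), X[t].T @ X[t]) for t in range(T))
    b = sum(np.kron(S[:, t], X[t].T @ Y[t]) for t in range(T))
    L = np.linalg.solve(A + 0.1 * np.eye(d * k), b).reshape(k, d).T

print("task-0 residual:", np.linalg.norm(X[0] @ (L @ S[:, 0]) - Y[0]))
```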