Michael Collins

We introduce a globally normalized transition-based neural network model that achieves state-of-the-art part-of-speech tagging, dependency parsing and sentence compression results. Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models. The key …
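The transition-system view the abstract refers to can be illustrated with a tiny arc-standard parser. The sketch below is an assumption-laden toy: the action names, the greedy decoder, and the zero-valued score_actions() stub stand in for the paper's feed-forward network and globally normalized beam search.

```python
# Minimal sketch of an arc-standard transition system for dependency parsing.
# score_actions() is a stand-in for a feed-forward network over state features;
# the paper's beam search and global normalization are not reproduced here.

SHIFT, LEFT_ARC, RIGHT_ARC = "shift", "left-arc", "right-arc"

def legal_actions(stack, buffer):
    actions = []
    if buffer:
        actions.append(SHIFT)
    if len(stack) >= 2 and stack[-2] != 0:   # ROOT (index 0) may not be a dependent
        actions.append(LEFT_ARC)
    if len(stack) >= 2:
        actions.append(RIGHT_ARC)
    return actions

def apply_action(action, stack, buffer, arcs):
    if action == SHIFT:
        stack.append(buffer.pop(0))
    elif action == LEFT_ARC:                 # top becomes head of second-from-top
        dep = stack.pop(-2)
        arcs.append((stack[-1], dep))
    elif action == RIGHT_ARC:                # second-from-top becomes head of top
        dep = stack.pop()
        arcs.append((stack[-1], dep))

def score_actions(stack, buffer, actions):
    # Placeholder scorer: a real model scores task-specific features of the state.
    return {a: 0.0 for a in actions}

def greedy_parse(words):
    stack, buffer, arcs = [0], list(range(1, len(words) + 1)), []   # 0 = ROOT
    while buffer or len(stack) > 1:
        actions = legal_actions(stack, buffer)
        scores = score_actions(stack, buffer, actions)
        apply_action(max(actions, key=scores.get), stack, buffer, arcs)
    return arcs                              # (head index, dependent index) pairs

print(greedy_parse(["I", "saw", "her"]))
```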
Lactobacillus plantarum ctsR was characterized. ctsR was found to be cotranscribed with clpC and induced in response to various abiotic stresses. ctsR deletion conferred a heat-sensitive phenotype with peculiar cell morphological features. The transcriptional pattern of putative CtsR regulon genes was examined in the ΔctsR mutant. Direct CtsR-dependent …
This paper addresses the problem of mapping natural language sentences to lambda-calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution …
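The log-linear component mentioned here can be sketched as a distribution over candidate logical forms. Everything concrete below (the candidate lambda-calculus expressions, feature names, and weights) is invented for illustration; the grammar-induction step that generates the candidates is not shown.

```python
# Illustrative sketch only: scoring candidate lambda-calculus parses with a
# log-linear model. Candidates, features, and weights are made up.
import math

def score(features, weights):
    return sum(weights.get(f, 0.0) * v for f, v in features.items())

def log_linear_distribution(candidates, weights):
    # P(z | x) is proportional to exp(w . f(x, z)) over candidate logical forms z.
    scores = [score(f, weights) for _, f in candidates]
    log_z = math.log(sum(math.exp(s) for s in scores))
    return {form: math.exp(s - log_z) for (form, _), s in zip(candidates, scores)}

# Hypothetical candidates for "flights to Boston" with toy feature counts.
candidates = [
    ("lambda x. flight(x) & to(x, boston)", {"lex:flights->flight": 1, "lex:boston": 1}),
    ("lambda x. city(x) & to(x, boston)",   {"lex:boston": 1}),
]
weights = {"lex:flights->flight": 1.5, "lex:boston": 0.5}
print(log_linear_distribution(candidates, weights))
```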
HEAD-DRIVEN STATISTICAL MODELS FOR NATURAL LANGUAGE PARSING. Michael Collins. Supervisor: Professor Mitch Marcus. Statistical models for parsing natural language have recently shown considerable success in broad-coverage domains. Ambiguity often leads to an input sentence having many possible parse trees; statistical approaches assign a probability to each tree …
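The core idea, assigning each candidate tree a probability and returning the most probable one, can be made concrete with a toy PCFG-style example. The rule probabilities and the PP-attachment candidates below are invented; the thesis's head-driven, lexicalized decomposition is not reproduced.

```python
# Toy illustration of ranking parses: each candidate tree gets a probability as
# a product of rule probabilities, and the parser returns the most probable one.
from math import prod

rule_prob = {
    ("S", ("NP", "VP")): 0.9,
    ("VP", ("V", "NP")): 0.6,
    ("VP", ("V", "NP", "PP")): 0.3,
    ("NP", ("NP", "PP")): 0.2,
}

def tree_prob(rules_used):
    # Probability of a tree = product over the rules it uses (PCFG-style).
    return prod(rule_prob.get(r, 1e-6) for r in rules_used)

# Two candidate analyses of a PP-attachment ambiguity, listed by the rules used.
candidates = {
    "attach PP to VP": [("S", ("NP", "VP")), ("VP", ("V", "NP", "PP"))],
    "attach PP to NP": [("S", ("NP", "VP")), ("VP", ("V", "NP")), ("NP", ("NP", "PP"))],
}
best = max(candidates, key=lambda name: tree_prob(candidates[name]))
print(best, {name: tree_prob(rules) for name, rules in candidates.items()})
```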
We present a discriminative latent variable model for classification problems in structured domains where inputs can be represented by a graph of local observations. A hidden-state conditional random field framework learns a set of latent variables conditioned on local features. Observations need not be independent and may overlap in space and time.
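A rough sketch of the inference step in such a model: for each class, marginalize over the chain of hidden states with the forward algorithm and normalize across classes. The dimensions and random parameters below are placeholders, the chain structure is a simplifying assumption, and parameter estimation is omitted.

```python
# Sketch of the scoring step in a hidden-state CRF over a linear chain of T
# local observations, with H hidden states and per-class parameters.
# P(y | x) marginalizes over hidden-state sequences via the forward algorithm.
import numpy as np

rng = np.random.default_rng(0)
T, D, H, Y = 5, 8, 3, 2            # observations, feature dim, hidden states, classes
x = rng.normal(size=(T, D))        # local observation features

W = rng.normal(size=(Y, H, D))     # W[y, h] . x_t  scores hidden state h at time t
A = rng.normal(size=(Y, H, H))     # A[y, h_prev, h] scores a hidden-state transition

def logsumexp_cols(m):
    m_max = m.max(axis=0)
    return m_max + np.log(np.exp(m - m_max).sum(axis=0))

def log_score(y):
    # log of the sum over all hidden sequences of exp(emission + transition scores)
    emit = x @ W[y].T                              # (T, H)
    alpha = emit[0]                                # forward log-potentials at t = 0
    for t in range(1, T):
        alpha = emit[t] + logsumexp_cols(alpha[:, None] + A[y])
    return logsumexp_cols(alpha[:, None])[0]

scores = np.array([log_score(y) for y in range(Y)])
probs = np.exp(scores - scores.max())
probs /= probs.sum()
print(probs)                                       # P(y | x) for each class
```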
We present structured perceptron training for neural network transition-based dependency parsing. We learn the neural network representation using a gold corpus augmented by a large number of automatically parsed sentences. Given this fixed network representation, we learn a final layer using the structured perceptron with beam-search decoding. On the Penn …
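The training loop described here can be sketched as a standard structured perceptron with beam-search decoding. The feature function phi, the action set, and the toy data below are hypothetical stand-ins for the fixed network representation; refinements such as early update and weight averaging are omitted.

```python
# Schematic sketch of structured-perceptron training with beam search over
# action sequences; phi() and the data are placeholders assumed for illustration.
from collections import defaultdict

def beam_search(x, weights, actions, length, phi, beam_size=4):
    beam = [([], 0.0)]                       # (action sequence so far, score)
    for t in range(length):
        expanded = []
        for seq, score in beam:
            for a in actions:
                s = score + sum(weights[f] for f in phi(x, seq, a))
                expanded.append((seq + [a], s))
        beam = sorted(expanded, key=lambda p: -p[1])[:beam_size]
    return beam[0][0]

def seq_feats(x, seq, phi):
    feats = []
    for t, a in enumerate(seq):
        feats.extend(phi(x, seq[:t], a))
    return feats

def perceptron_train(data, actions, phi, epochs=5, beam_size=4):
    weights = defaultdict(float)
    for _ in range(epochs):
        for x, gold in data:
            pred = beam_search(x, weights, actions, len(gold), phi, beam_size)
            if pred != gold:                 # reward gold features, penalize predicted ones
                for feats, sign in ((seq_feats(x, gold, phi), +1.0),
                                    (seq_feats(x, pred, phi), -1.0)):
                    for f in feats:
                        weights[f] += sign
    return weights

# Toy usage with a hypothetical feature function over (input, history, action).
phi = lambda x, hist, a: [f"w={x[len(hist)]}_a={a}",
                          f"prev={hist[-1] if hist else 'START'}_a={a}"]
data = [(["dog", "runs"], ["N", "V"]), (["cats", "sleep"], ["N", "V"])]
w = perceptron_train(data, actions=["N", "V"], phi=phi)
print(beam_search(["dog", "runs"], w, ["N", "V"], 2, phi))
```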
We present a simple and effective semi-supervised method for training dependency parsers. We focus on the problem of lexical representation, introducing features that incorporate word clusters derived from a large unannotated corpus. We demonstrate the effectiveness of the approach in a series of dependency parsing experiments on the Penn Treebank and Prague …
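The cluster-based lexical features can be illustrated with hierarchical (Brown-style) cluster bit strings, where prefixes of the bit string give coarser word classes. The cluster table and feature templates below are invented for illustration and are not the paper's exact templates.

```python
# Sketch of cluster-augmented features for a dependency parser, assuming
# hierarchical cluster bit strings learned from unlabeled text.

clusters = {              # word -> bit-string cluster ID (hypothetical values)
    "the": "0010", "a": "0011", "dog": "01101", "cat": "01100", "barked": "1101",
}

def word_features(word, prefix_lengths=(4, 6)):
    feats = [f"word={word.lower()}"]
    bits = clusters.get(word.lower())
    if bits:
        feats.append(f"cluster={bits}")           # full cluster ID
        for k in prefix_lengths:                  # coarser ancestors in the hierarchy
            feats.append(f"cluster[:{k}]={bits[:k]}")
    return feats

def arc_features(head, dependent):
    # Conjoin head and dependent representations, as in first-order arc features.
    return [f"h:{h}+d:{d}" for h in word_features(head) for d in word_features(dependent)]

print(arc_features("barked", "dog"))
```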
We describe the application of kernel methods to Natural Language Processing (NLP) problems. In many NLP tasks the objects being modeled are strings, trees, graphs or other discrete structures which require some mechanism to convert them into feature vectors. We describe kernels for various natural language structures, allowing rich, high-dimensional …
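One concrete instance of such a kernel is a subtree-counting convolution kernel over parse trees, sketched below. The tuple-based tree encoding and the downweighting parameter lam are assumptions made for the example; the sum over node pairs is the implicit dot product of exponentially many subtree-indicator features.

```python
# Compact sketch of a subtree-counting tree kernel. Trees are nested tuples
# (label, child, child, ...); leaves are plain strings.

def productions_match(t1, t2):
    # Same label and same sequence of child labels at the two roots.
    label = lambda t: t[0] if isinstance(t, tuple) else t
    return (isinstance(t1, tuple) and isinstance(t2, tuple)
            and label(t1) == label(t2)
            and len(t1) == len(t2)
            and all(label(a) == label(b) for a, b in zip(t1[1:], t2[1:])))

def common_subtrees(t1, t2, lam=0.5):
    # C(n1, n2): down-weighted count of common subtrees rooted at n1 and n2.
    if not productions_match(t1, t2):
        return 0.0
    score = lam
    for a, b in zip(t1[1:], t2[1:]):
        if isinstance(a, tuple) and isinstance(b, tuple):
            score *= 1.0 + common_subtrees(a, b, lam)
    return score

def tree_kernel(t1, t2, lam=0.5):
    # Sum C over all node pairs of the two trees.
    nodes = lambda t: [t] + [n for c in t[1:] if isinstance(c, tuple) for n in nodes(c)]
    return sum(common_subtrees(a, b, lam) for a in nodes(t1) for b in nodes(t2))

t1 = ("S", ("NP", ("D", "the"), ("N", "dog")), ("VP", ("V", "barked")))
t2 = ("S", ("NP", ("D", "the"), ("N", "cat")), ("VP", ("V", "barked")))
print(tree_kernel(t1, t2))
```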