We present an approach to grammatical error correction for the CoNLL 2013 shared task based on a weighted tree-to-string transducer. Rules for the transducer are extracted from the NUCLE training data. An n-gram language model is used to rerank k-best sentence lists generated by the transducer. Our system obtains a precision, recall, and F1 score of 0.27, …
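The LM reranking step described in this abstract can be sketched in a few lines. The bigram probabilities and candidate sentences below are invented for illustration; a real system would score with a model trained on large corpora, not a hand-written table:

```python
import math

# Hypothetical bigram log-probabilities; unseen bigrams get a small floor.
BIGRAM_LOGPROB = {
    ("<s>", "he"): math.log(0.4),
    ("he", "goes"): math.log(0.5),
    ("he", "go"): math.log(0.05),
    ("goes", "home"): math.log(0.3),
    ("go", "home"): math.log(0.3),
    ("home", "</s>"): math.log(0.6),
}
UNSEEN = math.log(1e-4)

def lm_score(sentence):
    """Sum bigram log-probabilities over the padded token sequence."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    return sum(BIGRAM_LOGPROB.get(pair, UNSEEN)
               for pair in zip(tokens, tokens[1:]))

def rerank(kbest):
    """Sort a k-best candidate list by LM score, best hypothesis first."""
    return sorted(kbest, key=lm_score, reverse=True)

print(rerank(["he go home", "he goes home"])[0])  # → "he goes home"
```

In the described system the candidates would come from the transducer's k-best output rather than a literal list, but the reranking itself is exactly this sort-by-score step.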
Models of protein evolution currently come in two flavors: generalist and specialist. Generalist models (e.g. PAM, JTT, WAG) adopt a one-size-fits-all approach, where a single model is estimated from a number of different protein alignments. Specialist models (e.g. mtREV, rtREV, HIVbetween) can be estimated when a large quantity of data is available for a …
We propose a simple, scalable, fully generative model for transition-based dependency parsing with high accuracy. The model, parameterized by Hierarchical Pitman-Yor Processes, overcomes the limitations of previous generative models by allowing fast and accurate inference. We propose an efficient decoding algorithm based on particle filtering that can adapt …
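The abstract is truncated before the decoding details, but the core particle-filtering operation, resampling hypotheses in proportion to their weights so that high-probability partial derivations survive, can be sketched generically. The systematic-resampling variant, the function names, and the toy weights below are assumptions for illustration, not the paper's exact algorithm:

```python
import random

def systematic_resample(particles, weights, seed=0):
    """Draw len(particles) survivors with probability proportional to weight.

    In a particle-filter decoder the 'particles' would be partial
    derivations (e.g. transition sequences) and the weights their
    model probabilities; here they are opaque labels.
    """
    rng = random.Random(seed)
    n = len(particles)
    total = sum(weights)
    step = total / n
    u = rng.uniform(0.0, step)          # single random offset
    out, cum, i = [], weights[0], 0
    for k in range(n):
        target = u + k * step
        while cum < target:             # advance to the covering particle
            i += 1
            cum += weights[i]
        out.append(particles[i])
    return out

# A high-weight hypothesis tends to be kept multiple times;
# low-weight ones tend to be dropped.
survivors = systematic_resample(["a", "b", "c", "d"], [0.7, 0.1, 0.1, 0.1])
print(survivors)
```

Systematic resampling is a standard low-variance choice; a full decoder would interleave such resampling steps with extending each surviving derivation by one parser transition.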
We present an end-to-end neural encoder-decoder AMR parser that extends an attention-based model by predicting the alignment between graph nodes and sentence tokens explicitly with a pointer mechanism. Candidate lemmas are predicted as a pre-processing step so that the lemmas of lexical concepts, as well as constant strings, are factored out of the graph …
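A pointer mechanism of the kind this abstract describes can be illustrated briefly: the decoder state is scored against each encoder token state, and the softmax over those scores is read as a distribution over source positions, i.e. a predicted alignment. The two-dimensional toy vectors and the dot-product scorer are illustrative assumptions, not the paper's architecture:

```python
import math

def point(dec_state, encoder_states):
    """Softmax over dot-product scores = distribution over source tokens."""
    scores = [sum(d * e for d, e in zip(dec_state, enc))
              for enc in encoder_states]
    m = max(scores)                        # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [x / z for x in exps]

# Toy encoder states for three source tokens, and one decoder state
# that happens to be closest to token 0.
enc_states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
dec_state = [1.0, -0.2]

probs = point(dec_state, enc_states)
best = max(range(len(probs)), key=probs.__getitem__)
print(best)  # → 0: the graph node is aligned to source token 0
```

In training, the alignment would be supervised (or marginalized) rather than read off with an argmax, but the forward computation is this attention-as-pointer step.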
Foreword

The 2014 Department of Computer Science Student Conference was held on 13 June in the department. This year we received a healthy number of submissions: 19 abstracts and 9 posters. Particularly encouraging is that the 12 accepted abstracts represented research from across the department's research themes. The conference …