Word Representations: A Simple and General Method for Semi-Supervised Learning
This work evaluates Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking, finding that each of the three word representations improves accuracy over the baselines.
Theano: A Python framework for fast computation of mathematical expressions
The performance of Theano is compared against Torch7 and TensorFlow on several machine learning models and recently-introduced functionalities and improvements are discussed.
Theano: A CPU and GPU Math Compiler in Python
This paper illustrates how to use Theano, outlines the scope of the compiler, provides benchmarks on both CPU and GPU processors, and explains its overall design.
Evaluation of machine translation and its evaluation
The unigram-based F-measure has significantly higher correlation with human judgments than recently proposed alternatives and has an intuitive graphical interpretation, which can facilitate insight into how MT systems might be improved.
Precision and Recall of Machine Translation
Machine translation can be evaluated using precision, recall, and the F-measure, which have significantly higher correlation with human judgments than recently proposed alternatives.
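The unigram-based precision, recall, and F-measure described in the two entries above can be sketched in a few lines. The function below is a hypothetical helper (the papers define the measures, not this code): precision is the clipped unigram overlap divided by candidate length, recall divides by reference length, and F is their harmonic mean.

```python
from collections import Counter

def unigram_prf(candidate, reference):
    """Unigram precision, recall, and F-measure between a candidate
    translation and a reference (illustrative sketch, not the papers'
    implementation)."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    # Clipped overlap: each candidate word counts at most as often
    # as it appears in the reference (Counter & takes the min).
    overlap = sum((cand & ref).values())
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f = (2 * precision * recall / (precision + recall)) if overlap else 0.0
    return precision, recall, f

p, r, f = unigram_prf("the cat sat on the mat", "the cat is on the mat")
```

Here five of the six candidate unigrams overlap with the reference, so precision, recall, and F are all 5/6.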
Experience Grounds Language
It is posited that the present success of representation learning approaches trained on large text corpora can be deeply enriched from the parallel tradition of research on the contextual and social nature of language.
Accurate Dependency Parsing with a Stacked Multilayer Perceptron
Recent improvements to the DeSR parser are described, in particular stacked parsing, a beam search strategy, and a Multilayer Perceptron classifier.
Advances in Discriminative Parsing
This work has no generative component, yet it surpasses a generative baseline on constituent parsing with minimal linguistic cleverness, performing feature selection incrementally over an exponential feature space during training.
Scalable Purely-Discriminative Training for Word and Tree Transducers
- Benjamin Wellington, Joseph P. Turian, Christopher R. Pike, Daniel R. Melamed
- Computer Science, AMTA
The present study makes progress towards a syntax-aware MT system whose every component is trained discriminatively, an approach to discriminative learning that is computationally efficient enough for large statistical MT systems, yet whose accuracy on translation sub-tasks is near the state of the art.
A preliminary evaluation of word representations for named-entity recognition
- Joseph P. Turian
This work evaluates Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words for named-entity recognition with a linear model, finding that all three representations improve accuracy on NER.