METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments
METEOR is described, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machine-produced translation and human-produced reference translations, and can be easily extended to include more advanced matching strategies.
Meteor Universal: Language Specific Translation Evaluation for Any Target Language
Meteor Universal brings language-specific evaluation to previously unsupported target languages by automatically extracting linguistic resources from the bitext used to train MT systems and using a universal parameter set learned from pooling human judgments of translation quality from several language directions.
METEOR: An Automatic Metric for MT Evaluation with High Levels of Correlation with Human Judgments
The technical details underlying the Meteor metric are recapped; the latest release includes improved metric parameters and extends the metric to support evaluation of MT output in Spanish, French, and German, in addition to English.
Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems
This paper describes Meteor 1.3, our submission to the 2011 EMNLP Workshop on Statistical Machine Translation automatic evaluation metric tasks. New metric features include improved text …
Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability
This paper provides a systematic analysis of the effects of optimizer instability, an extraneous variable that is seldom controlled for, on experimental outcomes, and makes recommendations for reporting results more accurately.
The Meteor metric for automatic evaluation of machine translation
The Meteor Automatic Metric for Machine Translation evaluation, originally developed and released in 2004, was designed with the explicit goal of producing sentence-level scores which correlate well …
Humor Recognition and Humor Anchor Extraction
This work identifies several semantic structures behind humor and designs sets of features for each structure, employs a computational approach to recognize humor, and develops a simple and effective method to extract anchors that enable humor in a sentence.
Parser Combination by Reparsing
A novel parser combination scheme is presented that works by reparsing input sentences once they have already been parsed by several different parsers, generating results that surpass state-of-the-art accuracy levels for individual parsers.
A Classifier-Based Parser with Linear Run-Time Complexity
It is shown that, with an appropriate feature set used in classification, a very simple one-path greedy parser can perform at the same level of accuracy as more complex parsers.
Meteor, M-BLEU and M-TER: Evaluation Metrics for High-Correlation with Human Rankings of Machine Translation Output
This paper describes our submissions to the machine translation evaluation shared task in ACL WMT-08. Our primary submission is the Meteor metric tuned for optimizing correlation with human rankings …