Findings of the 2015 Workshop on Statistical Machine Translation

@inproceedings{Bojar2015FindingsOT,
  title={Findings of the 2015 Workshop on Statistical Machine Translation},
  author={Ondrej Bojar and Rajen Chatterjee and Christian Federmann and Barry Haddow and Matthias Huck and Chris Hokamp and Philipp Koehn and Varvara Logacheva and Christof Monz and Matteo Negri and Matt Post and Carolina Scarton and Lucia Specia and Marco Turchi},
  booktitle={WMT@EMNLP},
  year={2015}
}
This paper presents the results of the WMT15 shared tasks, which included a standard news translation task, a metrics task, a tuning task, a task for run-time estimation of machine translation quality, and an automatic post-editing task. This year, 68 machine translation systems from 24 institutions were submitted to the ten translation directions in the standard translation task. An additional 7 anonymized systems were included, and were then evaluated both automatically and manually. The…
Citations

Findings of the 2016 Conference on Machine Translation
This paper presents the results of the WMT16 shared tasks, which included five machine translation (MT) tasks (standard news, IT-domain, biomedical, multimodal, pronoun), three evaluation tasks…
Results of the WMT15 Tuning Shared Task
This paper presents the results of the WMT15 Tuning Shared Task. We provided the participants of this task with a complete machine translation system and asked them to tune its internal parameters…
The UU Submission to the Machine Translation Quality Estimation Task
This paper outlines the UU-SVM system for Task 1 of the WMT16 Shared Task in Quality Estimation. Our system uses Support Vector Machine Regression to investigate the impact of a series of features…
Ten Years of WMT Evaluation Campaigns: Lessons Learnt
The WMT evaluation campaign (http://www.statmt.org/wmt16) has been run annually since 2006. It is a collection of shared tasks related to machine translation, in which researchers compare their…
The Edinburgh machine translation systems for IWSLT 2015
The University of Edinburgh’s machine translation systems for the IWSLT 2015 evaluation campaign are described, based on preliminary systems which are under development for the purpose of lecture translation in the TraMOOC project, funded by the European Union.
Bilingual Embeddings and Word Alignments for Translation Quality Estimation
This paper describes the submission UFAL MULTIVEC to the WMT16 Quality Estimation Shared Task, for English-German sentence-level post-editing effort prediction and ranking, which outperforms the baseline, as well as the winning system in WMT15, Referential Translation Machines (RTM).
Findings of the WMT 2018 Shared Task on Automatic Post-Editing
The fourth round of the WMT shared task on MT Automatic Post-Editing, which consists in automatically correcting the output of a “black-box” machine translation system by learning from human corrections, focused on one language pair and on domain-specific data.
FBK HLT-MT Participation in the 1st Translation Memory Cleaning Shared Task
We present the translation memory cleaning system designed by FBK HLT-MT to participate in the shared task of the NLP4TM 2016 workshop. Our system integrates different feature extraction approaches…
A Reading Comprehension Corpus for Machine Translation Evaluation
This paper introduces a corpus of reading comprehension tests based on machine-translated documents, where documents are evaluated based on answers to questions by fluent speakers of the target language.
Machine Translation Evaluation beyond the Sentence Level
Several document-level MT evaluation metrics are proposed: generalizations of sentence-level metrics, language(-pair)-independent versions of lexical cohesion scores, and coreference and morphology preservation in the target texts.

References

Showing 1-10 of 114 references
Findings of the 2014 Workshop on Statistical Machine Translation
This paper presents the results of the WMT14 shared tasks, which included a standard news translation task, a separate medical translation task, a task for run-time estimation of machine translation…
Findings of the 2013 Workshop on Statistical Machine Translation
We present the results of the WMT13 shared tasks, which included a translation task, a task for run-time estimation of machine translation quality, and an unofficial metrics task. This year, 143…
Findings of the 2012 Workshop on Statistical Machine Translation
A large-scale manual evaluation of 103 machine translation systems submitted by 34 teams was conducted, which used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 12 evaluation metrics.
Findings of the 2009 Workshop on Statistical Machine Translation
A large-scale manual evaluation of 87 machine translation systems and 22 system combination entries is conducted, which used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality, for more than 20 metrics.
Findings of the 2010 Joint Workshop on Statistical Machine Translation and Metrics for Machine Translation
A large-scale manual evaluation of 104 machine translation systems and 41 system combination entries was conducted, which used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 26 metrics.
Findings of the 2011 Workshop on Statistical Machine Translation
The WMT11 shared tasks, which included a translation task, a system combination task, and a task for machine translation evaluation metrics, show how strongly automatic metrics correlate with human judgments of translation quality for 21 evaluation metrics.
Results of the WMT15 Tuning Shared Task
This paper presents the results of the WMT15 Tuning Shared Task. We provided the participants of this task with a complete machine translation system and asked them to tune its internal parameters…
Further Meta-Evaluation of Machine Translation
This paper analyzes the translation quality of machine translation systems for 10 language pairs translating between Czech, English, French, German, Hungarian, and Spanish, and uses the human judgments of the systems to analyze automatic evaluation metrics for translation quality.
The FBK Participation in the WMT15 Automatic Post-editing Shared Task
This paper describes the “FBK English-Spanish Automatic Post-editing (APE)” systems submitted to the APE shared task at WMT 2015 and introduces some novel task-specific dense features through which improvements over the default setup of these approaches are observed.
USAAR-SAPE: An English–Spanish Statistical Automatic Post-Editing System
We describe the USAAR-SAPE English-Spanish Automatic Post-Editing (APE) system submitted to the APE Task organized in the Workshop on Statistical Machine Translation (WMT) in 2015. Our system was…