Findings of the 2014 Workshop on Statistical Machine Translation
@inproceedings{Bojar2014FindingsOT,
  title     = {Findings of the 2014 Workshop on Statistical Machine Translation},
  author    = {Ondřej Bojar and Christian Buck and Christian Federmann and Barry Haddow and Philipp Koehn and Johannes Leveling and Christof Monz and Pavel Pecina and Matt Post and Hervé Saint-Amand and Radu Soricut and Lucia Specia and Aleš Tamchyna},
  booktitle = {WMT@ACL},
  year      = {2014}
}
This paper presents the results of the WMT14 shared tasks, which included a standard news translation task, a separate medical translation task, a task for run-time estimation of machine translation quality, and a metrics task. This year, 143 machine translation systems from 23 institutions were submitted to the ten translation directions in the standard translation task. An additional 6 anonymized systems were included, and were then evaluated both automatically and manually. The quality…
487 Citations
Findings of the 2015 Workshop on Statistical Machine Translation
- Computer Science · WMT@EMNLP
- 2015
This paper presents the results of the WMT15 shared tasks, which included a standard news translation task, a metrics task, a tuning task, a task for run-time estimation of machine translation…
Findings of the 2016 Conference on Machine Translation
- Computer Science · WMT
- 2016
This paper presents the results of the WMT16 shared tasks, which included five machine translation (MT) tasks (standard news, IT-domain, biomedical, multimodal, pronoun), three evaluation tasks…
Findings of the WMT 2018 Shared Task on Quality Estimation
- Computer Science · WMT
- 2018
Reports on the WMT18 shared task on Quality Estimation, i.e. predicting the quality of machine translation output at various granularity levels: word, phrase, sentence, and document.
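To make the quality-estimation setting concrete, here is a minimal sketch: sentence-level QE framed as regression from a few shallow features of a (source, MT output) pair to a quality score such as HTER. The features, toy data, and use of scikit-learn are illustrative assumptions, not the shared-task baseline system.

```python
# Illustrative sketch only: sentence-level quality estimation as regression
# from shallow features of a (source, MT output) pair to a quality score.
# Features and data are toy examples, not the WMT18 baseline.
from sklearn.svm import SVR


def features(source: str, hypothesis: str) -> list[float]:
    """A few shallow, language-independent features (hypothetical choices)."""
    src_tokens = source.split()
    hyp_tokens = hypothesis.split()
    return [
        len(src_tokens),                                    # source length
        len(hyp_tokens),                                    # MT output length
        len(hyp_tokens) / max(len(src_tokens), 1),          # length ratio
        sum(t in src_tokens for t in hyp_tokens)
        / max(len(hyp_tokens), 1),                          # tokens copied verbatim from source
    ]


# Toy training data: (source, MT output, human quality score in [0, 1]).
train = [
    ("the cat sat on the mat", "le chat est assis sur le tapis", 0.9),
    ("the cat sat on the mat", "chat tapis le", 0.2),
    ("i would like a coffee", "je voudrais un café", 0.95),
    ("i would like a coffee", "je café un", 0.3),
]

X = [features(src, hyp) for src, hyp, _ in train]
y = [score for _, _, score in train]

model = SVR(kernel="rbf").fit(X, y)
print(model.predict([features("the cat sat on the mat", "le chat sur tapis")]))
```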
Findings of the Third Workshop on Neural Generation and Translation
- Computer Science · EMNLP
- 2019
Summarizes the research trends of papers presented in the proceedings and describes the results of the two shared tasks: 1) efficient neural machine translation (NMT) and 2) document generation and translation (DGT).
Findings of the Second Workshop on Neural Machine Translation and Generation
- Computer Science · NMT@ACL
- 2018
The results of the workshop’s shared task on efficient neural machine translation are described, where participants were tasked with creating MT systems that are both accurate and efficient.
Measures of Machine Translation Quality
- Computer Science
- 2014
Proposes a manual evaluation method in which annotators rank translations of short segments rather than whole sentences, and reports an annotation experiment showing that this makes annotation easier and more efficient.
Machine Translation and Monolingual Postediting: The AFRL WMT-14 System
- Linguistics · WMT@ACL
- 2014
Describes the AFRL statistical MT system and the improvements developed during the WMT14 evaluation campaign, as well as efforts to use monolingual English speakers to correct machine translation output.
A technical reading in statistical and neural machines translation (SMT & NMT)
- Computer Science · 2017 8th International Conference on Information Technology (ICIT)
- 2017
A survey of the state of the art of statistical machine translation and neural machine translation is presented, where the context of the current research studies is described, and the main strengths and limitations of the two approaches are reviewed.
Evaluating Machine Translation Quality Using Short Segments Annotations
- Computer Science · Prague Bull. Math. Linguistics
- 2015
Proposes a manual evaluation method for machine translation (MT) in which annotators rank translations of short segments rather than whole sentences, resulting in easier and more efficient annotation.
References
Showing 1–10 of 114 references
Findings of the 2013 Workshop on Statistical Machine Translation
- Computer Science · WMT@ACL
- 2013
We present the results of the WMT13 shared tasks, which included a translation task, a task for run-time estimation of machine translation quality, and an unofficial metrics task. This year, 143…
Findings of the 2012 Workshop on Statistical Machine Translation
- Computer Science, Psychology · WMT@NAACL-HLT
- 2012
Reports a large-scale manual evaluation of 103 machine translation systems submitted by 34 teams, and uses the resulting system rankings to measure how strongly 12 automatic evaluation metrics correlate with human judgments of translation quality.
Findings of the 2009 Workshop on Statistical Machine Translation
- Computer Science, Psychology · WMT@EACL
- 2009
Reports a large-scale manual evaluation of 87 machine translation systems and 22 system combination entries, and uses the resulting rankings to measure how strongly more than 20 automatic metrics correlate with human judgments of translation quality.
Findings of the 2011 Workshop on Statistical Machine Translation
- Computer Science · WMT@EMNLP
- 2011
Presents the WMT11 shared tasks, which included a translation task, a system combination task, and a machine translation evaluation metrics task, and reports how strongly 21 automatic evaluation metrics correlate with human judgments of translation quality.
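The findings papers above repeatedly ask how strongly an automatic metric agrees with human judgments; at the system level this is typically a rank correlation. The sketch below shows such a computation with invented scores, using SciPy's kendalltau and spearmanr; it illustrates the statistic only and is not the official WMT analysis code.

```python
# Illustrative computation only: system-level rank correlation between an
# automatic metric and human judgments. The scores below are invented.
from scipy.stats import kendalltau, spearmanr

# One score per MT system, in the same system order.
human_scores = [0.62, 0.55, 0.48, 0.41, 0.30]   # e.g. fraction of pairwise wins
metric_scores = [27.1, 26.4, 27.0, 22.3, 20.8]  # e.g. BLEU

tau, tau_p = kendalltau(human_scores, metric_scores)
rho, rho_p = spearmanr(human_scores, metric_scores)
print(f"Kendall tau = {tau:.3f} (p={tau_p:.3f}), Spearman rho = {rho:.3f} (p={rho_p:.3f})")
```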
Experiments in Medical Translation Shared Task at WMT 2014
- Computer Science · WMT@ACL
- 2014
Describes Dublin City University's (DCU) submission to the WMT 2014 medical summary translation task and reports results on the test set for the French-to-English translation direction.
Further Meta-Evaluation of Machine Translation
- Computer Science · WMT@ACL
- 2008
Analyzes the translation quality of machine translation systems for 10 language pairs between Czech, English, French, German, Hungarian, and Spanish, and uses human judgments of these systems to assess automatic evaluation metrics for translation quality.
Combining Domain Adaptation Approaches for Medical Text Translation
- Computer Science · WMT@ACL
- 2014
Explores a number of simple and effective techniques for adapting statistical machine translation systems to the medical domain; the resulting systems achieve the best BLEU scores for the Czech-English, English-German, and French-English language pairs and the second-best BLEU scores for the remaining pairs.
The CMU Machine Translation Systems at WMT 2014
- Computer Science · WMT@ACL
- 2014
Innovations include a label coarsening scheme for syntactic tree-to-tree translation, a host of new discriminative features, several modules that create "synthetic translation options" able to generalize beyond what is directly observed in the training data, and a method for combining the output of multiple word aligners to uncover extra phrase pairs and grammar rules.
Machine Translation of Medical Texts in the Khresmoi Project
- Computer Science · WMT@ACL
- 2014
Presents the participation of the Charles University team in the WMT 2014 Medical Translation Task, with the primary goal of establishing a baseline for both of its subtasks and for all translation directions.
Exploring Consensus in Machine Translation for Quality Estimation
- Computer Science · WMT@ACL
- 2014
Presents the use of consensus among machine translation (MT) systems for the WMT14 Quality Estimation shared task, comparing each MT system's output against several alternative machine translations using standard evaluation metrics.
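As a rough sketch of the consensus idea summarized above (not the authors' actual system): treat the outputs of the other MT systems as pseudo-references and score each hypothesis against them with a standard metric, using the result as a quality-estimation feature. The example assumes the sacrebleu package and uses toy sentences.

```python
# Rough sketch of consensus-based quality estimation (not the system from the
# paper above): each system's output is scored against the outputs of the
# other systems, treated as pseudo-references, with a standard metric.
# Assumes the sacrebleu package is installed.
import sacrebleu

# Outputs of several MT systems for the same source sentence (toy data).
system_outputs = {
    "sys_a": "the committee approved the proposal yesterday",
    "sys_b": "the committee approved the proposal on yesterday",
    "sys_c": "committee proposal approved",
}

for name, hypothesis in system_outputs.items():
    pseudo_refs = [out for other, out in system_outputs.items() if other != name]
    consensus = sacrebleu.sentence_bleu(hypothesis, pseudo_refs).score
    print(f"{name}: consensus BLEU = {consensus:.1f}")
```

A system whose output agrees with the consensus of the other systems gets a high score, which can then serve as one feature among others in a quality-estimation model.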