Corpus ID: 5189165

LEPOR: A Robust Evaluation Metric for Machine Translation with Augmented Factors

@inproceedings{Han2012LEPORAR,
  title={LEPOR: A Robust Evaluation Metric for Machine Translation with Augmented Factors},
  author={Lifeng Han and Derek F. Wong and Lidia S. Chao},
  booktitle={COLING},
  year={2012}
}
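The entry above can be used directly from a LaTeX document. A minimal sketch, assuming the entry is saved in a file named `refs.bib` (hypothetical filename):

```latex
% Minimal document that cites the BibTeX entry above.
% Assumes the entry is stored in refs.bib alongside this file.
\documentclass{article}
\begin{document}
LEPOR~\cite{Han2012LEPORAR} is an automatic evaluation metric
for machine translation.
\bibliographystyle{plain}
\bibliography{refs}  % refs.bib contains the @inproceedings entry
\end{document}
```

Compiling requires the usual `latex` → `bibtex` → `latex` → `latex` sequence so the citation key `Han2012LEPORAR` is resolved.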
In conventional machine translation evaluation metrics, using too little information about the translations usually yields unreasonable results with low correlation to human judgments. On the other hand, relying on many external linguistic resources and tools (e.g. part-of-speech tagging, morphemes, stemming, and synonyms) makes the metrics complicated, time-consuming, and not universal, because different languages have different linguistic features. This paper proposes a novel…
36 Citations

- LEPOR: An Augmented Machine Translation Evaluation Metric
- Unsupervised Quality Estimation Model for English to German Translation and Its Application in Extensive Supervised Evaluation
- Adequacy–Fluency Metrics: Evaluating MT in the Continuous Space Model Framework
- A Description of Tunable Machine Translation Evaluation Systems in WMT13 Metrics Task
- Automatic Machine Translation Evaluation with Part-of-Speech Information
- How to evaluate machine translation: A review of automated and human metrics
- Machine Translation Evaluation: A Survey
- Machine Translation Evaluation Resources and Methods: A Survey
- RGraph: Generating Reference Graphs for Better Machine Translation Evaluation
