Paper: Re-Evaluating Machine Translation Results With Paraphrase Support

ACL ID W06-1610
Title Re-Evaluating Machine Translation Results With Paraphrase Support
Venue Conference on Empirical Methods in Natural Language Processing
Session Main Conference
Year 2006
Authors

In this paper, we present ParaEval, an automatic evaluation framework that uses paraphrases to improve the quality of machine translation evaluations. Previous work has focused on fixed n-gram evaluation metrics coupled with lexical identity matching. ParaEval addresses three important issues: support for paraphrase/synonym matching, recall measurement, and correlation with human judgments. We show that ParaEval correlates significantly better than BLEU with human assessment for both fluency and adequacy.
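
To illustrate the contrast the abstract draws between lexical identity matching and paraphrase/synonym matching, the following is a minimal sketch, not the ParaEval implementation: the tiny paraphrase table and the function names are hypothetical, and real paraphrase tables are induced automatically rather than hand-written.

```python
# Minimal sketch: exact lexical unigram matching vs. paraphrase-aware matching
# against a single reference. The paraphrase table below is a toy example.

PARAPHRASES = {
    "begin": {"start", "commence"},
    "talks": {"negotiations", "discussions"},
}

def exact_matches(candidate, reference):
    """Count candidate tokens that appear verbatim in the reference."""
    ref = set(reference)
    return sum(1 for tok in candidate if tok in ref)

def paraphrase_matches(candidate, reference):
    """Count candidate tokens matching the reference either verbatim
    or through the paraphrase table."""
    ref = set(reference)
    count = 0
    for tok in candidate:
        if tok in ref or PARAPHRASES.get(tok, set()) & ref:
            count += 1
    return count

if __name__ == "__main__":
    cand = "they will begin talks next week".split()
    ref = "they will start negotiations next week".split()
    print("exact:", exact_matches(cand, ref))          # 4 of 6 tokens match
    print("paraphrase:", paraphrase_matches(cand, ref))  # 6 of 6 tokens match
```

Under exact matching the candidate is penalized for legitimate lexical variation ("begin talks" vs. "start negotiations"); paraphrase-aware matching credits such variants, which is the kind of matching support the abstract attributes to ParaEval.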