Paper: Scalable Inference And Training Of Context-Rich Syntactic Translation Models

ACL ID P06-1121
Title Scalable Inference And Training Of Context-Rich Syntactic Translation Models
Venue Annual Meeting of the Association of Computational Linguistics
Session Main Conference
Year 2006
Authors

Statistical MT has made great progress in the last few years, but current translation models are weak on re-ordering and target-language fluency. Syntactic approaches seek to remedy these problems. In this paper, we take the framework for acquiring multi-level syntactic translation rules of (Galley et al., 2004) from aligned tree-string pairs, and present two main extensions of their approach: first, instead of merely computing a single derivation that minimally explains a sentence pair, we construct a large number of derivations that include contextually richer rules, and account for multiple interpretations of unaligned words. Second, we propose probability estimates and a training procedure for weighting these rules. We contrast different approaches on real examples, show t...
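The rule-extraction framework the abstract builds on (Galley et al., 2004) identifies "frontier" nodes in the source parse tree — nodes whose aligned target span is contiguous and does not overlap the spans of nodes outside their subtree — and extracts translation rules rooted at those nodes. The sketch below illustrates that frontier-node computation on a toy tree-string pair; the tree, sentence, and alignment are invented for illustration and do not come from the paper.

```python
# Toy sketch of frontier-node identification as in GHKM-style rule
# extraction (Galley et al., 2004). The example tree and alignment are
# illustrative assumptions, not data from this paper.

class Node:
    def __init__(self, label, children=None, word_index=None):
        self.label = label
        self.children = children or []
        self.word_index = word_index  # source position for leaf nodes
        self.span = set()             # target positions aligned to this subtree

def compute_spans(node, align):
    """Fill node.span bottom-up: the set of target positions aligned
    to any word in the node's yield."""
    if node.word_index is not None:
        node.span = set(align.get(node.word_index, []))
    else:
        node.span = set()
        for child in node.children:
            node.span |= compute_spans(child, align)
    return node.span

def frontier_nodes(root, align):
    """Return labels of frontier nodes: nodes whose span closure (the
    contiguous range covering the span) does not overlap the spans of
    nodes outside their subtree. Nodes with empty spans are skipped."""
    compute_spans(root, align)
    result = []

    def visit(node, outside):
        if node.span:
            lo, hi = min(node.span), max(node.span)
            closure = set(range(lo, hi + 1))
            if not (closure & outside):
                result.append(node.label)
        for i, child in enumerate(node.children):
            siblings = set()
            for j, other in enumerate(node.children):
                if j != i:
                    siblings |= other.span
            visit(child, outside | siblings)

    visit(root, set())
    return result

# Toy pair: English tree for "he does not go", French "il ne va pas",
# with "not" aligned to the discontinuous pair {ne, pas}.
tree = Node("S", [
    Node("NP", [Node("PRP", word_index=0)]),            # he
    Node("VP", [Node("AUX", word_index=1),              # does (unaligned)
                Node("RB", word_index=2),               # not -> ne, pas
                Node("VB", word_index=3)]),             # go  -> va
])
alignment = {0: [0], 2: [1, 3], 3: [2]}  # he->il, not->{ne,pas}, go->va
```

Running `frontier_nodes(tree, alignment)` yields `['S', 'NP', 'PRP', 'VP', 'VB']`: RB is excluded because its closure {1,2,3} overlaps VB's span, so no minimal rule can be rooted there. The paper's extensions then go beyond this single minimal derivation by composing larger, contextually richer rules over these frontier nodes.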