Paper: Fast and Scalable Decoding with Language Model Look-Ahead for Phrase-based Statistical Machine Translation

ACL ID P12-2006
Title Fast and Scalable Decoding with Language Model Look-Ahead for Phrase-based Statistical Machine Translation
Venue Annual Meeting of the Association of Computational Linguistics
Session Short Paper
Year 2012
Authors

In this work we present two extensions to the well-known dynamic programming beam search in phrase-based statistical machine translation (SMT), aiming at increased efficiency of decoding by minimizing the number of language model computations and hypothesis expansions. Our results show that language model based pre-sorting yields a small improvement in translation quality and a speedup by a factor of 2. Two look-ahead methods are shown to further increase translation speed by a factor of 2 without changing the search space and a factor of 4 with the side-effect of some additional search errors. We compare our approach with Moses and observe the same performance, but a substantially better trade-off between translation quality and speed. At a speed of roughly 70 words per second, ...
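To make the idea of language model based pre-sorting more concrete, the following is a minimal sketch, not the paper's actual implementation: phrase translation candidates for a source phrase are ranked by combining their translation model score with a cheap language model estimate of the target words, and only the top-k candidates are kept for later hypothesis expansion. The candidate list, the unigram log-probabilities, and the names `candidates`, `lm_unigram_logprob`, and `presort_key` are all illustrative assumptions.

```python
# Hypothetical candidate list for one source phrase:
# (target_phrase, translation_logprob) pairs; values are illustrative only.
candidates = [
    ("house", -0.4),
    ("home", -0.9),
    ("building", -1.6),
]

# Hypothetical unigram LM log-probabilities used as a cheap look-ahead-style
# estimate; a real decoder would query its full language model here.
lm_unigram_logprob = {
    "house": -3.1,
    "home": -3.8,
    "building": -4.5,
}

def presort_key(candidate):
    """Combine the translation score with an LM estimate of the target words."""
    phrase, tm_logprob = candidate
    lm_estimate = sum(lm_unigram_logprob.get(w, -10.0) for w in phrase.split())
    return tm_logprob + lm_estimate

# Keep only the top-k candidates per source phrase after LM-aware pre-sorting,
# so the beam search later expands fewer, but better-scoring, hypotheses.
TOP_K = 2
pruned = sorted(candidates, key=presort_key, reverse=True)[:TOP_K]
print(pruned)
```

The design intuition, as described in the abstract, is that incorporating language model information before candidate pruning keeps better phrase options in the beam and reduces the number of expensive language model computations during expansion.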