Paper: Optimized Online Rank Learning for Machine Translation

ACL ID N12-1026
Title Optimized Online Rank Learning for Machine Translation
Venue Annual Conference of the North American Chapter of the Association for Computational Linguistics
Session Main Conference
Year 2012
Authors

We present an online learning algorithm for statistical machine translation (SMT) based on stochastic gradient descent (SGD). Under the online setting of rank learning, a corpus-wise loss has to be approximated by a batch-local loss when optimizing for evaluation measures that cannot be linearly decomposed into a sentence-wise loss, such as BLEU. We propose a variant of SGD with a larger batch size in which the parameter update in each iteration is further optimized by a passive-aggressive algorithm. Learning is efficiently parallelized, and line search is performed in each round when merging parameters across parallel jobs. Experiments on the NIST Chinese-to-English Open MT task indicate significantly better translation results.
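To make the passive-aggressive component concrete, the following is a minimal sketch of a pairwise passive-aggressive rank update, assuming a linear model over hypothesis feature vectors. The function name `pa_rank_update` and the toy feature vectors are illustrative, not from the paper; the closed-form step size follows the standard passive-aggressive formulation (Crammer et al.).

```python
import numpy as np

def pa_rank_update(w, x_better, x_worse):
    """One passive-aggressive update on a ranked hypothesis pair.

    Hypothetical helper: nudges w by the smallest amount needed so that
    the better-scored hypothesis beats the worse one by a margin of 1,
    i.e. w.(x_better - x_worse) >= 1 (basic PA, no aggressiveness cap).
    """
    diff = x_better - x_worse
    # hinge loss on the pairwise margin
    loss = max(0.0, 1.0 - w.dot(diff))
    if loss > 0.0:
        # closed-form minimal-change step size
        tau = loss / diff.dot(diff)
        w = w + tau * diff
    return w

# Toy usage with 3-dimensional feature vectors (illustrative values only).
w = np.zeros(3)
w = pa_rank_update(w, np.array([1.0, 0.0, 1.0]),
                      np.array([0.0, 1.0, 0.0]))
```

In the paper's setting such an update would refine each SGD step over a larger batch of hypothesis pairs; the sketch above shows only the single-pair case.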