Paper: Intelligent Selection of Language Model Training Data

ACL ID P10-2041
Title Intelligent Selection of Language Model Training Data
Venue Annual Meeting of the Association of Computational Linguistics
Session Short Paper
Year 2010
Authors

We address the problem of selecting non-domain-specific language model training data to build auxiliary language models for use in tasks such as machine translation. Our approach is based on comparing the cross-entropy, according to domain-specific and non-domain-specific language models, for each sentence of the text source used to produce the latter language model. We show that this produces better language models, trained on less data, than both random data selection and two other previously proposed methods.
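The cross-entropy-difference criterion described in the abstract can be sketched with toy unigram language models. This is a minimal illustration, not the paper's implementation: the add-alpha smoothing, the unigram assumption, and the zero threshold are all simplifying choices made here for clarity. A sentence from the general (non-domain-specific) source is kept when its per-word cross-entropy under the in-domain model, minus its per-word cross-entropy under the general-domain model, falls below the threshold.

```python
import math
from collections import Counter

def unigram_lm(corpus, vocab, alpha=0.5):
    """Add-alpha smoothed unigram model: word -> log probability.
    (Smoothing choice is illustrative, not from the paper.)"""
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values()) + alpha * len(vocab)
    return {w: math.log((counts[w] + alpha) / total) for w in vocab}

def cross_entropy(sentence, lm):
    """Per-word cross-entropy of a sentence under a unigram LM."""
    words = sentence.split()
    return -sum(lm[w] for w in words) / len(words)

def select(candidates, in_lm, out_lm, threshold=0.0):
    """Keep sentences whose in-domain cross-entropy minus
    general-domain cross-entropy is below the threshold."""
    return [s for s in candidates
            if cross_entropy(s, in_lm) - cross_entropy(s, out_lm) < threshold]

# Toy corpora (hypothetical data for illustration only).
in_domain = ["the model translates text", "the model selects data"]
general = ["cats chase mice", "the weather is nice", "the model selects data"]

vocab = {w for s in in_domain + general for w in s.split()}
in_lm = unigram_lm(in_domain, vocab)
out_lm = unigram_lm(general, vocab)

# Sentences resembling the in-domain text score lower and are retained.
kept = select(general, in_lm, out_lm)
```

In practice the paper's models are full n-gram language models rather than unigrams, and the threshold is tuned on held-out data; the intuition is the same, namely that a sentence should look more probable to the in-domain model than to the model of the source it came from.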