Paper: Factored Neural Language Models

ACL ID N06-2001
Title Factored Neural Language Models
Venue Human Language Technologies
Session Short Paper
Year 2006
Authors Andrei Alexandrescu, Katrin Kirchhoff

We present a new type of neural probabilistic language model that learns a mapping from both words and explicit word features into a continuous space that is then used for word prediction. Additionally, we investigate several ways of deriving continuous word representations for unknown words from those of known words. The resulting model significantly reduces perplexity on sparse-data tasks when compared to standard backoff models, standard neural language models, and factored language models.
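
As a rough illustration of the architecture described in the abstract, the sketch below shows a factored neural language model in PyTorch. It is not the authors' implementation: the single extra feature stream (e.g. a stem or POS tag), the fixed context window, the layer sizes, and the averaging strategy for unknown words are all assumptions made for the example. Each context word and its feature are mapped into continuous spaces, the representations are concatenated, and a hidden layer predicts the next word.

```python
# Minimal sketch of a factored neural LM (assumed sizes and a single
# feature stream per word; not the paper's exact configuration).
import torch
import torch.nn as nn

class FactoredNeuralLM(nn.Module):
    def __init__(self, word_vocab, feat_vocab, word_dim=32, feat_dim=16,
                 context=2, hidden=64):
        super().__init__()
        # Separate continuous representations for words and word features.
        self.word_emb = nn.Embedding(word_vocab, word_dim)
        self.feat_emb = nn.Embedding(feat_vocab, feat_dim)
        in_dim = context * (word_dim + feat_dim)
        self.hidden = nn.Linear(in_dim, hidden)
        self.out = nn.Linear(hidden, word_vocab)  # logits over next word

    def forward(self, word_ctx, feat_ctx):
        # word_ctx, feat_ctx: (batch, context) index tensors
        w = self.word_emb(word_ctx)               # (batch, context, word_dim)
        f = self.feat_emb(feat_ctx)               # (batch, context, feat_dim)
        x = torch.cat([w, f], dim=-1).flatten(1)  # concatenate factor streams
        h = torch.tanh(self.hidden(x))
        return self.out(h)

    def unknown_word_embedding(self, known_word_ids):
        # One conceivable way (the paper investigates several) to derive a
        # representation for an unseen word: average the embeddings of known
        # words that share its observed factors.
        return self.word_emb(known_word_ids).mean(dim=0)
```

The key design point conveyed by the abstract is that word features enter the model as separate embedded factors rather than being folded into the word index, so that rare or unseen surface forms can still draw on the continuous representations of their features.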