Paper: Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors

ACL ID P14-1023
Title Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors
Venue Annual Meeting of the Association for Computational Linguistics
Session Main Conference
Year 2014
Authors

Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts.
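To make the "count" side of the comparison concrete, the following is a minimal sketch (not the paper's exact setup; the toy corpus, window size, and helper names are illustrative assumptions) of a classic count-based model: collect co-occurrence counts within a symmetric context window, then reweight them with positive pointwise mutual information (PPMI), a standard weighting scheme for count vectors.

```python
import math
from collections import Counter

# Toy corpus for illustration only (the paper uses large web-scale corpora).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "the cat chased the dog".split(),
]

window = 2  # symmetric context window; one of the parameters such studies vary

# Count (target word, context word) co-occurrences within the window.
cooc = Counter()
word_counts = Counter()
for sentence in corpus:
    for i, w in enumerate(sentence):
        word_counts[w] += 1
        for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
            if j != i:
                cooc[(w, sentence[j])] += 1

total = sum(cooc.values())

def ppmi(w, c):
    """Positive pointwise mutual information between target w and context c."""
    p_wc = cooc[(w, c)] / total
    if p_wc == 0:
        return 0.0
    p_w = sum(n for (a, _), n in cooc.items() if a == w) / total
    p_c = sum(n for (_, b), n in cooc.items() if b == c) / total
    return max(0.0, math.log2(p_wc / (p_w * p_c)))

# Each word's count vector is its row of PPMI-weighted context counts.
vocab = sorted(word_counts)
vectors = {w: [ppmi(w, c) for c in vocab] for w in vocab}
```

Word similarity can then be computed as cosine similarity between these rows; the context-predicting alternative instead learns dense vectors by training a network to predict context words, rather than counting them.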