Paper: Revisiting Embedding Features for Simple Semi-supervised Learning

ACL ID D14-1012
Title Revisiting Embedding Features for Simple Semi-supervised Learning
Venue Conference on Empirical Methods in Natural Language Processing
Session Main Conference
Year 2014

Recent work has shown success in using continuous word embeddings learned from unlabeled data as features to improve supervised NLP systems, which is regarded as a simple semi-supervised learning mechanism. However, fundamental problems in effectively incorporating the word embedding features within the framework of linear models remain. In this study, we investigate and analyze three different approaches, including a newly proposed distributional prototype approach, for utilizing the embedding features. The presented approaches can be integrated into most of the classical linear models in NLP. Experiments on the task of named entity recognition show that each of the proposed approaches can better utilize the word embedding features, among which the distributional prototype approach...
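The two feature strategies the abstract contrasts can be sketched in a few lines: using the embedding dimensions directly as dense continuous features, versus a prototype-style approach that converts similarity to per-label prototype words into discrete indicator features that a linear model can consume. The embedding values, prototype words, and threshold below are all hypothetical toy choices for illustration, not the paper's actual data or parameter settings.

```python
import numpy as np

# Toy 4-dimensional embeddings standing in for vectors learned from
# unlabeled text (hypothetical values, for illustration only).
EMB = {
    "london":  np.array([0.9, 0.1, 0.0, 0.0]),
    "paris":   np.array([0.8, 0.2, 0.0, 0.1]),
    "alice":   np.array([0.0, 0.9, 0.1, 0.0]),
    "bob":     np.array([0.1, 0.8, 0.0, 0.1]),
}

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Approach 1: feed the embedding itself to the linear model as
# dense continuous features.
def dense_features(word):
    return EMB[word]

# Approach 2 (prototype-style sketch): pick prototype words per label
# and turn similarity to each prototype into a binary indicator, so the
# linear model only ever sees discrete features.
PROTOTYPES = {"LOC": ["london"], "PER": ["alice"]}

def prototype_features(word, threshold=0.8):
    feats = []
    for label in sorted(PROTOTYPES):          # deterministic feature order
        for proto in PROTOTYPES[label]:
            sim = cosine(EMB[word], EMB[proto])
            feats.append(1.0 if sim >= threshold else 0.0)
    return np.array(feats)

# "paris" fires the LOC-prototype feature; "bob" fires the PER one.
print(prototype_features("paris"))  # -> [1. 0.]
print(prototype_features("bob"))    # -> [0. 1.]
```

The discretization step is what lets the same feature templates plug into indicator-feature linear models such as a CRF or perceptron without mixing continuous and binary feature scales.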