Paper: Learning Grounded Meaning Representations with Autoencoders

ACL ID P14-1068
Title Learning Grounded Meaning Representations with Autoencoders
Venue Annual Meeting of the Association for Computational Linguistics
Session Main Conference
Year 2014
Authors

In this paper we address the problem of grounding distributional representations of lexical meaning. We introduce a new model which uses stacked autoencoders to learn higher-level embeddings from textual and visual input. The two modalities are encoded as vectors of attributes and are obtained automatically from text and images, respectively. We evaluate our model on its ability to simulate similarity judgments and concept categorization. On both tasks, our approach outperforms baselines and related models.
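The stacking scheme the abstract describes, modality-specific autoencoders whose hidden codes feed a joint autoencoder, can be sketched as follows. This is a minimal illustration with basic sigmoid autoencoders and squared-error training, not the paper's exact architecture or objective; the toy attribute vectors, dimensions, and function names are all hypothetical.

```python
import numpy as np

def train_autoencoder(X, hidden_dim, epochs=200, lr=0.1, seed=0):
    """Train a one-hidden-layer sigmoid autoencoder on X by gradient
    descent on squared reconstruction error; return encoder parameters."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, hidden_dim)); b1 = np.zeros(hidden_dim)
    W2 = rng.normal(0.0, 0.1, (hidden_dim, d)); b2 = np.zeros(d)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)        # encode
        Xhat = sig(H @ W2 + b2)     # decode (reconstruct the input)
        dXhat = (Xhat - X) * Xhat * (1 - Xhat)   # output-layer delta
        dH = (dXhat @ W2.T) * H * (1 - H)        # hidden-layer delta
        W2 -= lr * (H.T @ dXhat) / n; b2 -= lr * dXhat.mean(axis=0)
        W1 -= lr * (X.T @ dH) / n;    b1 -= lr * dH.mean(axis=0)
    return W1, b1

def encode(X, W, b):
    """Apply the learned encoder to get hidden-layer codes."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Hypothetical binary attribute vectors: 6 concepts, with textual
# attributes (10-dim) and visual attributes (8-dim).
rng = np.random.default_rng(1)
X_text = (rng.random((6, 10)) > 0.5).astype(float)
X_vis = (rng.random((6, 8)) > 0.5).astype(float)

# Stage 1: one autoencoder per modality.
Wt, bt = train_autoencoder(X_text, hidden_dim=5)
Wv, bv = train_autoencoder(X_vis, hidden_dim=4)
H_text = encode(X_text, Wt, bt)
H_vis = encode(X_vis, Wv, bv)

# Stage 2: a joint autoencoder over the concatenated modality codes
# yields the bimodal "grounded" embedding for each concept.
H_joint_in = np.concatenate([H_text, H_vis], axis=1)
Wj, bj = train_autoencoder(H_joint_in, hidden_dim=4)
embedding = encode(H_joint_in, Wj, bj)
print(embedding.shape)  # one 4-dim bimodal embedding per concept
```

In a sketch like this, similarity judgments would then be simulated by comparing rows of `embedding` (e.g. with cosine similarity), and categorization by clustering them.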