Paper: Learning Abstract Concept Embeddings from Multi-Modal Data: Since You Probably Can't See What I Mean

ACL ID D14-1032
Title Learning Abstract Concept Embeddings from Multi-Modal Data: Since You Probably Can't See What I Mean
Venue Conference on Empirical Methods in Natural Language Processing
Session Main Conference
Year 2014
Authors Felix Hill, Anna Korhonen
Felix Hill, Computer Laboratory, University of Cambridge, felix.hill@cl.cam.ac.uk
Anna Korhonen, Computer Laboratory, University of Cambridge, anna.korhonen@cl.cam.ac.uk

Abstract

Models that acquire semantic representations from both linguistic and perceptual input are of interest to researchers in NLP because of the obvious parallels with human language learning. Performance advantages of the multi-modal approach over language-only models have been clearly established when models are required to learn concrete noun concepts. However, such concepts are comparatively rare in everyday language. In this work, we present a new means of extending the scope of multi-modal models to more commonl...