Paper: Utterance-Level Multimodal Sentiment Analysis

ACL ID P13-1096
Title Utterance-Level Multimodal Sentiment Analysis
Venue Annual Meeting of the Association of Computational Linguistics
Session Main Conference
Year 2013
Authors Verónica Pérez-Rosas, Rada Mihalcea, Louis-Philippe Morency

During real-life interactions, people naturally gesture and modulate their voices to emphasize specific points or to express their emotions. With the recent growth of social websites such as YouTube, Facebook, and Amazon, video reviews are emerging as a new source of multimodal and natural opinions that has been left almost untapped by automatic opinion analysis techniques. This paper presents a method for multimodal sentiment classification, which can identify the sentiment expressed in utterance-level visual datastreams. Using a new multimodal dataset consisting of sentiment-annotated utterances extracted from video reviews, we show that multimodal sentiment analysis can be effectively performed, and that the joint use of visual, acoustic, and linguistic modalities can l...
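The abstract describes classifying sentiment at the utterance level by jointly using visual, acoustic, and linguistic cues. Below is a minimal sketch of one common way to realize this, assuming feature-level (early) fusion with a linear SVM; the feature arrays, their dimensions, and the random labels are hypothetical placeholders, not the paper's actual features or data.

```python
# Sketch of utterance-level multimodal sentiment classification via
# feature-level (early) fusion of three modalities.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_utterances = 100

# Hypothetical per-utterance feature vectors for each modality.
linguistic = rng.normal(size=(n_utterances, 300))  # e.g., lexical features
acoustic = rng.normal(size=(n_utterances, 28))     # e.g., pitch/intensity statistics
visual = rng.normal(size=(n_utterances, 40))       # e.g., facial expression descriptors
labels = rng.integers(0, 2, size=n_utterances)     # 0 = negative, 1 = positive

# Early fusion: concatenate the modality features into one vector per utterance.
fused = np.concatenate([linguistic, acoustic, visual], axis=1)

# Train and evaluate a linear SVM on the fused representation.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"Mean cross-validation accuracy: {scores.mean():.3f}")
```

The same fused vectors could instead feed any other classifier; the point of the sketch is only that each utterance is represented by the concatenation of its per-modality features before classification.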