Paper: Measuring Sentiment Annotation Complexity of Text

ACL ID P14-2007
Title Measuring Sentiment Annotation Complexity of Text
Venue Annual Meeting of the Association for Computational Linguistics
Session Main Conference
Year 2014
Authors

The effort required for a human annotator to detect sentiment is not uniform for all texts, irrespective of his/her expertise. We aim to predict a score that quantifies this effort, using linguistic properties of the text. Our proposed metric is called Sentiment Annotation Complexity (SAC). As for training data, since any direct judgment of complexity by a human annotator is fraught with subjectivity, we rely on cognitive evidence from eye-tracking. The sentences in our dataset are labeled with SAC scores derived from eye-fixation duration. Using linguistic features and annotated SACs, we train a regressor that predicts the SAC with a best mean error rate of 22.02% for five-fold cross-validation. We also study the correlation between a human annotator's perception of complexity...
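As a rough illustration of the setup the abstract describes, the sketch below trains a regressor on linguistic features to predict SAC scores and reports a mean error rate under five-fold cross-validation. It is a minimal sketch only: the placeholder feature matrix, the choice of SVR as the regressor, and the relative-error formula are assumptions for illustration, not the paper's actual features, model, or evaluation code.

```python
# Minimal sketch (not the authors' implementation): predict SAC from
# hypothetical linguistic features and score by mean error rate under
# five-fold cross-validation.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import KFold

# Hypothetical feature matrix: one row per sentence, columns standing in for
# linguistic features (e.g., sentence length, polysemy, discourse connectives).
rng = np.random.default_rng(0)
X = rng.random((100, 5))
# Placeholder gold SAC scores (in the paper these derive from eye-fixation duration).
y = rng.random(100)

fold_errors = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = SVR(kernel="rbf").fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    # Mean relative error (%) on the held-out fold; an assumed error definition.
    fold_errors.append(
        100 * np.mean(np.abs(pred - y[test_idx]) / np.maximum(y[test_idx], 1e-9))
    )

print(f"Mean error rate across folds: {np.mean(fold_errors):.2f}%")
```

With real features and gold SAC labels in place of the placeholders, the cross-validated mean error rate printed at the end is directly comparable to the 22.02% figure reported in the abstract.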