Paper: Baselines and Bigrams: Simple, Good Sentiment and Topic Classification

ACL ID P12-2018
Title Baselines and Bigrams: Simple, Good Sentiment and Topic Classification
Venue Annual Meeting of the Association for Computational Linguistics
Session Short Paper
Year 2012
Authors

Variants of Naive Bayes (NB) and Support Vector Machines (SVM) are often used as baseline methods for text classification, but their performance varies greatly depending on the model variant, the features used, and the task/dataset. We show that: (i) the inclusion of word bigram features gives consistent gains on sentiment analysis tasks; (ii) for short snippet sentiment tasks, NB actually does better than SVMs (while for longer documents the opposite result holds); (iii) a simple but novel SVM variant using NB log-count ratios as feature values consistently performs well across tasks and datasets. Based on these observations, we identify simple NB and SVM variants which outperform most published results on sentiment analysis datasets, sometimes providing a new state-of-the-art performance level.
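The SVM variant in point (iii) replaces raw word counts with NB log-count ratios: for each feature, the log of the ratio between its smoothed relative frequency in the positive class and in the negative class. A minimal sketch of that feature transform, using hypothetical helper names and binarized presence counts (a simplifying assumption; the SVM training step itself is omitted):

```python
import math
from collections import Counter

def log_count_ratios(pos_docs, neg_docs, alpha=1.0):
    """Compute r[w] = log((p_w / |p|) / (q_w / |q|)) per word,
    where p_w, q_w are smoothed binarized counts of word w in the
    positive and negative training documents."""
    vocab = {w for d in pos_docs + neg_docs for w in d}
    # Binarized counts: each document contributes at most 1 per word.
    p = Counter(w for d in pos_docs for w in set(d))
    q = Counter(w for d in neg_docs for w in set(d))
    p_total = sum(p[w] + alpha for w in vocab)
    q_total = sum(q[w] + alpha for w in vocab)
    return {w: math.log(((p[w] + alpha) / p_total) /
                        ((q[w] + alpha) / q_total))
            for w in vocab}

def nbsvm_features(doc, r):
    """Map a document to sparse features whose values are the
    log-count ratios of the words present in it."""
    return {w: r[w] for w in set(doc) if w in r}

# Toy example: words seen only with positive labels get r > 0,
# words seen only with negative labels get r < 0.
r = log_count_ratios([["good", "movie"], ["good", "fun"]],
                     [["bad", "movie"], ["bad", "dull"]])
```

These transformed feature vectors would then be fed to an ordinary linear SVM; the same construction extends to bigram features by tokenizing documents into word pairs as well as single words.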