Paper: Augmenting The Kappa Statistic To Determine Interannotator Reliability For Multiply Labeled Data Points

ACL ID N04-4020
Title Augmenting The Kappa Statistic To Determine Interannotator Reliability For Multiply Labeled Data Points
Venue Human Language Technologies
Session Short Paper
Year 2004
Authors

This paper describes a method for evaluating interannotator reliability in an email corpus annotated for type (e.g., question, answer, social chat) when annotators are allowed to assign multiple labels to a message. An augmentation to Cohen's kappa statistic is proposed which permits all data to be included in the reliability measure and which further permits the identification of more or less reliably annotated data points.
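For reference, the statistic being augmented is the standard single-label Cohen's kappa, κ = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is chance agreement. The sketch below is a minimal Python illustration of that baseline computation only; it does not reproduce the paper's multi-label augmentation, and the example labels and function name are hypothetical.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Standard (single-label) Cohen's kappa for two annotators.

    labels_a, labels_b: equal-length sequences, one label per annotated item.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)

    # Observed agreement: fraction of items on which the annotators agree.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n

    # Chance agreement: product of each annotator's marginal label
    # probabilities, summed over all labels either annotator used.
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(counts_a) | set(counts_b))

    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two annotators labeling five email messages by type.
a = ["question", "answer", "social", "question", "answer"]
b = ["question", "answer", "question", "question", "social"]
print(cohens_kappa(a, b))  # agreement corrected for chance
```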