Paper: Neutralizing Linguistically Problematic Annotations in Unsupervised Dependency Parsing Evaluation

ACL ID P11-1067
Title Neutralizing Linguistically Problematic Annotations in Unsupervised Dependency Parsing Evaluation
Venue Annual Meeting of the Association for Computational Linguistics
Session Main Conference
Year 2011
Authors

Abstract
Dependency parsing is a central NLP task. In this paper we show that the common evaluation for unsupervised dependency parsing is highly sensitive to problematic annotations. We show that for three leading unsupervised parsers (Klein and Manning, 2004; Cohen and Smith, 2009; Spitkovsky et al., 2010a), a small set of parameters can be found whose modification yields a significant improvement in standard evaluation measures. These parameters correspond to local cases where no linguistic consensus exists as to the proper gold annotation. Therefore, the standard evaluation does not provide a true indication of algorithm quality. We present a new measure, Neutral Edge Direction (NED), and show that it greatly reduces this undesired phenomenon.
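
To make the contrast between these evaluation measures concrete, the following minimal Python sketch scores a predicted dependency tree against a gold tree under the directed attachment score, the undirected attachment score, and a NED-style criterion. The abstract does not define NED, so the criterion implemented here (a predicted head also counts as correct when it is the word's gold child or gold grandparent, which neutralizes a single edge-direction flip) is an assumption about the measure rather than a definition taken from this page; the function name, head-list encoding, and toy sentence are likewise illustrative.

def attachment_scores(gold_heads, pred_heads):
    """Compare two dependency trees over the same sentence.

    Each heads list maps word i (1-based) to its head index at
    position i-1; 0 denotes the artificial root. Returns the
    (directed, undirected, NED-style) attachment accuracies.
    """
    n = len(gold_heads)
    assert len(pred_heads) == n
    directed = undirected = ned = 0
    for i in range(1, n + 1):
        g, p = gold_heads[i - 1], pred_heads[i - 1]
        # Directed attachment: the predicted head is the gold head.
        dir_ok = p == g
        # Undirected attachment: also accept a flipped edge, i.e. the
        # predicted head is one of word i's gold children.
        child_ok = p != 0 and gold_heads[p - 1] == i
        # NED-style criterion (assumed definition): additionally accept
        # word i's gold grandparent, so that a single edge-direction
        # flip in the predicted tree costs nothing.
        grand_ok = g != 0 and p == gold_heads[g - 1]
        directed += dir_ok
        undirected += dir_ok or child_ok
        ned += dir_ok or child_ok or grand_ok
    return directed / n, undirected / n, ned / n


# Toy example: gold tree is root -> "barks" -> "dog"; the parser flips
# the edge, producing root -> "dog" -> "barks".
gold = [2, 0]  # word 1 ("dog") headed by word 2 ("barks"), the root
pred = [0, 1]  # parser roots "dog" and attaches "barks" beneath it
print(attachment_scores(gold, pred))  # -> (0.0, 0.5, 1.0)

In this toy example, a single flipped edge drives the directed score to 0.0, is half-forgiven by the undirected score, and costs nothing under the NED-style criterion: precisely the kind of edge-direction disagreement, common where no annotation consensus exists, that the abstract argues should not dominate evaluation.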