Paper: Crowdsourcing Inference-Rule Evaluation

ACL ID P12-2031
Title Crowdsourcing Inference-Rule Evaluation
Venue Annual Meeting of the Association for Computational Linguistics
Session Short Paper
Year 2012
Authors

The importance of inference rules to semantic applications has long been recognized and extensive work has been carried out to automatically acquire inference-rule resources. However, evaluating such resources has turned out to be a non-trivial task, slowing progress in the field. In this paper, we suggest a framework for evaluating inference-rule resources. Our framework simplifies a previously proposed "instance-based evaluation" method that involved substantial annotator training, making it suitable for crowdsourcing. We show that our method produces a large number of annotations with high inter-annotator agreement, at low cost and in a short period of time, without requiring the training of expert annotators.
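The abstract reports high inter-annotator agreement among crowd workers but does not state here how agreement is measured. The sketch below is a minimal illustration, assuming agreement is computed with Fleiss' kappa over per-item label counts; the metric choice, the function name fleiss_kappa, and the example judgments are assumptions for illustration, not details taken from the paper.

# Minimal sketch: Fleiss' kappa over crowdsourced rule-application judgments.
# Hypothetical data: each row is one rule application (item), each column a
# label category (e.g. "valid" / "invalid"), and each cell counts how many
# crowd workers chose that label for that item.

def fleiss_kappa(counts):
    """counts: list of rows, each row a list of per-category rater counts.
    Every item must be judged by the same number of raters."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    n_total = n_items * n_raters

    # Observed agreement per item, averaged over items.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_items

    # Chance agreement from the marginal category proportions.
    n_cats = len(counts[0])
    p_j = [sum(row[j] for row in counts) / n_total for j in range(n_cats)]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)

if __name__ == "__main__":
    # 5 hypothetical items, 5 workers each, categories = [valid, invalid].
    judgments = [[5, 0], [4, 1], [4, 1], [1, 4], [0, 5]]
    print("Fleiss' kappa = %.3f" % fleiss_kappa(judgments))  # ~0.513

Values above about 0.6 are commonly read as substantial agreement; the example data yields a moderate value purely for illustration.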