Paper: Training Dependency Parser Using Light Feedback

ACL ID N12-1053
Title Training Dependency Parser Using Light Feedback
Venue Annual Conference of the North American Chapter of the Association for Computational Linguistics
Session Main Conference
Year 2012
Authors

We introduce lightly supervised learning for dependency parsing. In this paradigm, the algorithm is initialized with a parser, such as one built from a very limited amount of fully annotated training data. The algorithm then iterates over unlabeled sentences and asks only for a single bit of feedback, rather than a full parse tree. Specifically, given an example, the algorithm outputs two possible parse trees and receives only a single bit indicating which of the two alternatives has more correct edges. There is no direct information about the correctness of any individual edge. We show on dependency parsing tasks in 14 languages that with only 1% of fully labeled data, and light feedback on the remaining 99% of the training data, our algorithm achieves, on average, only 5% lower p...
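The feedback loop described in the abstract can be sketched roughly as follows. This is a minimal, hypothetical illustration only: the names ToyParser, one_bit_feedback, and train_with_light_feedback are invented, and the candidate-generation and update rules are placeholders rather than the paper's actual parser or learning rule. The one property it mirrors is that the oracle reveals only which of two full trees is better, never the correctness of any individual edge.

import random

def one_bit_feedback(tree_a, tree_b, gold_heads):
    # Oracle feedback: a single bit saying which candidate tree has more
    # correct head attachments. No per-edge information is revealed.
    correct_a = sum(1 for i, h in tree_a.items() if gold_heads[i] == h)
    correct_b = sum(1 for i, h in tree_b.items() if gold_heads[i] == h)
    return "a" if correct_a >= correct_b else "b"

class ToyParser:
    # Hypothetical stand-in for a parser seeded on ~1% of fully labeled data.
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.head_for = {}  # word -> preferred head position

    def two_candidates(self, sentence):
        # Produce two alternative trees, each a map from token position to
        # head position (0 = root); a real parser would derive both from
        # its current model rather than by random perturbation.
        base = {i: self.head_for.get(w, 0) for i, w in enumerate(sentence, 1)}
        alt = dict(base)
        j = self.rng.randrange(1, len(sentence) + 1)
        alt[j] = self.rng.randrange(0, len(sentence) + 1)
        return base, alt

    def update(self, sentence, winner):
        # Move the model toward the candidate that the single bit preferred.
        for i, w in enumerate(sentence, 1):
            self.head_for[w] = winner[i]

def train_with_light_feedback(parser, sentences, gold_heads, epochs=5):
    # Iterate over (nominally unlabeled) sentences, asking only for one bit
    # of feedback per example instead of a full parse tree.
    for _ in range(epochs):
        for sent, heads in zip(sentences, gold_heads):
            tree_a, tree_b = parser.two_candidates(sent)
            bit = one_bit_feedback(tree_a, tree_b, heads)
            parser.update(sent, tree_a if bit == "a" else tree_b)
    return parser

# Toy usage: one two-word sentence with gold heads {1: 2, 2: 0}.
parser = train_with_light_feedback(ToyParser(), [["dogs", "bark"]], [{1: 2, 2: 0}])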