Paper: Vine Pruning for Efficient Multi-Pass Dependency Parsing

ACL ID N12-1054
Title Vine Pruning for Efficient Multi-Pass Dependency Parsing
Venue Annual Conference of the North American Chapter of the Association for Computational Linguistics
Session Main Conference
Year 2012
Authors

Coarse-to-fine inference has been shown to be a robust approximate method for improving the efficiency of structured prediction models while preserving their accuracy. We propose a multi-pass coarse-to-fine architecture for dependency parsing using linear-time vine pruning and structured prediction cascades. Our first-, second-, and third-order models achieve accuracies comparable to those of their unpruned counterparts, while exploring only a fraction of the search space. We observe speed-ups of up to two orders of magnitude compared to exhaustive search. Our pruned third-order model is twice as fast as an unpruned first-order model and also compares favorably to a state-of-the-art transition-based parser for multiple languages.
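To make the multi-pass idea concrete, the sketch below illustrates (in Python, not the authors' code) how a coarse-to-fine cascade of this general shape can be organized: a cheap first pass keeps only short "vine" arcs plus root attachments, and each subsequent pass rescores the surviving arcs with a richer model, discarding low-scoring candidates. The function names, the margin-based pruning rule, and the scorer interface are illustrative assumptions, not the paper's trained cascade.

```python
# Illustrative sketch of a vine-pruning coarse-to-fine cascade.
# All names and the pruning rule are hypothetical placeholders.
from typing import Callable, Dict, List, Set, Tuple

Arc = Tuple[int, int]  # (head, modifier); head 0 is the artificial root


def vine_candidate_arcs(n_words: int, max_len: int) -> Set[Arc]:
    """First pass: keep arcs of length <= max_len, plus all arcs from the root."""
    arcs: Set[Arc] = set()
    for head in range(0, n_words + 1):
        for mod in range(1, n_words + 1):
            if head == mod:
                continue
            if head == 0 or abs(head - mod) <= max_len:
                arcs.add((head, mod))
    return arcs


def prune_pass(arcs: Set[Arc],
               score: Callable[[Arc], float],
               keep_margin: float) -> Set[Arc]:
    """Keep arcs scoring within keep_margin of the best arc for each modifier."""
    scores: Dict[Arc, float] = {arc: score(arc) for arc in arcs}
    best: Dict[int, float] = {}
    for (head, mod), s in scores.items():
        if mod not in best or s > best[mod]:
            best[mod] = s
    return {arc for arc, s in scores.items() if s >= best[arc[1]] - keep_margin}


def coarse_to_fine(n_words: int,
                   passes: List[Callable[[Arc], float]],
                   max_len: int = 3,
                   keep_margin: float = 1.0) -> Set[Arc]:
    """Run the cascade: vine pruning first, then successively richer scorers."""
    arcs = vine_candidate_arcs(n_words, max_len)
    for score in passes:
        arcs = prune_pass(arcs, score, keep_margin)
    return arcs
```

In this framing, `passes` would hold scoring functions of increasing order (and cost), so that the expensive later models only ever examine the small arc set that survives the cheap earlier passes; the final, richest model then parses over the remaining candidates.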