Source Paper  Year  Line  Sentence
P13-1104 2013 244
The Time column shows how many seconds per sentence each parser takes.

Approach                        UAS    LAS    Time
Zhang and Clark (2008)          92.1
Huang and Sagae (2010)          92.1          0.04
Zhang and Nivre (2011)          92.9   91.8   0.03
Bohnet and Nivre (2012)         93.38  92.44  0.4
McDonald et al (2005)           90.9
McDonald and Pereira (2006)     91.5
Sagae and Lavie (2006)          92.7
Koo and Collins (2010)          93.04
Zhang and McDonald (2012)       93.06  91.86
Martins et al (2010)            93.26
Rush et al (2010)               93.8
Koo et al (2008)                93.16
Carreras et al (2008)           93.54
Bohnet and Nivre (2012)         93.67  92.68
Suzuki et al (2009)             93.79
bt = 80, bd = 80, m = 0.88      92.96  91.93  0.009
bt = 80, bd = 64, m = 0.88      92.96  91.93  0.009
bt = 80, bd = 32, m = 0.88      92.96  91.94  0.009
bt = 80, bd = 16, m = 0.88      92.96  91.94  0.008
bt = 80, bd = 8, m = 0.88       92.89  91.87  0.006
bt = 80, bd = 4, m = 0.88       92.76  91.76  0.004
bt = 80, bd = 2, m = 0.88       92.56  91.54  0.003
bt = 80, bd = 1, m = 0.88       92.26  91.25  0.002
bt = 1, bd = 1, m = 0.88        92.06  91.05  0.002

Table 4: Parsing accuracies and speeds on the English evaluation set, excluding tokens containing only punctuation.
P13-2109 2013 8
Third-order features have also been included in transition systems (Zhang and Nivre, 2011) and graph-based parsers with cube-pruning (Zhang and McDonald, 2012)
P13-2109 2013 151
includes the most accurate parsers among Nivre et al (2006), McDonald et al (2006), Martins et al (2010, 2011), Koo et al (2010), Rush and Petrov (2012), Zhang and McDonald (2012)
P13-2109 2013 132
Zhang and McDonald (2012)        93.06  220
This work (PTB-S §23, 3rd ord)   92.82  604
Rush and Petrov (2012)           92.7   4,460

Table 2: Results for the projective English dataset. We report unlabeled attachment scores (UAS) ignoring punctuation, and parsing speeds in tokens per second.
P14-1019 2014 49
Its main disadvantage is that the output parse can only be one of the few parses passed to the reranker. Recent work has focused on more powerful inference mechanisms that consider the full search space (Zhang and McDonald, 2012; Rush and Petrov, 2012; Koo et al, 2010; Huang, 2008)
P14-1019 2014 288
includes the most accurate parsers among Nivre et al (2006), McDonald et al (2006), Martins et al (2010), Martins et al (2011), Martins et al (2013), Koo et al (2010), Rush and Petrov (2012), Zhang and McDonald (2012) and Zhang et al (2013)
P14-1043 2014 11
For example, Koo and Collins (2010) and Zhang and McDonald (2012) show that incorporating higher-order features into a graph-based parser only leads to a modest increase in parsing accuracy
P14-1043 2014 204
Approach                                      Sup    Semi
McDonald and Pereira (2006)                   91.5
Koo and Collins (2010) [higher-order]         93.04
Zhang and McDonald (2012) [higher-order]      93.06
Zhang and Nivre (2011) [higher-order]         92.9
Koo et al (2008) [higher-order]               92.02  93.16
Chen et al (2009) [higher-order]              92.40  93.16
Suzuki et al (2009) [higher-order, cluster]   92.70  93.79
Zhou et al (2011) [higher-order]              91.98  92.64
Chen et al (2013) [higher-order]              92.76  93.77
This work                                     92.34  93.19

Table 4: UAS comparison on English test data
P14-1043 2014 212
Zhang and McDonald (2012) explore higher-order features for graph-based dependency parsing, and adopt beam search for fast decoding
P14-1130 2014 38
While in most state-of-the-art parsers, features are selected manually (McDonald et al, 2005a; McDonald et al, 2005b; Koo and Collins, 2010; Martins et al, 2013; Zhang and McDonald, 2012a; Rush and Petrov, 2012a), automatic feature selection methods are gaining popularity (Martins et al, 2011b; Ballesteros and Nivre, 2012; Nilsson and Nugues, 2010; Ballesteros, 2013)
P14-1130 2014 226
(2006), McDonald et al (2006), Martins et al (2010), Martins et al (2011a), Martins et al (2013), Koo et al (2010), Rush and Petrov (2012b), Zhang and McDonald (2012b) and Zhang et al (2013). and Şensoy, 2013), learned from raw data (Globerson et al, 2007; Maron et al, 2010)
P14-2107 2014 27
Our starting point is the cube-pruned dependency parsing model of Zhang and McDonald (2012) (self citation)
P14-2107 2014 82
More details of these features are described in Zhang and McDonald (2012) (self citation)
P14-2107 2014 5
This has led to work on approximate inference, typically via pruning (Bergsma and Cherry, 2010; Rush and Petrov, 2012; He et al, 2013). Recently, it has been shown that cube-pruning (Chiang, 2007) can efficiently introduce higher-order dependencies in graph-based parsing (Zhang and McDonald, 2012) (self citation)
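The cube-pruning idea cited in the sentence above can be illustrated with a minimal sketch of the general technique from Chiang (2007) (an illustration under our own simplifying assumptions, not the cited parser's actual code): given two descending-sorted k-best score lists from two sub-structures, a heap-based frontier recovers the top-k combined scores without enumerating all k-squared pairs.

```python
import heapq

def cube_prune(left, right, k):
    """Top-k sums left[i] + right[j] from two descending-sorted k-best
    score lists, exploring only a small frontier of (i, j) pairs."""
    results = []
    seen = {(0, 0)}
    heap = [(-(left[0] + right[0]), 0, 0)]  # max-heap via negated scores
    while heap and len(results) < k:
        neg, i, j = heapq.heappop(heap)
        results.append(-neg)
        # Expand the two neighbors of the popped cell in the score grid.
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(left) and nj < len(right) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (-(left[ni] + right[nj]), ni, nj))
    return results
```

For example, `cube_prune([5, 3, 1], [4, 2, 0], 3)` returns `[9, 7, 7]` after visiting only a handful of the nine possible pairs; in a parser the scores would be partial-parse scores rather than plain numbers.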
P14-2107 2014 33
Instead of storing all signatures, Zhang and McDonald (2012) store the current k-best in a beam (self citation)
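The beam described in the sentence above — keeping only the current k-best hypotheses rather than one entry per distinct signature — can be sketched as follows (a hypothetical helper for illustration, not the authors' code):

```python
def beam_prune(hypotheses, k):
    """Keep only the k highest-scoring hypotheses.

    `hypotheses` is a list of (score, signature) pairs; an exact dynamic
    program would instead retain the best entry for every distinct
    signature, which can be far more than k entries.
    """
    return sorted(hypotheses, key=lambda h: h[0], reverse=True)[:k]
```

For example, `beam_prune([(1.0, 'sigA'), (3.0, 'sigB'), (2.0, 'sigC')], 2)` keeps only `('sigB', ...)` and `('sigC', ...)`'s entries, discarding the lowest-scoring signature outright.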