Source Paper | Year | Line | Sentence
W11-2506 2011 29
These works adhere to a fully latent representation of meaning, whereas Hartung and Frank (2010) assign symbolic attribute meanings to adjectives, nouns and composed phrases by incorporating attributes as dimensions in a compositional VSM (self citation)
W11-2506 2011 37
Integrating topic distributions inferred from LDA into a VSM alleviates sparsity problems that persisted with the pattern-based VSM of Hartung and Frank (2010) (self citation)
W11-2506 2011 59
This is in line with the findings of Hartung and Frank (2010), who obtained substantial performance improvements by splitting the triples into separate binary relations (self citation)
W11-2506 2011 141
Hartung and Frank (2010)). The results of this analysis are displayed in Table 3 (self citation)
D11-1050 2011 147
PATTSVM is reconstructed from Hartung and Frank (2010) (self citation)
D11-1050 2011 10
An example of the latter are topic models (Blei et al., 2003), which have recently been applied to modeling selectional preferences of verbs (Ritter et al., 2010; Ó Séaghdha, 2010), or word sense disambiguation (Li et al., 2010). A topic that is increasingly studied in distributional semantics is the semantics of adjectives, both in isolation (Almuhareb, 2006) and in compositional adjective-noun phrases (Hartung and Frank, 2010; Guevara, 2010; Baroni and Zamparelli, 2010) (self citation)
D11-1050 2011 15
Hartung and Frank (2010) were the first to model this insight in a VSM by representing the meaning of adjectives and nouns in semantic vectors defined over attributes (self citation)
D11-1050 2011 155
The first experiment is conducted on the data set used in Hartung and Frank (2010) (self citation)
D11-1050 2011 189
The LDAESel,+ models outperform the PATTVSMESel,+ model of Hartung and Frank (2010) by a high margin in f-score: +0.14 for C-LDA; +0.08 for L-LDA (self citation)
D11-1050 2011 23
Through this combination, we attempt to improve on earlier work in Almuhareb (2006) and Hartung and Frank (2010), which are both embedded in a purely distributional setting (self citation)
D11-1050 2011 25
Following Hartung and Frank (2010), this model is embedded into a VSM that employs vector composition to combine the meaning of adjectives and nouns (self citation)
D11-1050 2011 27
Both will be presented in detail in Section 3. Our aims in this paper are two-fold: (i) We investigate LDA as a modeling framework in the attribute selection task, as its use of topics as latent variables may alleviate inherent sparsity problems faced by prior work using pattern-based (Almuhareb, 2006) or vector space models (Hartung and Frank, 2010) (self citation)
D11-1050 2011 28
(ii) While these prior approaches were restricted to a confined set of 10 attributes, we will apply our [footnote 1: The figure is adopted from the distributional setting of Hartung and Frank (2010), with component values defined by pattern frequency counts for the chosen attribute nouns] (self citation)
D11-1050 2011 125
For integrating the information obtained from C-LDA or L-LDA into a distributional VSM, we follow Hartung and Frank (2010): Adjectives and nouns are modeled as independent semantic vectors along their relationship to attributes; the most prominent attribute(s) that represent the hidden meaning of adjective-noun phrases are selected from their composition (cf. (self citation)