Paper: Finding Good Enough: A Task-Based Evaluation of Query Biased Summarization for Cross-Language Information Retrieval

ACL ID D14-1073
Title Finding Good Enough: A Task-Based Evaluation of Query Biased Summarization for Cross-Language Information Retrieval
Venue Conference on Empirical Methods in Natural Language Processing
Session Main Conference
Year 2014
Authors

In this paper we present our task-based evaluation of query biased summarization for cross-language information retrieval (CLIR) using relevance prediction. We describe our 13 summarization methods, each from one of four summarization strategies. We show how well our methods perform using Farsi text from the CLEF 2008 shared task, which we translated to English automatically. We report precision/recall/F1, accuracy, and time-on-task. We found that different summarization methods perform optimally for different evaluation metrics, but overall query biased word clouds are the best summarization strategy. In our analysis, we demonstrate that using the ROUGE metric on our sentence-based summaries cannot make the same kinds of distinctions as our evaluation framework does. Finally,...
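
For readers unfamiliar with the reported metrics, the sketch below shows one plausible way precision, recall, F1, and accuracy could be computed from binary relevance judgments against a user's relevance predictions. It is a minimal illustration, not the authors' evaluation code; the function and variable names (prf1_accuracy, gold, predicted) are hypothetical.

# Illustrative sketch: precision/recall/F1 and accuracy from binary
# relevance labels. Not the paper's implementation.
def prf1_accuracy(gold, predicted):
    """gold, predicted: equal-length lists of 0/1 relevance labels."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, predicted) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 0)
    tn = sum(1 for g, p in zip(gold, predicted) if g == 0 and p == 0)

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    accuracy = (tp + tn) / len(gold) if gold else 0.0
    return precision, recall, f1, accuracy

# Example: relevance judgments for five summarized documents
print(prf1_accuracy([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))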