Paper: Multilingual Summarization Evaluation without Human Models

ACL ID C10-2122
Title Multilingual Summarization Evaluation without Human Models
Venue International Conference on Computational Linguistics
Session Poster Session
Year 2010
Authors

We study the correlation of rankings of text summarization systems produced by evaluation methods with and without human models. We apply our comparison framework to various well-established content-based evaluation measures in text summarization, such as Coverage, Responsiveness, Pyramids, and ROUGE, studying their associations in various text summarization tasks, including generic and focus-based multi-document summarization in English and generic single-document summarization in French and Spanish. The research is carried out using a new content-based evaluation framework called FRESA to compute a variety of divergences among probability distributions.
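The abstract's model-free evaluation idea rests on comparing probability distributions derived from a summary and its source text. As an illustrative sketch only (not the authors' FRESA implementation), the following computes the Jensen-Shannon divergence, one commonly used divergence for this purpose, between smoothed unigram distributions; the tokenization, smoothing constant, and example texts are assumptions for the demo:

```python
import math
from collections import Counter

def distribution(tokens, vocab, smoothing=1e-9):
    """Smoothed unigram probability distribution over a shared vocabulary."""
    counts = Counter(tokens)
    total = sum(counts.values()) + smoothing * len(vocab)
    return {w: (counts[w] + smoothing) / total for w in vocab}

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) in nats."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p if p[w] > 0)

def js_divergence(source_tokens, summary_tokens):
    """Jensen-Shannon divergence between source and summary distributions.

    Bounded by ln(2); lower values mean the summary's word
    distribution is closer to the source's.
    """
    vocab = set(source_tokens) | set(summary_tokens)
    p = distribution(source_tokens, vocab)
    q = distribution(summary_tokens, vocab)
    m = {w: 0.5 * (p[w] + q[w]) for w in vocab}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical toy texts, purely for illustration.
source = "the cat sat on the mat and then the cat slept".split()
summary = "the cat slept on the mat".split()
print(js_divergence(source, summary))
```

A summary is scored without any human reference: the divergence is taken directly against the source document, which is the key property of the model-free setting described above.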