Paper: An Automatic Method For Summary Evaluation Using Multiple Evaluation Results By A Manual Method

ACL ID P06-2078
Title An Automatic Method For Summary Evaluation Using Multiple Evaluation Results By A Manual Method
Venue Annual Meeting of the Association for Computational Linguistics
Session Poster Session
Year 2006
Authors

To solve the problem of how to evaluate computer-produced summaries, a number of automatic and manual methods have been proposed. Manual methods evaluate summaries accurately, because humans perform the evaluation, but they are costly. Automatic methods, which use evaluation tools or programs, are low cost, but they cannot evaluate summaries as accurately as manual methods. In this paper, we investigate an automatic evaluation method that reduces the errors of traditional automatic methods by exploiting several evaluation results obtained manually. We conducted experiments using the data of the Text Summarization Challenge 2 (TSC-2). A comparison with conventional automatic methods shows that our method outperforms the methods usually used.
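The abstract describes calibrating an automatic metric against a small set of manual evaluation results. The paper's exact procedure is not given here, so the following is only a minimal illustrative sketch: it assumes a generic automatic score (here, unigram overlap with a reference) and fits a simple least-squares linear map from automatic scores to manual scores, so that new summaries can be scored automatically on the manual scale. The function names and the data are hypothetical.

```python
def unigram_overlap(candidate: str, reference: str) -> float:
    """Toy automatic metric: fraction of reference unigrams found in the candidate."""
    cand = set(candidate.lower().split())
    ref = reference.lower().split()
    if not ref:
        return 0.0
    return sum(1 for w in ref if w in cand) / len(ref)

def fit_linear_calibration(auto_scores, manual_scores):
    """Fit manual ~ slope * auto + intercept by ordinary least squares."""
    n = len(auto_scores)
    mx = sum(auto_scores) / n
    my = sum(manual_scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(auto_scores, manual_scores))
    var = sum((x - mx) ** 2 for x in auto_scores)
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical calibration data: automatic scores paired with manual judgments.
auto_scores = [0.7, 0.5, 0.3]
manual_scores = [0.8, 0.6, 0.4]
slope, intercept = fit_linear_calibration(auto_scores, manual_scores)

# Predict a manual-scale score for a new summary's automatic score.
new_auto = 0.6
predicted_manual = slope * new_auto + intercept
```

With this toy data the fit is exact (slope 1.0, intercept 0.1), so an automatic score of 0.6 maps to a predicted manual score of 0.7; with real TSC-2 judgments the regression would instead absorb the systematic bias of the automatic metric.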