Paper: Learning to Model Domain-Specific Utterance Sequences for Extractive Summarization of Contact Center Dialogues

ACL ID C10-2046
Title Learning to Model Domain-Specific Utterance Sequences for Extractive Summarization of Contact Center Dialogues
Venue International Conference on Computational Linguistics
Session Poster Session
Year 2010
Authors

This paper proposes a novel extractive summarization method for contact center dialogues. We use a particular type of hidden Markov model (HMM) called Class Speaker HMM (CSHMM), which processes operator/caller utterance sequences of multiple domains simultaneously to model domain-specific utterance sequences and common (domain-wide) sequences at the same time. We applied the CSHMM to call summarization of transcripts in six different contact center domains and found that our method significantly outperforms competitive baselines based on the maximum coverage of important words using integer linear programming.
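To make the extractive setup concrete, below is a minimal, hypothetical sketch of HMM-based utterance labeling with Viterbi decoding: hidden states pair a content class (domain-specific vs. common) with the speaker, and utterances decoded into domain-specific states are extracted as the summary. This is not the paper's CSHMM or its training procedure; the state set, probabilities, and keyword-based emission score are all invented for illustration.

```python
import numpy as np

# Hypothetical states pairing a content class with the speaker.
STATES = ["domain/operator", "domain/caller", "common/operator", "common/caller"]

# Toy parameters in log space; in a real system these would be estimated
# from labeled multi-domain call transcripts.
log_init = np.log([0.4, 0.1, 0.4, 0.1])
log_trans = np.log([
    [0.4, 0.3, 0.2, 0.1],
    [0.3, 0.4, 0.1, 0.2],
    [0.2, 0.1, 0.4, 0.3],
    [0.1, 0.2, 0.3, 0.4],
])

def log_emission(state, utterance):
    """Crude stand-in for an utterance likelihood: domain states prefer
    utterances containing a (made-up) domain keyword; common states the rest."""
    has_keyword = any(w in utterance["text"] for w in ("refund", "invoice", "contract"))
    speaker_ok = state.endswith(utterance["speaker"])
    p = 0.6 if (("domain" in state) == has_keyword) else 0.4
    p *= 0.9 if speaker_ok else 0.1
    return np.log(p)

def viterbi(utterances):
    """Return the most likely state label for each utterance."""
    n, k = len(utterances), len(STATES)
    delta = np.full((n, k), -np.inf)   # best log score ending in each state
    back = np.zeros((n, k), dtype=int)  # backpointers
    for s in range(k):
        delta[0, s] = log_init[s] + log_emission(STATES[s], utterances[0])
    for t in range(1, n):
        for s in range(k):
            scores = delta[t - 1] + log_trans[:, s]
            back[t, s] = int(np.argmax(scores))
            delta[t, s] = scores[back[t, s]] + log_emission(STATES[s], utterances[t])
    path = [int(np.argmax(delta[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [STATES[s] for s in reversed(path)]

# Usage on a toy operator/caller transcript.
calls = [
    {"speaker": "caller", "text": "hello i am calling about a refund"},
    {"speaker": "operator", "text": "sure let me check"},
    {"speaker": "operator", "text": "your invoice shows the contract was cancelled"},
]
labels = viterbi(calls)
summary = [u["text"] for u, s in zip(calls, labels) if s.startswith("domain")]
print(summary)
```

The sketch only illustrates the decoding step; the paper's contribution lies in training such class/speaker states jointly over multiple domains so that domain-specific and domain-wide sequences are separated, which is not reproduced here.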