Paper: Incorporating Gesture And Gaze Into Multimodal Models Of Human-To-Human Communication

ACL ID N06-3001
Title Incorporating Gesture And Gaze Into Multimodal Models Of Human-To-Human Communication
Venue Human Language Technologies
Session Doctoral Consortium
Year 2006
Authors
  • Lei Chen (Purdue University, West Lafayette IN)

Structural information in language (e.g., sentence segmentation, speaker turns, and topic segmentation) is important for obtaining a better understanding of human communication. Human communication involves a variety of multimodal behaviors that signal both propositional content and structure, e.g., gesture, gaze, and body posture. These non-verbal signals have tight temporal and semantic links to spoken content. In my thesis, I am working on incorporating non-verbal cues into a multimodal model to better predict structural events and thereby further improve the understanding of human communication. This document summarizes some research results and describes my future research plan.