Paper: Unification-Based Multimodal Integration

ACL ID P97-1036
Title Unification-Based Multimodal Integration
Venue Annual Meeting of the Association for Computational Linguistics
Session Main Conference
Year 1997
Authors Michael Johnston, Philip R. Cohen, David McGee, Sharon L. Oviatt, James A. Pittman, Ira Smith

Recent empirical research has shown conclusive advantages of multimodal interaction over speech-only interaction for map-based tasks. This paper describes a multimodal language processing architecture which supports interfaces allowing simultaneous input from speech and gesture recognition. Integration of spoken and gestural input is driven by unification of typed feature structures representing the semantic contributions of the different modes. This integration method allows the component modalities to mutually compensate for each other's errors. It is implemented in QuickSet, a multimodal (pen/voice) system that enables users to set up and control distributed interactive simulations.
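
To make the central idea concrete, the following is a minimal illustrative sketch of feature-structure unification, not the QuickSet implementation described in the paper. It models feature structures as nested Python dictionaries, omits the type hierarchy of true typed feature structures for brevity, and the names (unify, speech, gesture) and the example command are hypothetical.

```python
# Illustrative sketch only: unification of (untyped) feature structures
# represented as nested dicts. Unification merges compatible structures
# and fails when atomic values conflict.

FAIL = object()  # sentinel marking a unification failure

def unify(fs1, fs2):
    """Unify two feature structures; return the merged structure or FAIL."""
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for key, value in fs2.items():
            if key in result:
                sub = unify(result[key], value)
                if sub is FAIL:
                    return FAIL           # conflicting substructures
                result[key] = sub
            else:
                result[key] = value       # feature present in only one structure
        return result
    return fs1 if fs1 == fs2 else FAIL    # atomic values must match exactly

if __name__ == "__main__":
    # Hypothetical semantic contributions of each mode for a command such as
    # "create a platoon" accompanied by a pen gesture marking a location.
    speech  = {"type": "create_unit", "object": {"unit": "platoon"}}
    gesture = {"type": "create_unit", "object": {"location": {"x": 42.1, "y": 17.6}}}
    print(unify(speech, gesture))
    # -> {'type': 'create_unit', 'object': {'unit': 'platoon', 'location': {...}}}
```

In this toy setting, each mode contributes a partial feature structure and unification combines them only when they are consistent, which is one way to picture how incompatible recognition hypotheses from speech and gesture can be filtered out so the modes compensate for each other's errors.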