Paper: Unsupervised Feature Learning for Visual Sign Language Identification

ACL ID P14-2061
Title Unsupervised Feature Learning for Visual Sign Language Identification
Venue Annual Meeting of the Association for Computational Linguistics
Session Main Conference
Year 2014
Authors

Prior research on language identification has focused primarily on text and speech. In this paper, we focus on the visual modality and present a method for identifying sign languages solely from short video samples. The method first learns features from unlabelled video data (unsupervised feature learning); using these features, it is then trained to discriminate between six sign languages (supervised learning). We ran experiments on short video samples involving 30 signers (about 6 hours in total). Using leave-one-signer-out cross-validation, our evaluation shows an average best accuracy of 84%. Given that sign languages are under-resourced, unsupervised feature learning techniques are the right tools, and our results indicate that this approach is realistic for sign language identification.
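
As a concrete illustration, the sketch below mirrors the two-stage pipeline the abstract describes: features are first learned from unlabelled data, and a classifier is then trained on those features and evaluated with leave-one-signer-out cross-validation. The K-means feature encoder, the logistic-regression classifier, and the synthetic data are illustrative assumptions for this sketch; the abstract does not specify the authors' exact learner or video descriptors.

    # Minimal sketch of the two-stage pipeline: unsupervised feature
    # learning on unlabelled samples, then supervised classification
    # evaluated leave-one-signer-out. All data below is synthetic.
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Stand-in data: 300 video samples, each reduced to a 256-dim raw
    # descriptor (e.g., flattened frame patches); 30 signers, 6 languages.
    X_raw = rng.normal(size=(300, 256))
    languages = rng.integers(0, 6, size=300)   # class label per sample
    signers = np.repeat(np.arange(30), 10)     # signer identity per sample

    # Stage 1: unsupervised feature learning. Cluster the unlabelled
    # descriptors and re-encode each sample as its distances to the
    # learned centroids (labels are never used in this stage).
    kmeans = MiniBatchKMeans(n_clusters=64, n_init=3, random_state=0)
    kmeans.fit(X_raw)
    X_features = kmeans.transform(X_raw)       # distance-to-centroid features

    # Stage 2: supervised learning, scored leave-one-signer-out so no
    # signer appears in both the training and the test fold.
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X_features, languages,
                             groups=signers, cv=LeaveOneGroupOut())
    print(f"mean accuracy over {len(scores)} held-out signers: {scores.mean():.2f}")

Leave-one-signer-out evaluation (here, LeaveOneGroupOut with signer identity as the group) is the key design choice: accuracy is always measured on a signer the classifier has never seen, so the model must discriminate between languages rather than memorise individual signers.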