Paper: Finding Good Sequential Model Structures using Output Transformations

ACL ID D07-1084
Title Finding Good Sequential Model Structures using Output Transformations
Venue Conference on Empirical Methods in Natural Language Processing
Session Main Conference
Year 2007
Authors
  • Edward Loper (University of Pennsylvania, Philadelphia PA)

In Sequential Viterbi Models, such as HMMs, MEMMs, and Linear Chain CRFs, the type of patterns over output sequences that can be learned by the model depends directly on the model's structure: any pattern that spans more output tags than are covered by the model's order will be very difficult to learn. However, increasing a model's order can lead to an increase in the number of model parameters, making the model more susceptible to sparse data problems. This paper shows how the notion of output transformation can be used to explore a variety of alternative model structures. Using output transformations, we can selectively increase the amount of contextual information available for some conditions, but not for others, thus allowing us to capture longer-distance consistencies...
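The core idea of an output transformation, as the abstract describes it, is to relabel output tags so that selected tags carry extra left-context, letting a low-order model capture longer-distance patterns for just those conditions. A minimal sketch of this idea follows; the tag inventory, the `CONTEXT_TAGS` set, and the `|`-separator encoding are illustrative assumptions, not the paper's actual scheme.

```python
# Sketch of an output transformation: augment only selected tags with
# their preceding tag, so a first-order model effectively sees
# second-order context for those tags alone. CONTEXT_TAGS and the
# tag names are hypothetical choices for illustration.

CONTEXT_TAGS = {"I"}  # tags whose labels get augmented with the previous tag


def transform(tags):
    """Relabel tags, attaching the previous tag to members of CONTEXT_TAGS."""
    out = []
    prev = "<s>"  # sentence-start symbol
    for t in tags:
        out.append(f"{t}|{prev}" if t in CONTEXT_TAGS else t)
        prev = t
    return out


def untransform(tags):
    """Invert the transformation, recovering the original tag sequence."""
    return [t.split("|")[0] for t in tags]


seq = ["B", "I", "I", "O", "B"]
enc = transform(seq)
# enc == ["B", "I|B", "I|I", "O", "B"]
assert untransform(enc) == seq
```

Because only the `I` tags are expanded, the transformed tag set grows by a few labels rather than squaring the whole inventory, which is the parameter-sparsity trade-off the abstract motivates.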