Paper: Learning Stochastic OT Grammars: A Bayesian Approach Using Data Augmentation And Gibbs Sampling

ACL ID P05-1043
Title Learning Stochastic OT Grammars: A Bayesian Approach Using Data Augmentation And Gibbs Sampling
Venue Annual Meeting of the Association for Computational Linguistics
Session Main Conference
Year 2005
Authors
  • Ying Lin (University of California at Los Angeles, Los Angeles CA)

Stochastic Optimality Theory (Boersma, 1997) is a widely-used model in linguistics that previously lacked a theoretically sound learning method. In this paper, a Markov chain Monte Carlo method is proposed for learning Stochastic OT grammars. Following a Bayesian framework, the goal is to find the posterior distribution of the grammar given the relative frequencies of input-output pairs. The Data Augmentation algorithm allows one to simulate a joint posterior distribution by iterating two conditional sampling steps. This Gibbs sampler constructs a Markov chain that converges to the joint distribution, and the target posterior can be obtained as its marginal distribution.
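To make the two-step scheme concrete, below is a minimal sketch of a Data Augmentation Gibbs sampler for a toy Stochastic OT grammar, not the paper's implementation. It assumes two constraints C1 and C2 and two candidates per input, with candidate "A" winning whenever the perturbed ranking value of C1 exceeds that of C2; the counts, the Gaussian prior with standard deviation TAU, and the rejection-sampling step for the truncated imputation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed relative frequencies of the two outputs.
counts = {"A": 70, "B": 30}
data = ["A"] * counts["A"] + ["B"] * counts["B"]

N_ITER = 2000
SIGMA = 1.0   # evaluation-noise s.d. (fixed, as is conventional in Stochastic OT)
TAU = 10.0    # s.d. of a vague Gaussian prior on the ranking values (assumed)

mu = np.zeros(2)   # ranking values of (C1, C2)
samples = []

for it in range(N_ITER):
    # Step 1: impute latent perturbed rankings z_i given mu and the data.
    # z_i ~ N(mu, SIGMA^2 I), truncated so that the observed winner wins
    # (z_i[0] > z_i[1] for "A", the reverse for "B"); simple rejection
    # sampling suffices for this toy case.
    z = np.empty((len(data), 2))
    for i, winner in enumerate(data):
        while True:
            cand = rng.normal(mu, SIGMA)
            if (cand[0] > cand[1]) == (winner == "A"):
                z[i] = cand
                break

    # Step 2: sample mu given the imputed z via the conjugate Gaussian
    # update: posterior precision = n/SIGMA^2 + 1/TAU^2, mean pulled
    # toward the componentwise average of the imputed rankings.
    n = len(data)
    post_var = 1.0 / (n / SIGMA**2 + 1.0 / TAU**2)
    post_mean = post_var * z.sum(axis=0) / SIGMA**2
    mu = rng.normal(post_mean, np.sqrt(post_var))
    samples.append(mu.copy())

# Discard burn-in; the retained draws approximate the marginal posterior
# over ranking values, i.e. the target distribution of the Gibbs sampler.
draws = np.array(samples[N_ITER // 2:])
print("posterior mean ranking values:", draws.mean(axis=0))
print("P(C1 >> C2 | data):", (draws[:, 0] > draws[:, 1]).mean())
```

Under this sketch, the posterior probability that C1 outranks C2 should settle near the observed 70/30 output frequency, illustrating how the marginal distribution of the grammar emerges from alternating the two conditional sampling steps.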