Regularizing Mono- and Bi-Word Models for Word Alignment

Thomas Schoenemann
Conditional probabilistic models for word alignment are popular because of the elegant way they can be handled during training. However, they have weaknesses such as garbage collection, and they scale poorly beyond single-word-based models (DeNero et al., 2006): not all parameters should actually be used. To alleviate this problem, in this paper we explore regularity terms that penalize the parameters that are actually used. These terms share the advantages of standard training in that the iterative schemes decompose over the…
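The abstract is truncated, so the paper's exact regularizer is not shown here, but the general idea of penalizing used parameters inside an iterative (EM-style) training scheme can be sketched on an IBM Model 1-style translation table. The sketch below is only an illustrative stand-in, not the paper's method: it applies a fixed discount to the expected counts in the M-step and clips at zero, so rarely used parameters are driven to exactly zero. The function name, the discount scheme, and the toy corpus are all assumptions for illustration.

```python
from collections import defaultdict

def em_model1(bitext, iterations=10, discount=0.0):
    """EM for a toy IBM Model 1 translation table t(f | e).

    `discount` subtracts a constant from each expected count in the
    M-step and clips at zero -- a simple MAP-style sparsity penalty
    that zeroes out rarely used parameters. This is an illustrative
    stand-in for a regularity term, not the paper's exact scheme.
    """
    tgt_vocab = {f for _, fs in bitext for f in fs}
    uniform = 1.0 / len(tgt_vocab)
    t = defaultdict(lambda: uniform)  # t[(e, f)] = P(f | e)

    for _ in range(iterations):
        counts = defaultdict(float)   # expected counts c(e, f)
        # E-step: fractional alignment counts under the current t-table
        for es, fs in bitext:
            for f in fs:
                z = sum(t[(e, f)] for e in es)
                if z == 0.0:          # all candidate parameters pruned
                    continue
                for e in es:
                    counts[(e, f)] += t[(e, f)] / z
        # M-step with discounting: counts below the discount vanish
        clipped = {ef: max(c - discount, 0.0) for ef, c in counts.items()}
        norm = defaultdict(float)
        for (e, f), c in clipped.items():
            norm[e] += c
        t = defaultdict(float)
        for (e, f), c in clipped.items():
            if norm[e] > 0.0:
                t[(e, f)] = c / norm[e]
    return dict(t)
```

With `discount=0.0` this is plain Model 1 EM; a positive discount can only shrink the set of nonzero parameters, which mirrors the motivation in the abstract that "not all parameters should actually be used".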

