Regularizing Mono- and Bi-Word Models for Word Alignment

@inproceedings{Schoenemann2011RegularizingMA,
  title={Regularizing Mono- and Bi-Word Models for Word Alignment},
  author={Thomas Schoenemann},
  booktitle={IJCNLP},
  year={2011}
}
Conditional probabilistic models for word alignment are popular because they can be handled elegantly in the training stage. However, they suffer from weaknesses such as garbage collection and scale poorly beyond single-word-based models (DeNero et al., 2006): not all parameters should actually be used. To alleviate this problem, in this paper we explore regularity terms that penalize the used parameters. They share the advantages of the standard training in that iterative schemes decompose over the…
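
The abstract does not spell out the regularity terms. As a rough illustrative sketch only (not necessarily the paper's exact formulation), a term that penalizes the number of translation parameters actually used can be written as an L0-style penalty added to the negative conditional log-likelihood, with a hypothetical trade-off weight \lambda and an Iverson-bracket indicator over the lexical parameters \theta(f \mid e):

  \min_{\theta}\; -\sum_{s=1}^{S} \log p_{\theta}\big(\mathbf{f}^{(s)} \mid \mathbf{e}^{(s)}\big) \;+\; \lambda \sum_{e,f} \big[\, \theta(f \mid e) > 0 \,\big]

Under this assumed form, the penalty decomposes over individual parameters, so an iterative (EM-style) training scheme can still treat each conditional distribution separately, which would be in line with the abstract's remark that the iterative schemes decompose.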
