Corpus ID: 218763440

Hidden Markov Chains, Entropic Forward-Backward, and Part-Of-Speech Tagging

@article{Azeraf2020HiddenMC,
  title={Hidden Markov Chains, Entropic Forward-Backward, and Part-Of-Speech Tagging},
  author={Elie Azeraf and Emmanuel Monfrini and Emmanuel Vignon and Wojciech Pieczynski},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.10629}
}
  • Elie Azeraf, Emmanuel Monfrini, Emmanuel Vignon, Wojciech Pieczynski
  • Published 2020
  • Mathematics, Computer Science
  • ArXiv
  • The ability to take into account the characteristics, also called features, of observations is essential in Natural Language Processing (NLP) problems. The Hidden Markov Chain (HMC) model associated with the classic Forward-Backward probabilities cannot handle arbitrary features like prefixes or suffixes of any size, except under an independence condition. For twenty years, this shortcoming has encouraged the development of other sequential models, starting with the Maximum Entropy Markov Model (MEMM…
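For context, the classic Forward-Backward computation the abstract refers to can be sketched as follows. This is a minimal illustrative implementation of the standard HMC posterior-marginal recursion, not the paper's Entropic Forward-Backward variant; all variable names (`pi`, `A`, `B`, `obs`) are our own.

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Classic Forward-Backward for a Hidden Markov Chain (illustrative sketch).

    pi:  initial state probabilities, shape (K,)
    A:   state transition matrix, shape (K, K)
    B:   emission probabilities, shape (K, V)
    obs: sequence of observation indices, length T
    Returns the posterior marginals p(z_t = k | x_1..x_T), shape (T, K).
    """
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))

    # Forward pass: alpha[t, k] = p(x_1..x_t, z_t = k)
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

    # Backward pass: beta[t, k] = p(x_{t+1}..x_T | z_t = k)
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    # Posterior marginals; in POS tagging, argmax over each row gives the tag.
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```

Note that the emission term `B[:, obs[t]]` is where the limitation discussed above appears: emissions index a single observation symbol, so arbitrary overlapping features of the observation (prefixes, suffixes of any size) cannot be folded in without an independence assumption.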
