Maximum Entropy Markov Models for Information Extraction and Segmentation

Abstract

Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial distributions over a discrete vocabulary, and the HMM parameters are set to maximize the likelihood of the observations. This paper presents a new Markovian sequence model, closely related to HMMs, that allows observations to be represented as arbitrary overlapping features (such as word, capitalization, formatting, part-of-speech), and defines the conditional probability of state sequences given observation sequences. It does this by using the maximum entropy framework to fit a set of exponential models that represent the probability of a state given an observation and the previous state. We present positive experimental results on the segmentation of FAQs.
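To make the abstract's central idea concrete, the sketch below illustrates one per-previous-state exponential model P(s | s', o) = (1/Z(o, s')) exp(Σ_a λ_a f_a(o, s)), normalized over next states s. The states (head/question/answer/tail) and feature predicates (contains-question-mark, begins-with-number, blank-line) mirror the paper's FAQ segmentation task, but the feature set and hand-set weights here are illustrative assumptions; the paper learns the weights with generalized iterative scaling rather than setting them by hand.

```python
import math
from collections import defaultdict

# Illustrative sketch of the MEMM's per-previous-state exponential model:
#   P(s | s', o) = (1 / Z(o, s')) * exp(sum_a lambda_a * f_a(o, s))
# Each f_a is a binary feature pairing an observation predicate with the
# candidate next state s. States and predicates follow the paper's FAQ
# segmentation task; the weights below are hand-set for illustration only.

STATES = ["head", "question", "answer", "tail"]

def features(obs, state):
    """Binary features: (observation predicate, candidate state) pairs."""
    feats = []
    if obs.get("begins_with_number"):
        feats.append(("begins_with_number", state))
    if obs.get("contains_question_mark"):
        feats.append(("contains_question_mark", state))
    if obs.get("blank_line"):
        feats.append(("blank_line", state))
    return feats

def next_state_distribution(prev_state, obs, weights):
    """P(s | s', o): one exponential model per previous state s'."""
    scores = {}
    for s in STATES:
        score = sum(weights[prev_state].get(f, 0.0) for f in features(obs, s))
        scores[s] = math.exp(score)
    z = sum(scores.values())  # Z(o, s') normalizes over next states
    return {s: v / z for s, v in scores.items()}

# Hypothetical hand-set weights (the paper fits these by maximum entropy).
weights = defaultdict(dict)
weights["head"][("contains_question_mark", "question")] = 2.0
weights["head"][("blank_line", "head")] = 1.0

obs = {"contains_question_mark": True, "blank_line": False}
print(next_state_distribution("head", obs, weights))
```

Because each model conditions on the observation rather than generating it, overlapping and non-independent features (word identity, capitalization, formatting, part-of-speech) can be combined freely, which is the key contrast with a generative HMM.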
