Recognising Textual Entailment with Robust Logical Inference

Abstract

We use logical inference techniques for recognising textual entailment, with theorem proving operating on deep semantic interpretations as the backbone of our system. However, the performance of theorem proving on its own turns out to be highly dependent on a wide range of background knowledge, which is not necessarily included in publicly available knowledge sources. We therefore achieve robustness via two extensions. Firstly, we incorporate model building, a technique borrowed from automated reasoning, and show that it is a useful and robust method for approximating entailment. Secondly, we use machine learning to combine these deep semantic analysis techniques with simple shallow word overlap. The resulting hybrid model achieves high accuracy on the RTE test set relative to the state of the art. Our results also show that the various techniques we employ perform very differently on some subsets of the RTE corpus; as a result, it is useful to use the nature of the dataset as a feature.
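The shallow component mentioned in the abstract is a word-overlap measure between the text and the hypothesis. A minimal sketch of such a measure is shown below; the exact definition used in the paper (e.g. its tokenisation, weighting, or normalisation) is not given in the abstract, so this is only an illustrative assumption.

```python
def word_overlap(text: str, hypothesis: str) -> float:
    """Illustrative shallow feature: fraction of hypothesis words
    that also occur in the text (case-insensitive).

    NOTE: this is a generic sketch, not the paper's exact measure.
    """
    text_words = set(text.lower().split())
    hyp_words = set(hypothesis.lower().split())
    if not hyp_words:
        return 0.0
    return len(hyp_words & text_words) / len(hyp_words)


# Example: a hypothesis fully covered by the text scores 1.0.
score = word_overlap("The cat sat on the mat.", "the cat sat")
```

In a hybrid system, a score like this would be one feature alongside the outputs of theorem proving and model building, with a machine-learned classifier combining them.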

DOI: 10.1007/11736790_23


Cite this paper

@inproceedings{Bos2005RecognisingTE,
  title={Recognising Textual Entailment with Robust Logical Inference},
  author={Johan Bos and Katja Markert},
  booktitle={MLCW},
  year={2005}
}