Open-Vocabulary Semantic Parsing with both Distributional Statistics and Formal Knowledge

Abstract

Traditional semantic parsers map language onto compositional, executable queries in a fixed schema. This mapping allows them to effectively leverage the information contained in large, formal knowledge bases (KBs, e.g., Freebase) to answer questions, but it is also fundamentally limiting: these semantic parsers can only assign meaning to language that falls within the KB's manually-produced schema. Recently proposed methods for open vocabulary semantic parsing overcome this limitation by learning execution models for arbitrary language, essentially using a text corpus as a kind of knowledge base. However, all prior approaches to open vocabulary semantic parsing replace a formal KB with textual information, making no use of the KB in their models. We show how to combine the disparate representations used by these two approaches, presenting for the first time a semantic parser that (1) produces compositional, executable representations of language, (2) can successfully leverage the information contained in both a formal KB and a large corpus, and (3) is not limited to the schema of the underlying KB. We demonstrate significantly improved performance over state-of-the-art baselines on an open-domain natural language question answering task.

Cite this paper

@inproceedings{Gardner2017OpenVocabularySP,
  title={Open-Vocabulary Semantic Parsing with both Distributional Statistics and Formal Knowledge},
  author={Matthew Gardner and Jayant Krishnamurthy},
  booktitle={AAAI},
  year={2017}
}