Information Extraction over Structured Data: Question Answering with Freebase

Abstract

Answering natural language questions using the Freebase knowledge base has recently been explored as a platform for advancing the state of the art in open domain semantic parsing. Those efforts map questions to sophisticated meaning representations, which systems then attempt to match against viable answer candidates in the knowledge base. Here we show that relatively modest information extraction techniques, when paired with a web-scale corpus, can outperform these sophisticated approaches by roughly 34% relative gain.
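To make the contrast concrete, the sketch below illustrates, in Python, the flavor of an information-extraction approach: instead of constructing a full meaning representation, it ranks candidate answer nodes in a Freebase topic graph by pairing simple question cues with relation names. This is a minimal illustration under assumptions of my own; the topic graph, feature templates, weight table, and function names (TOPIC_GRAPH, features, WEIGHTS, rank_candidates) are hypothetical stand-ins, not the paper's actual features or learned model.

    # Minimal sketch of an information-extraction style QA ranker over a
    # knowledge-base "topic graph". All data, feature templates, and weights
    # below are illustrative assumptions, not the paper's exact setup.

    from collections import Counter

    # Hypothetical topic graph for a single Freebase topic: each candidate
    # answer node is reachable from the topic via one relation.
    TOPIC_GRAPH = {
        "people.person.place_of_birth": ["Honolulu"],
        "people.person.spouse_s": ["Michelle Obama"],
        "government.politician.party": ["Democratic Party"],
    }

    def features(question_tokens, relation):
        """Pair the question word with fragments of the relation name
        (an assumed, deliberately simple feature template)."""
        rel_tokens = relation.replace(".", "_").split("_")
        qword = question_tokens[0]            # e.g. "where", "who"
        feats = Counter()
        for rt in rel_tokens:
            feats[f"qword={qword}|rel={rt}"] += 1
        return feats

    # Toy weights that a trained log-linear ranker might learn (assumed values).
    WEIGHTS = {
        "qword=where|rel=place": 2.0,
        "qword=where|rel=birth": 1.5,
        "qword=who|rel=spouse": 2.0,
    }

    def rank_candidates(question, topic_graph):
        """Score every candidate answer node in the topic graph against the question."""
        q_tokens = question.lower().rstrip("?").split()
        scored = []
        for relation, values in topic_graph.items():
            score = sum(WEIGHTS.get(f, 0.0) * count
                        for f, count in features(q_tokens, relation).items())
            for value in values:
                scored.append((score, relation, value))
        return sorted(scored, reverse=True)

    if __name__ == "__main__":
        for score, rel, answer in rank_candidates(
                "where was Barack Obama born?", TOPIC_GRAPH):
            print(f"{score:4.1f}  {rel:40s}  {answer}")

Running the sketch ranks the place_of_birth node above the other candidates for the example question, which is the shape of prediction the abstract refers to: answer selection as feature-based ranking over extracted candidates rather than full semantic parsing.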
