The Web as a Knowledge-Base for Answering Complex Questions

@inproceedings{Talmor2018TheWA,
  title={The Web as a Knowledge-Base for Answering Complex Questions},
  author={Alon Talmor and Jonathan Berant},
  booktitle={NAACL},
  year={2018}
}
Answering complex questions is a time-consuming activity for humans that requires reasoning and integration of information. [...] To illustrate the viability of our approach, we create a new dataset of complex questions, ComplexWebQuestions, and present a model that decomposes questions and interacts with the web to compute an answer. We empirically demonstrate that question decomposition improves performance from 20.8 precision@1 to 27.5 precision@1 on this new dataset.
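A minimal, hypothetical sketch of the decompose-and-recompose idea described above follows. The names answer_by_decomposition and answer_simple_question are illustrative, not the paper's implementation, and the decomposition itself is taken as given here (the paper learns it from data).

# Hypothetical sketch of answering a composition question by decomposition.
# `answer_simple_question` stands in for any black-box simple-QA component
# (e.g., a model that reads web snippets); it is not the paper's actual model.

def answer_by_decomposition(second_hop_template, first_hop_question,
                            answer_simple_question):
    """Answer the first-hop sub-question, substitute each candidate answer
    into the second-hop template (marked with the placeholder ANS), and pool
    the resulting second-hop answers."""
    answers = []
    for candidate in answer_simple_question(first_hop_question):
        answers.extend(
            answer_simple_question(second_hop_template.replace("ANS", candidate)))
    return answers

def precision_at_1(ranked_predictions, gold_answers):
    """Precision@1 for one question: 1.0 if the top-ranked prediction is a
    gold answer, else 0.0 (averaged over the dataset to get a reported score)."""
    return 1.0 if ranked_predictions and ranked_predictions[0] in gold_answers else 0.0

# Example usage with a hypothetical decomposition of a composition question:
# answer_by_decomposition("What city was ANS born in",
#                         "Who wrote the novel Moby-Dick",
#                         my_web_qa_model)

Substituting each first-hop answer into the second-hop template mirrors how composition questions are handled; conjunction questions would instead intersect the answer sets of the two sub-questions.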
Learning to Answer Complex Questions over Knowledge Bases with Query Composition
A KB-QA system, TextRay, is proposed that answers complex questions with a novel decompose-execute-join approach and a semantic matching model that learns simple queries from implicit question-answer supervision, eliminating the need for complex query patterns.
Complex Knowledge Base Question Answering: A Survey
Knowledge base question answering (KBQA) aims to answer a question over a knowledge base (KB). Early studies mainly focused on answering simple questions over KBs and achieved great success. However, [...]
Question answering over knowledge bases with continuous learning
This dissertation introduces NEQA, a framework for continuous learning for QA over KBs, and presents QUINT, an approach for answering natural language questions over knowledge bases using automatically learned templates.
A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges
The recent advances in complex QA are introduced, the methods of each branch are described, directions for future research are analyzed, and the models proposed by the Alime team are presented.
A Survey on Complex Knowledge Base Question Answering: Methods, Challenges and Solutions
This paper summarizes the typical challenges and solutions for complex KBQA and presents the two mainstream categories of methods, namely semantic parsing-based (SP-based) methods and information retrieval-based (IR-based) methods.
Answering Complex Questions by Combining Information from Curated and Extracted Knowledge Bases
A novel KB-QA system, Multique, is presented, which can map a complex question to a complex query pattern using a sequence of simple queries each targeted at a specific KB.
Answering Complex Open-domain Questions Through Iterative Query Generation
This work presents GoldEn (Gold Entity) Retriever, which iterates between reading context and retrieving more supporting documents to answer open-domain multi-hop questions, and demonstrates that it outperforms the best previously published model despite not using pretrained language models such as BERT (a rough sketch of this retrieve-and-read loop appears after this list).
Knowledge Base Question Answering via Encoding of Complex Query Graphs
This work encodes complex query structures into a uniform vector representation, thus capturing the interactions between individual semantic components within a complex question; it consistently outperforms existing methods on complex questions while staying competitive on simple questions.
KQA Pro: A Large Diagnostic Dataset for Complex Question Answering over Knowledge Base
This work introduces KQA Pro, a large-scale dataset for complex KBQA; it generates questions, SPARQL queries, and functional programs with recursive templates and paraphrases the questions via crowdsourcing, giving rise to around 120K diverse instances.
Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text
A novel model, GRAFT-Net, is proposed for extracting answers from a question-specific subgraph containing text and knowledge base entities and relations; it is competitive with the state of the art when tested using either KBs or text alone, and vastly outperforms existing methods in the combined setting.
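As a rough illustration of the iterative query generation described in the GoldEn Retriever entry above, the sketch below alternates retrieval and reading for a fixed number of hops. The functions generate_query, retrieve, and read_answer are assumed placeholder components (a query generator, a text retriever, and a reading-comprehension model), not the published system.

# Hypothetical retrieve-and-read loop for open-domain multi-hop QA.

def iterative_qa(question, generate_query, retrieve, read_answer,
                 num_hops=2, docs_per_hop=5):
    """At each hop, generate a search query from the question plus the context
    gathered so far, retrieve more supporting documents, and finally read an
    answer from the accumulated context."""
    context = []
    for _ in range(num_hops):
        query = generate_query(question, context)        # new query per hop
        context.extend(retrieve(query, k=docs_per_hop))  # add retrieved documents
    return read_answer(question, context)                # extract the final answer

The loop matters because documents needed for later hops often cannot be retrieved from the original question alone; they become findable only after an earlier hop surfaces a bridge entity.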

References

Showing 1-10 of 45 references
Evaluating Semantic Parsing against a Simple Web-based Question Answering Model
This paper proposes to evaluate semantic parsing-based question answering models by comparing them to a question answering baseline that queries the web and extracts the answer only from web snippets, without access to the target knowledge base, and finds that this approach achieves reasonable performance.
Answering Complicated Question Intents Expressed in Decomposed Question Sequences
This work collects a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total, and proposes strategies to handle questions that contain coreferences to previous questions or answers.
Question Answering from Unstructured Text by Retrieval and Comprehension
This work presents a two-step approach to question answering from unstructured text, consisting of a retrieval step and a comprehension step, featuring an RNN-based attention model with a novel mixture mechanism for selecting answers from either retrieved articles or a fixed vocabulary.
Semantic Parsing on Freebase from Question-Answer Pairs
This paper trains a semantic parser that scales up to Freebase and outperforms the state-of-the-art parser of Cai and Yates (2013) on their dataset, despite not having annotated logical forms.
Compositional Semantic Parsing on Semi-Structured Tables
This paper proposes a logical-form-driven parsing algorithm guided by strong typing constraints, shows that it obtains significant improvements over natural baselines, and makes its dataset publicly available.
Constraint-Based Question Answering with Knowledge Graph
A novel systematic KBQA approach for solving multi-constraint questions is proposed, which not only obtains comparable results on the two existing benchmark datasets, but also achieves significant improvements on ComplexQuestions.
SQuAD: 100,000+ Questions for Machine Comprehension of Text
A strong logistic regression model is built, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%); a minimal version of the token-level F1 metric used here is sketched after this list.
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
It is shown that, in comparison to other recently introduced large-scale datasets, TriviaQA has relatively complex, compositional questions, has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and requires more cross-sentence reasoning to find answers.
Ask the Right Questions: Active Question Reformulation with Reinforcement Learning
This work proposes an agent that sits between the user and a black-box QA system and learns to reformulate questions to elicit the best possible answers, and finds that successful question reformulations look quite different from natural language paraphrases.
The Value of Semantic Parse Labeling for Knowledge Base Question Answering
The value of collecting semantic parse labels for knowledge base question answering is demonstrated, and the largest semantic-parse labeled dataset to date is created and shared in order to advance research in question answering.
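The F1 score quoted in the SQuAD entry above is a token-overlap measure between a predicted answer span and a gold span. A minimal version, which skips the official normalization of articles and punctuation, might look like this:

from collections import Counter

def token_f1(prediction, gold):
    """Simplified SQuAD-style token-overlap F1 between a predicted and a gold
    answer string: lowercase, split on whitespace, count shared tokens."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Example: token_f1("the Eiffel Tower", "Eiffel Tower") is approximately 0.8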