Corpus ID: 44098100

KG^2: Learning to Reason Science Exam Questions with Contextual Knowledge Graph Embeddings

@article{Zhang2018KG2LT,
  title={KG^2: Learning to Reason Science Exam Questions with Contextual Knowledge Graph Embeddings},
  author={Yuyu Zhang and Hanjun Dai and Kamil Toraman and Le Song},
  journal={ArXiv},
  year={2018},
  volume={abs/1805.12393}
}
The AI2 Reasoning Challenge (ARC), a new benchmark dataset for question answering (QA), has recently been released. ARC contains only natural science questions authored for human exams, which are hard to answer and require advanced logical reasoning. On the ARC Challenge Set, existing state-of-the-art QA systems fail to significantly outperform a random baseline, reflecting the difficult nature of this task. In this paper, we propose a novel framework for answering science exam questions, which… 

ActKnow: Active External Knowledge Infusion Learning for Question Answering in Low Data Regime

TLDR
This work proposes ActKnow, a technique that actively infuses world knowledge from the ConceptNet Knowledge Graph (KG) “on-demand” into learning for Question Answering (QA), and shows significant improvements on the ARC Challenge Set benchmark.

Answering Science Exam Questions Using Query Rewriting with Background Knowledge

TLDR
A system that rewrites a given question into queries that are used to retrieve supporting text from a large corpus of science-related text is presented and is able to outperform several strong baselines on the ARC dataset.

Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering

TLDR
A novel knowledge-aware approach that equips pre-trained language models (PTLMs) with a multi-hop relational reasoning module, named multi-hop graph relation network (MHGRN), which performs multi-hop, multi-relational reasoning over subgraphs extracted from external knowledge graphs.
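To make the idea of multi-hop relational reasoning concrete, here is a minimal sketch in plain Python. The toy graph, relation names, and fixed per-relation weights are all invented for illustration; MHGRN itself uses learned relation-specific transforms and attention over relational paths, not hand-set decay weights.

```python
from collections import defaultdict

# Toy knowledge-graph subgraph as directed, typed edges: (head, relation, tail).
triples = [
    ("question", "mentions", "photosynthesis"),
    ("photosynthesis", "requires", "sunlight"),
    ("photosynthesis", "produces", "oxygen"),
    ("sunlight", "is_a", "energy"),
]

# Hand-set per-relation weights standing in for learned transforms.
rel_weight = {"mentions": 1.0, "requires": 0.8, "produces": 0.8, "is_a": 0.5}

def k_hop_scores(start, k):
    """Propagate a unit score from `start` for k hops, decaying by the
    relation weight at each edge — a crude stand-in for multi-hop,
    multi-relational message passing over the subgraph."""
    scores = defaultdict(float)
    frontier = {start: 1.0}
    for _ in range(k):
        nxt = defaultdict(float)
        for node, s in frontier.items():
            for h, r, t in triples:
                if h == node:
                    nxt[t] += s * rel_weight[r]
        for node, s in nxt.items():
            scores[node] += s
        frontier = nxt
    return dict(scores)
```

With `k=2`, `k_hop_scores("question", 2)` reaches `sunlight` and `oxygen` through `photosynthesis`, showing how entities two hops from the question accumulate evidence.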

Learning Contextualized Knowledge Structures for Commonsense Reasoning

TLDR
A novel neural-symbolic model is presented, named Hybrid Graph Network (HGN), which jointly generates feature representations for new triples, determines the relevance of the triples to the reasoning context, and learns graph module parameters for encoding the relational information.

Improving Retrieval-Based Question Answering with Deep Inference Models

TLDR
This proposed two-step model outperforms the best retrieval-based solver by over 3% in absolute accuracy and can answer both simple, factoid questions and more complex questions that require reasoning or inference.

Improving Question Answering by Commonsense-Based Pre-Training

TLDR
Results show that incorporating a commonsense-based pre-training function improves the baseline on three question answering tasks that require commonsense reasoning and leverages useful evidence from an external commonsense knowledge base, which is missing in existing neural network models.

Relation-aware Bidirectional Path Reasoning for Commonsense Question Answering

TLDR
This work proposes a relation-aware reasoning method that dynamically updates relations with contextual information from a multi-source subgraph, built from multiple external knowledge sources.

Careful Selection of Knowledge to Solve Open Book Question Answering

TLDR
This paper addresses QA with respect to the OpenBookQA dataset and combines state-of-the-art language models with abductive information retrieval (IR), information gain based re-ranking, passage selection and weighted scoring to achieve 72.0% accuracy.

Learning to Attend On Essential Terms: An Enhanced Retriever-Reader Model for Open-domain Question Answering

TLDR
This paper proposes a retriever-reader model that learns to attend on essential terms during the question answering process, and builds an essential term selector which first identifies the most important words in a question, then reformulates the query and searches for related evidence.
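As a rough illustration of the essential-term-selection step described above, here is a toy sketch. The real model learns term importance with a neural selector; this sketch approximates it with smoothed inverse document frequency plus a stopword filter over a tiny invented corpus, and every name and data item here is illustrative.

```python
import math
import re

# Tiny invented corpus standing in for a large science-text collection.
corpus = [
    "the sun is the main source of energy for earth",
    "plants use sunlight to make food through photosynthesis",
]

STOPWORDS = {"which", "what", "do", "does", "the", "a", "an", "of",
             "during", "is", "to"}

def idf(term, docs):
    # Smoothed inverse document frequency.
    df = sum(1 for d in docs if term in d.split())
    return math.log((1 + len(docs)) / (1 + df))

def essential_terms(question, docs, k=3):
    """Pick the k most informative question words, a crude stand-in
    for a learned essential-term selector."""
    tokens = [t for t in re.findall(r"[a-z]+", question.lower())
              if t not in STOPWORDS]
    # Rank by IDF, breaking ties alphabetically for determinism.
    return sorted(set(tokens), key=lambda t: (-idf(t, docs), t))[:k]

# Reformulated retrieval query built from the selected terms.
query = " ".join(essential_terms(
    "Which gas do plants release during photosynthesis?", corpus))
```

The reformulated query keeps only content-bearing terms such as "gas", "release", and "photosynthesis", which is the behavior the learned selector is trained to produce.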

References

SHOWING 1-10 OF 22 REFERENCES

Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge

TLDR
A new question set, text corpus, and baselines assembled to encourage AI research in advanced question answering constitute the AI2 Reasoning Challenge (ARC), which requires far more powerful knowledge and reasoning than previous challenges such as SQuAD or SNLI.

Question Answering via Integer Programming over Semi-Structured Knowledge

TLDR
This work proposes a structured inference system for this task, formulated as an Integer Linear Program (ILP), that answers natural language questions using a semi-structured knowledge base derived from text, including questions requiring multi-step inference and a combination of multiple facts.
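To give a feel for the joint-selection objective in the ILP formulation above, here is a toy sketch that brute-forces the same kind of objective instead of calling an ILP solver. The facts, question, options, scoring function, and sparsity penalty are all invented for illustration and much simpler than the actual system's constraints.

```python
from itertools import combinations

# Invented mini knowledge base and question.
facts = [
    "metals conduct electricity",
    "copper is a metal",
    "wood is an insulator",
]
question = "which metal conducts electricity"
options = ["copper", "wood"]

def overlap(a, b):
    return len(set(a.split()) & set(b.split()))

def score(option, chosen):
    # Objective: reward facts overlapping both the question and the
    # candidate answer (approximating a question-to-answer chain),
    # minus a small sparsity penalty mimicking the ILP's support-size
    # constraints.
    return (sum(overlap(question, f) + overlap(option, f) for f in chosen)
            - 0.1 * len(chosen))

# Jointly choose the answer option and its supporting fact subset.
best_option, best_facts = max(
    ((opt, subset)
     for opt in options
     for r in range(1, len(facts) + 1)
     for subset in combinations(facts, r)),
    key=lambda pair: score(*pair),
)
```

The search picks "copper" supported by the two facts that chain the question to the answer, which is the multi-fact inference behavior the ILP is designed to capture at scale.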

Answering Complex Questions Using Open Information Extraction

TLDR
This work develops a new inference model for Open IE that can work effectively with multiple short facts, noise, and the relational structure of tuples, and significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty.

SQuAD: 100,000+ Questions for Machine Comprehension of Text

TLDR
A strong logistic regression model is built, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%).

SciTaiL: A Textual Entailment Dataset from Science Question Answering

TLDR
A new dataset and model for textual entailment, derived from treating multiple-choice question-answering as an entailment problem, is presented, and it is demonstrated that one can improve accuracy on SciTaiL by 5% using a new neural model that exploits linguistic structure.

What’s in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams

TLDR
This work develops an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges, and compares a retrieval and an inference solver on 212 questions.

Markov Logic Networks for Natural Language Question Answering

TLDR
The experiments, demonstrating a 15% accuracy boost and a 10x reduction in runtime, suggest that the flexibility and different inference semantics of Praline are a better fit for the natural language question answering task.

Bidirectional Attention Flow for Machine Comprehension

TLDR
The BIDAF network is introduced, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization.
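The two attention directions at the core of BiDAF can be sketched in a few lines. This toy version uses 2-d "embeddings" and plain dot-product similarity; the real network uses a learned trilinear similarity function and feeds the attended vectors into further modeling layers, so everything below is illustrative only.

```python
import math

# Toy 2-d "embeddings" for 3 context tokens and 2 query tokens.
context = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
query   = [[1.0, 0.0], [0.0, 1.0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Similarity matrix S[i][j] between context token i and query token j.
S = [[dot(c, q) for q in query] for c in context]

# Context-to-query attention: each context token summarizes the query.
c2q = []
for row in S:
    a = softmax(row)
    c2q.append([sum(a[j] * query[j][d] for j in range(len(query)))
                for d in range(len(query[0]))])

# Query-to-context attention: weight context tokens by their best
# query match, yielding a single query-aware context summary.
b = softmax([max(row) for row in S])
q2c = [sum(b[i] * context[i][d] for i in range(len(context)))
       for d in range(len(context[0]))]
```

Note that no early summarization happens: every context position keeps its own query-aware vector in `c2q`, which is the property the TLDR highlights.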

GAKE: Graph Aware Knowledge Embedding

TLDR
This paper proposes a graph aware knowledge embedding method (GAKE), which formulates knowledge base as a directed graph, and learns representations for any vertices or edges by leveraging the graph’s structural information.
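GAKE's key move is to learn embeddings from a graph's structural contexts rather than from isolated triples. The sketch below only builds two such contexts (outgoing neighbors and fixed-length paths) for an invented toy graph; the skip-gram-style embedding objective trained on these contexts is omitted, and all node and relation names are illustrative.

```python
# Toy directed knowledge graph: node -> list of (relation, target).
graph = {
    "copper": [("is_a", "metal")],
    "metal": [("conducts", "electricity")],
    "electricity": [],
}

def neighbor_context(node):
    """Outgoing (relation, target) pairs of a vertex — the kind of
    local structural context GAKE draws on."""
    return list(graph[node])

def path_context(node, length=2):
    """All alternating relation/entity paths of `length` edges
    starting at `node`."""
    paths = []
    def walk(cur, path):
        if len(path) == 2 * length:
            paths.append(tuple(path))
            return
        for rel, nxt in graph[cur]:
            walk(nxt, path + [rel, nxt])
    walk(node, [])
    return paths
```

For example, `path_context("copper")` recovers the chain through `metal` to `electricity`, the sort of structural information a triple-by-triple embedding would miss.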

Combining Retrieval, Statistics, and Inference to Answer Elementary Science Questions

TLDR
This paper describes an alternative approach that operates at three levels of representation and reasoning: information retrieval, corpus statistics, and simple inference over a semi-automatically constructed knowledge base, to achieve substantially improved results.