Declarative Question Answering over Knowledge Bases containing Natural Language Text with Answer Set Programming

@article{Mitra2019DeclarativeQA,
  title={Declarative Question Answering over Knowledge Bases containing Natural Language Text with Answer Set Programming},
  author={Arindam Mitra and Peter Clark and Oyvind Tafjord and Chitta Baral},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.00198}
}
While machine learning (ML) based approaches have in recent years been the popular choice for developing end-to-end question answering systems, such systems often struggle when additional knowledge is needed to correctly answer the questions. […] The proposed method uses recent features of Answer Set Programming (ASP) to call external NLP modules (which may be based on ML) that perform simple textual entailment. To test our approach we developed a corpus based on life cycle questions and showed…
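The key mechanism the abstract describes, an ASP program that calls out to an external NLP module for textual entailment while reasoning, can be sketched with clingo's Python bindings. The snippet below is an illustrative approximation, not the authors' implementation: the NLPContext class, the entails() word-overlap heuristic, and the toy life-cycle facts are assumptions standing in for the paper's ML-based entailment module and corpus.

```python
# Minimal sketch (assumed setup, not the paper's code): an ASP program whose
# grounding calls a Python function @entails/2 via clingo's context mechanism.
# Requires the clingo Python package (pip install clingo).
import clingo

ASP_PROGRAM = """
% Toy knowledge-base sentence and answer option (hypothetical data).
kb_sentence("a tadpole grows legs and becomes a frog").
option("a tadpole becomes a frog").

% An option is supported if some KB sentence entails it; the check is
% delegated to the external Python function @entails during grounding.
supported(O) :- option(O), kb_sentence(S), @entails(S, O) = 1.

answer(O) :- supported(O).
#show answer/1.
"""

class NLPContext:
    """Exposes functions callable from the ASP program via @-terms."""

    def entails(self, premise, hypothesis):
        # Placeholder entailment: token containment. A real system would call
        # a trained textual-entailment model here and threshold its score.
        p = set(premise.string.split())
        h = set(hypothesis.string.split())
        return clingo.Number(1 if h <= p else 0)

def main():
    ctl = clingo.Control()
    ctl.add("base", [], ASP_PROGRAM)
    # The context object makes NLPContext.entails available as @entails(...).
    ctl.ground([("base", [])], context=NLPContext())
    ctl.solve(on_model=lambda m: print("Model:", m))

if __name__ == "__main__":
    main()
```

Running this prints a model containing answer("a tadpole becomes a frog"); swapping the overlap heuristic for a learned entailment model is the point of interaction between the declarative layer and the ML modules.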

Citations

Careful Selection of Knowledge to Solve Open Book Question Answering
TLDR
This paper addresses QA with respect to the OpenBookQA dataset and combines state-of-the-art language models with abductive information retrieval (IR), information-gain-based re-ranking, passage selection, and weighted scoring to achieve 72.0% accuracy.
Natural Language QA Approaches using Reasoning with External Knowledge
TLDR
A survey of recent work in the traditional fields of knowledge representation and reasoning and in NL understanding and NLQA is presented to help establish a bridge between these fields of AI.
Deeply Embedded Knowledge Representation & Reasoning For Natural Language Question Answering: A Practitioner’s Perspective
TLDR
Deeply Embedded Knowledge Representation & Reasoning (DeepEKR) is proposed, where the parser is replaced by a neural network, the symbolic representation is softened, a deterministic mapping exists between the parser neural network and the interpretable logical form, and the symbolic solver is replaced by an equivalent neural network, so the model can be trained end-to-end.
Ranking Facts for Explaining Answers to Elementary Science Questions
TLDR
Considering automated reasoning for elementary science question answering, this work addresses the novel task of generating explanations for answers from human-authored facts using a practically scalable framework of feature-rich support vector machines leveraging domain-targeted, hand-crafted features.
A Generate-Validate Approach to Answering Questions about Qualitative Relationships
TLDR
This paper shows that, instead of using a semantic parser to produce the logical form, applying a generate-validate framework that checks whether a natural language description follows from the input text gives better scope for transfer learning, and the method outperforms the state of the art by a large margin.
Enhancing Natural Language Inference Using New and Expanded Training Data Sets and New Learning Models
TLDR
A modification to the “word-to-word” attention function which has been uniformly reused across several popular NLI architectures is proposed; the resulting models perform as well as their unmodified counterparts on the existing benchmarks and perform strongly on the new benchmarks that emphasize “roles” and “entities”.
Zero-Shot Open-Book Question Answering
TLDR
A solution for answering natural language questions from a corpus of Amazon Web Services (AWS) technical documents with no domain-specific labeled data (zero-shot), which attempts to find yes-no-none answers and text answers in the same pass.
Bug Question Answering with Pretrained Encoders
  • Lili Bo, Jinting Lu
  • Computer Science
  • 2021 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)
  • 2021
TLDR
A novel bug question answering approach named BERT-BugQA is proposed that takes advantage of Bidirectional Encoder Representations from Transformers (BERT) to fully consider the bidirectional context of bug information.
Natural Language Generation for Non-Expert Users
TLDR
The paper includes a use case evaluation showing that the proposed system could also be utilized in addressing a challenge to create an abstract Wikipedia, which was recently discussed in the BlueSky session of the 2018 International Semantic Web Conference.
Interactive Text Graph Mining with a Prolog-based Dialog Engine
TLDR
A Prolog-based dialog engine is designed that interactively explores a ranked fact database extracted from a text document, reorganizing dependency graphs to focus on the most relevant content elements of a sentence and integrating sentence identifiers as graph nodes.
...

References

SHOWING 1-10 OF 38 REFERENCES
A Logic Based Approach to Answering Questions about Alternatives in DIY Domains
TLDR
A question answering system that aims at answering non-factoid questions in the DIY domain using logic-based reasoning, employing Answer Set Programming to derive an answer by combining various types of knowledge such as domain and commonsense knowledge.
COGEX: A Logic Prover for Question Answering
TLDR
The idea of automated reasoning applied to question answering is introduced and the feasibility of integrating a logic prover into a Question Answering system is shown.
Addressing a Question Answering Challenge by Combining Statistical Methods with Inductive Rule Learning and Reasoning
TLDR
This work presents a system that excels at all the tasks except one and demonstrates that the introduction of a reasoning module significantly improves the performance of an intelligent agent.
Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge
TLDR
A new question set, text corpus, and baselines assembled to encourage AI research in advanced question answering constitute the AI2 Reasoning Challenge (ARC), which requires far more powerful knowledge and reasoning than previous challenges such as SQuAD or SNLI.
SciTaiL: A Textual Entailment Dataset from Science Question Answering
TLDR
A new dataset and model for textual entailment, derived from treating multiple-choice question-answering as an entailment problem, is presented, and it is demonstrated that one can improve accuracy on SciTail by 5% using a new neural model that exploits linguistic structure.
A large annotated corpus for learning natural language inference
TLDR
The Stanford Natural Language Inference corpus is introduced, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning, which allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
Learning to Parse Database Queries Using Inductive Logic Programming
TLDR
Experimental results with a complete database-query application for U.S. geography show that CHILL is able to learn parsers that outperform a preexisting, hand-crafted counterpart, and provide direct evidence of the utility of an empirical approach at the level of a complete natural language application.
Towards Addressing the Winograd Schema Challenge - Building and Using a Semantic Parser and a Knowledge Hunting Module
TLDR
This paper presents an approach that identifies the knowledge needed to answer a challenge question, hunts down that knowledge from text repositories, and then reasons with that knowledge to come up with the answer.
SQuAD: 100,000+ Questions for Machine Comprehension of Text
TLDR
A strong logistic regression model is built, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%).
Neural Semantic Parsing with Type Constraints for Semi-Structured Tables
TLDR
A new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables achieves state-of-the-art accuracy, and type constraints and entity linking are shown to be valuable components to incorporate in neural semantic parsers.
...