Corpus ID: 239016203

Ranking Facts for Explaining Answers to Elementary Science Questions

@article{DSouza2021RankingFF,
  title={Ranking Facts for Explaining Answers to Elementary Science Questions},
  author={Jennifer D’Souza and Isaiah Onando Mulang and Soeren Auer},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.09036}
}
In multiple-choice exams, students select one answer from among typically four choices and can explain why they made that particular choice. Students are good at understanding natural language questions and, based on their domain knowledge, can easily infer the question’s answer by ‘connecting the dots’ across various pertinent facts. Considering automated reasoning for elementary science question answering (Clark et al. 2018), we address the novel task of generating explanations for answers from…
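The abstract snippet names the paper’s core task: ranking facts so that the top-ranked ones explain a question’s answer. As a minimal sketch of that setup (not the authors’ method, which the truncated snippet does not specify), the hypothetical `rank_facts` helper below scores candidate explanation facts, e.g. rows from a WorldTree-style tablestore, against the combined question-and-answer text using a plain TF-IDF/cosine baseline:

```python
# Minimal sketch of the fact-ranking setup: score each candidate fact
# against the question+answer text and return facts by descending relevance.
# TF-IDF with cosine similarity is an assumed stand-in baseline, NOT the
# ranking method proposed in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_facts(question: str, answer: str, facts: list[str]) -> list[tuple[float, str]]:
    """Rank candidate explanation facts by lexical similarity to question+answer."""
    query = f"{question} {answer}"
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on the facts plus the query so both live in the same vector space.
    matrix = vectorizer.fit_transform(facts + [query])
    fact_vecs, query_vec = matrix[:-1], matrix[-1]
    scores = cosine_similarity(fact_vecs, query_vec).ravel()
    return sorted(zip(scores, facts), reverse=True)

if __name__ == "__main__":
    ranked = rank_facts(
        question="Which part of a plant converts sunlight into food?",
        answer="a leaf",
        facts=[  # hypothetical tablestore rows
            "a leaf performs photosynthesis",
            "photosynthesis converts sunlight into food",
            "a root absorbs water from the soil",
        ],
    )
    for score, fact in ranked:
        print(f"{score:.3f}  {fact}")
```

A learned ranker (e.g., a BERT-based cross-encoder, as is common for the TextGraphs explanation-regeneration task referenced below) would replace the TF-IDF scorer, but the input/output contract stays the same: candidate facts in, a relevance-ordered explanation out.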

References (showing 1-10 of 40)
WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-Hop Inference
Presents a corpus of explanations for standardized science exams, a recent challenge task for question answering, and provides an explanation-centered tablestore: a collection of semi-structured tables that contain the knowledge needed to construct these elementary science explanations.
Framing QA as Building and Ranking Intersentence Answer Justifications
Proposes a question-answering approach for standardized science exams that both identifies correct answers and produces compelling human-readable justifications for why those answers are correct, showing that information aggregation is key to addressing the information need in complex questions.
What’s in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams
Develops an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges, and compares a retrieval and an inference solver on 212 questions.
TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration
Tasks participants with regenerating detailed gold explanations for standardized elementary science exam questions by selecting facts from a knowledge base of semi-structured tables.
QASC: A Dataset for Question Answering via Sentence Composition
Presents a multi-hop reasoning dataset, Question Answering via Sentence Composition (QASC), that requires retrieving facts from a large corpus and composing them to answer a multiple-choice question, and presents a two-step approach to mitigate the retrieval challenges.
Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge
Assembles a new question set, text corpus, and baselines to encourage AI research in advanced question answering; the resulting AI2 Reasoning Challenge (ARC) requires far more powerful knowledge and reasoning than previous challenges such as SQuAD or SNLI.
Declarative Question Answering over Knowledge Bases containing Natural Language Text with Answer Set Programming
Uses recent features of Answer Set Programming (ASP) to call external NLP modules (which may be based on ML) that perform simple textual entailment, achieving up to an 18% performance gain over standard MCQ solvers.
Higher-order Lexical Semantic Models for Non-factoid Answer Reranking
Introduces a higher-order formalism that allows lexical semantic models to chain direct evidence into indirect associations between question and answer texts, by casting the task as the traversal of graphs that encode direct term associations.
Answering questions by learning to rank - Learning to rank by answering questions
Describes a method for semantically ranking documents extracted from Wikipedia or similar natural language corpora, and proposes a model employing this semantic ranking that holds first place on two of the most popular leaderboards for answering multiple-choice questions.
Commonsense for Generative Multi-Hop Question Answering Tasks
Focuses on a challenging multi-hop generative task (NarrativeQA), which requires the model to reason over, gather, and synthesize disjoint pieces of information within the context to generate an answer.