Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning

@article{Tafjord2022EntailerAQ,
  title={Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning},
  author={Oyvind Tafjord and Bhavana Dalvi and Peter Clark},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.12217}
}
Our goal is a question-answering (QA) system that can show how its answers are implied by its own internal beliefs via a systematic chain of reasoning. Such a capability would allow better understanding of why a model produced the answer it did. Our approach is to recursively combine a trained backward-chaining model, capable of generating a set of premises entailing an answer hypothesis, with a verifier that checks that the model itself believes those premises (and the entailment itself…
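
To make the recursive procedure concrete, here is a minimal Python sketch of this kind of backward chaining, under stated assumptions: generate_premises, believes, and entails are hypothetical stand-ins for the paper's trained entailment generator and verifier, not actual Entailer APIs, and a single fixed threshold stands in for the paper's scoring scheme.

def prove(hypothesis, model, max_depth=3, threshold=0.5):
    """Return a proof tree {"statement": ..., "premises": [...]} or None."""
    # Base case: accept the hypothesis as a leaf if the model believes it directly.
    if model.believes(hypothesis) >= threshold:
        return {"statement": hypothesis, "premises": []}
    if max_depth == 0:
        return None
    # Backward chaining: generate candidate premises that entail the hypothesis.
    premises = model.generate_premises(hypothesis)
    # Verify the entailment step itself before recursing on the premises.
    if not premises or model.entails(premises, hypothesis) < threshold:
        return None
    children = []
    for p in premises:
        subtree = prove(p, model, max_depth - 1, threshold)
        if subtree is None:
            return None  # a premise the model cannot justify invalidates the chain
        children.append(subtree)
    return {"statement": hypothesis, "premises": children}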

Towards Teachable Reasoning Systems: Using a Dynamic Memory of User Feedback for Continual System Improvement

The approach is to augment a QA model with a dynamic memory of user feedback, containing user-supplied corrections to erroneous model beliefs that users identify during interaction, leading to improved system performance over time.

References

Explaining Answers with Entailment Trees

ENTAILMENTBANK, the first dataset to contain multistep entailment trees, is created, providing a new type of dataset (multistep entailments) and baselines and offering a new avenue for the community to generate richer, more systematic explanations.

Natural Language Deduction through Search over Statement Compositions

This work proposes a system for doing generative deductive reasoning in natural language by decomposing the task into separate steps coordinated by a search procedure, producing a tree of intermediate conclusions that faithfully reflects the system’s reasoning process.
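
As a rough illustration of search over statement compositions, the following Python sketch composes pairs of known statements into new conclusions and records each conclusion's parents, so the deduction tree can be read back from the result. The composer.compose call is a hypothetical stand-in for the paper's generative deduction model, and matching the goal by string equality is a simplification.

from itertools import combinations

def deduce(premises, goal, composer, max_rounds=5):
    """Round-based sketch: compose statement pairs into new conclusions."""
    parents = {p: None for p in premises}  # statement -> (parent_a, parent_b); None marks a leaf
    for _ in range(max_rounds):
        new = {}
        for a, b in combinations(parents, 2):
            conclusion = composer.compose(a, b)  # one generative deduction step
            if conclusion and conclusion not in parents and conclusion not in new:
                new[conclusion] = (a, b)
        if not new:
            return None  # search exhausted without reaching the goal
        parents.update(new)
        if goal in parents:
            return parents  # walk parents back from goal to recover the tree
    return None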

METGEN: A Module-Based Entailment Tree Generation Framework for Answer Explanation

A Module-based Entailment Tree GENeration (METGEN) framework with multiple modules and a reasoning controller is proposed, which can outperform previous state-of-the-art models with only 9% of the parameters.

Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner

This work proposes an architecture called Iterative Retrieval-Generation Reasoner (IRGR), able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises, allowing the model to leverage intermediate conclusions, and mitigating the input size limit of baseline encoder-decoder models.

BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief

This work describes two mechanisms to improve belief consistency in the overall system, enabling PTLM-based architectures with a systematic notion of belief to construct a more coherent picture of the world, and improve over time without model retraining.

ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language

This work shows that a generative model, called ProofWriter, can reliably generate both implications of a theory and the natural language proofs that support them, and shows that generative techniques can perform a type of abduction with high precision.

WorldTree V2: A Corpus of Science-Domain Structured Explanations and Inference Patterns supporting Multi-Hop Inference

This work presents the second iteration of the WorldTree project, a corpus of 5,114 standardized science exam questions paired with large detailed multi-fact explanations that combine core scientific knowledge and world knowledge, and uses this explanation corpus to author a set of 344 high-level science domain inference patterns similar to semantic frames supporting multi-hop inference.

Generating Natural Language Proofs with Verifier-Guided Search

A novel stepwise method, NLProofS (Natural Language Proof Search), is proposed that learns to generate relevant steps conditioned on the hypothesis, improving the correctness of predicted proofs from 27.7% to 33.3% in the distractor setting of EntailmentBank and demonstrating the effectiveness of NLProofS in generating challenging human-authored proofs.
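
A minimal sketch of verifier-guided stepwise search in the spirit of NLProofS: a prover proposes candidate steps conditioned on the hypothesis and a verifier filters them. Here propose_steps and score are hypothetical stand-ins, and the greedy one-step-at-a-time loop simplifies the actual search over proof graphs.

def search_proof(premises, hypothesis, prover, verifier, max_steps=10, threshold=0.5):
    """Greedy sketch: repeatedly take the verifier's highest-scoring proposed
    step, stopping once the hypothesis itself has been derived."""
    known = list(premises)
    proof = []
    for _ in range(max_steps):
        # Each candidate is assumed to look like {"inputs": [...], "conclusion": "..."}.
        candidates = prover.propose_steps(known, hypothesis)
        if not candidates:
            return None
        best = max(candidates, key=verifier.score)  # verifier-guided selection
        if verifier.score(best) < threshold:
            return None  # no step the verifier trusts
        proof.append(best)
        known.append(best["conclusion"])
        if best["conclusion"] == hypothesis:
            return proof
    return None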

Transformers as Soft Reasoners over Language

This work trains transformers to reason (or emulate reasoning) over natural language sentences using synthetically generated data, thus bypassing a formal representation and suggesting a new role for transformers, namely as limited "soft theorem provers" operating over explicit theories in language.

Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning

A Selection-Inference (SI) framework is proposed that exploits pre-trained LLMs as general processing modules and alternates between selection and inference to generate a series of interpretable, causal reasoning steps leading to the final answer.
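
The alternation at the heart of the SI framework can be sketched as a simple loop; select, infer, and answer below are hypothetical stand-ins for the paper's prompted LLM modules.

def selection_inference(context_facts, question, llm, n_steps=3):
    """Alternate selection (pick supporting facts) with inference (derive one
    new fact), accumulating an interpretable, causal reasoning trace."""
    facts = list(context_facts)
    trace = []
    for _ in range(n_steps):
        selected = llm.select(facts, question)  # selection module
        new_fact = llm.infer(selected)          # inference module
        trace.append((selected, new_fact))
        facts.append(new_fact)                  # feed the derived fact back in
    return llm.answer(facts, question), trace   # final answer plus the reasoning trace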