Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations

Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi
Despite their impressive capabilities, large pretrained language models (LMs) struggle with consistent reasoning; recently, prompting LMs to generate explanations that self-guide the inference has emerged as a promising direction to amend this. However, these approaches are fundamentally bounded by the correctness of explanations, which themselves are often noisy and inconsistent. In this work, we develop Maieutic Prompting, which infers a correct answer to a question even from the noisy and…

Prompting as Probing: Using Language Models for Knowledge Base Construction

ProP (Prompting as Probing) utilizes GPT-3, a large language model originally proposed by OpenAI in 2020, to perform Knowledge Base Construction (KBC); it implements a multi-step approach that combines a variety of prompting techniques to achieve this.

Rationale-Augmented Ensembles in Language Models

It is demonstrated that rationale-augmented ensembles achieve more accurate results than existing prompting approaches—including standard prompting without rationales and rationale-based chain-of-thought prompting—while simultaneously improving the interpretability of model predictions through the associated rationales.

The Unreliability of Explanations in Few-Shot In-Context Learning

A framework for calibrating model predictions based on the reliability of explanations is presented and it is shown that explanations judged as good by humans—those that are logically consistent with the input and the prediction—usually indicate more accurate predictions.

Towards Teachable Reasoning Systems

Generated chains of reasoning show how answers are implied by the system’s own internal beliefs, and are both faithful and truthful, which suggests new opportunities for using language models in an interactive setting where users can inspect, debug, correct, and improve a system’s performance over time.

BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief

This work describes two mechanisms to improve belief consistency in the overall system, enabling PTLM-based architectures with a systematic notion of belief to construct a more coherent picture of the world, and improve over time without model retraining.

Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision

This paper investigates multiple ways to automatically generate rationales using pre-trained language models, neural knowledge models, and distant supervision from related tasks, and trains generative models capable of composing explanatory rationales for unseen instances.

Flexible Generation of Natural Language Deductions

ParaPattern is described, a method for building models to generate deductive inferences from diverse natural language inputs without direct human supervision that achieves 85% validity on examples of the ‘substitution’ operation from EntailmentBank without the use of any in-domain training data.

Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning

This work seeks a lightweight, training-free means of improving existing System 1-like sequence models by adding System 2-inspired logical reasoning and shows that this approach can increase the coherence and accuracy of neurally-based generations.

Self-Consistency Improves Chain of Thought Reasoning in Language Models

A simple ensemble strategy, self-consistency, that robustly improves accuracy across a variety of language models and model scales without the need for additional training or auxiliary models is explored.
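The decoding strategy summarized above can be sketched in a few lines: sample several chain-of-thought completions independently and take a majority vote over their final answers. The sketch below is a minimal illustration, assuming a hypothetical `sample_answer` callable that wraps any LM API and returns one sampled final-answer string per call; it is not the paper's actual implementation.

```python
from collections import Counter
import itertools

def self_consistency(sample_answer, question, n=10):
    """Majority vote over final answers from n independently
    sampled chain-of-thought completions.

    `sample_answer(question)` is a hypothetical stand-in for an LM
    call that samples one reasoning path and returns its answer.
    """
    answers = [sample_answer(question) for _ in range(n)]
    # The most frequent final answer becomes the prediction.
    return Counter(answers).most_common(1)[0][0]

# Toy sampler cycling through fixed answers, for illustration only.
_fake = itertools.cycle(["18", "18", "26", "18", "20"])
print(self_consistency(lambda q: next(_fake), "toy question", n=5))  # prints "18"
```

Because voting needs no gradients or auxiliary models, this matches the abstract's claim of requiring no additional training.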

Generated Knowledge Prompting for Commonsense Reasoning

Generated knowledge prompting consists of generating knowledge from a language model, then providing that knowledge as additional input when answering a question; it improves the performance of large-scale, state-of-the-art models on four commonsense reasoning tasks.
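The two-stage pipeline described above (generate knowledge, then condition the answer on it) can be sketched as follows. This is an illustrative sketch only: `generate` and `answer` are hypothetical callables standing in for LM calls, where `answer(prompt)` is assumed to return an (answer, confidence) pair.

```python
def generated_knowledge_answer(generate, answer, question, m=3):
    """Generated knowledge prompting, sketched: sample m knowledge
    statements from an LM, answer the question once per statement
    with the knowledge prepended, and keep the highest-confidence
    answer. `generate` and `answer` are hypothetical LM wrappers.
    """
    knowledge = [generate(question) for _ in range(m)]
    # Each candidate answers the question with one statement prepended.
    candidates = [answer(f"{k}\n{question}") for k in knowledge]
    # Select the answer the model is most confident in.
    return max(candidates, key=lambda pair: pair[1])[0]
```

The key design choice is that knowledge is elicited from the model itself rather than retrieved from an external corpus, so the method needs no task-specific knowledge base.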

Can language models learn from explanations in context?

Investigating whether explanations of few-shot examples allow language models to adapt more effectively shows that such explanations can improve performance and support the in-context learning abilities of large language models on challenging tasks.

e-SNLI: Natural Language Inference with Natural Language Explanations

The Stanford Natural Language Inference dataset is extended with an additional layer of human-annotated natural language explanations of the entailment relations, which can be used for various goals, such as obtaining full sentence justifications of a model’s decisions, improving universal sentence representations and transferring to out-of-domain NLI datasets.