ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language

@article{Tafjord2021ProofWriterGI,
  title={ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language},
  author={Oyvind Tafjord and Bhavana Dalvi and Peter Clark},
  journal={ArXiv},
  year={2021},
  volume={abs/2012.13048}
}
Transformers have been shown to emulate logical deduction over natural language theories (logical rules expressed in natural language), reliably assigning true/false labels to candidate implications. However, their ability to generate implications of a theory has not yet been demonstrated, and methods for reconstructing proofs of answers are imperfect. In this work we show that a generative model, called ProofWriter, can reliably generate both implications of a theory and the natural language…
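The deduction task described in the abstract, generating the implications of a natural-language theory together with proofs, can be illustrated with a minimal forward-chaining sketch. This is a hypothetical toy example over propositional sentences, not the paper's actual model or data:

```python
# Toy "natural language theory": facts and rules over whole sentences
# (hypothetical example; ProofWriter itself is a generative transformer).
facts = {"Bob is big", "Bob is round"}
rules = [
    ({"Bob is big", "Bob is round"}, "Bob is heavy"),
    ({"Bob is heavy"}, "Bob sinks"),
]

def forward_chain(facts, rules):
    """Apply rules until no new implications appear, recording for each
    derived sentence the premises that produced it (a one-step proof)."""
    known = set(facts)
    proofs = {}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                proofs[conclusion] = premises
                changed = True
    return known, proofs

derived, proofs = forward_chain(facts, rules)
print(sorted(derived - facts))  # → ['Bob is heavy', 'Bob sinks']
```

ProofWriter learns to emit such implications and proof steps as text; the closed-world iteration above only mirrors the underlying inference being emulated.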
NaturalProofs: Mathematical Theorem Proving in Natural Language
TLDR
This work develops NATURALPROOFS, a large-scale dataset of mathematical statements and their proofs, written in natural mathematical language, and proposes a mathematical reference retrieval task that tests a system’s ability to determine the key results that appear in a proof.
Learning Symbolic Rules for Reasoning in Quasi-Natural Language
  • Kaiyu Yang, Jia Deng
  • Computer Science
  • ArXiv
  • 2021
TLDR
This work proposes MetaQNL, a “Quasi-Natural” language that can express both formal logic and natural language sentences, and MetaInduce, a learning algorithm that induces MetaQNL rules from training data consisting of questions and answers, with or without intermediate reasoning steps.
A Generative Symbolic Model for More General Natural Language Understanding and Reasoning
TLDR
A new fully-symbolic Bayesian model of semantic parsing and reasoning is presented which is fully interpretable, designed specifically with generality in mind, and therefore provides a clearer path for future research to expand its capabilities.
Explaining Answers with Entailment Trees
TLDR
ENTAILMENTBANK is created, the first dataset to contain multistep entailment trees, providing a new type of dataset and baselines, and offering a new avenue for the community to generate richer, more systematic explanations.
DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models
TLDR
The empirical findings vindicate the overall framework and highlight the advantages of a modular design, in particular its ability to emulate established heuristics, to explore the model’s uncertainty, to cope with the plurality of correct solutions (underdetermination), and to exploit higher-order evidence.
Flexible Generation of Natural Language Deductions
TLDR
PARAPATTERN is a method for building models to generate deductive inferences from diverse natural language inputs without direct human supervision, and achieves 85% validity on examples of the ‘substitution’ operation from EntailmentBank without the use of any in-domain training data.
Neural Unification for Logic Reasoning over Natural Language
TLDR
This work proposes a new architecture, the Neural Unifier, and an associated training procedure, which achieve state-of-the-art results in terms of generalisation, showing that by mimicking a well-known inference procedure, backward chaining, it is possible to answer deep queries even when the model is trained only on shallow ones.
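Backward chaining, the inference procedure this model mimics, works from the query back toward known facts. A symbolic sketch follows; this is a hypothetical illustration with propositional atoms, whereas the Neural Unifier performs the analogous unification over natural-language sentence embeddings:

```python
def backward_chain(goal, facts, rules, depth=10):
    """Prove `goal` by recursing from a rule's conclusion back to its
    premises, bottoming out at known facts (depth-limited for safety)."""
    if depth == 0:
        return False
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(p, facts, rules, depth - 1) for p in premises
        ):
            return True
    return False

facts = {"A"}
rules = [({"A"}, "B"), ({"B"}, "C")]  # a depth-2 chain: A -> B -> C
print(backward_chain("C", facts, rules))  # → True
```

The "deep query" here is "C", which requires chaining through the intermediate conclusion "B"; each recursive call only ever matches a single rule, which is the shallow step the model is trained on.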
Reasoning with Transformer-based Models: Deep Learning, but Shallow Reasoning
  • 2021
Recent years have seen impressive performance of transformer-based models on different natural language processing tasks. However, it is not clear to what degree the transformers can reason on…
RuleBert: Teaching Soft Rules to Pre-trained Language Models
TLDR
This work introduces a classification task where, given facts and soft rules, the PLM should return a prediction with a probability for a given hypothesis, and proposes a revised loss function that enables the PLM to learn how to predict precise probabilities for the task.

References

Showing 1-10 of 29 references
PRover: Proof Generation for Interpretable Reasoning over Rules
TLDR
This work proposes PROVER, an interpretable transformer-based model that jointly answers binary questions over rule-bases and generates the corresponding proofs, and learns to predict nodes and edges corresponding to proof graphs in an efficient constrained training paradigm.
Transformers as Soft Reasoners over Language
TLDR
This work trains transformers to reason (or emulate reasoning) over natural language sentences using synthetically generated data, thus bypassing a formal representation and suggesting a new role for transformers, namely as limited "soft theorem provers" operating over explicit theories in language.
Interpretation as Abduction
TLDR
An approach to abductive inference, called “weighted abduction”, that has resulted in a significant simplification of how the problem of interpreting texts is conceptualized, and can be combined with the older view of “parsing as deduction” to produce an elegant and thorough integration of syntax, semantics, and pragmatics.
Abductive Commonsense Reasoning
TLDR
This study introduces a challenge dataset, ART, that consists of over 20k commonsense narrative contexts and 200k explanations, and conceptualizes two new tasks -- Abductive NLI: a multiple-choice question answering task for choosing the more likely explanation, and Abductive NLG: a conditional generation task for explaining given observations in natural language.
NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language
TLDR
A model combining neural networks with logic programming in a novel manner for solving multi-hop reasoning tasks over natural language, by using a Prolog prover that utilizes a similarity function over pretrained sentence encoders, and by fine-tuning the representations for the similarity function via backpropagation.
e-SNLI: Natural Language Inference with Natural Language Explanations
TLDR
The Stanford Natural Language Inference dataset is extended with an additional layer of human-annotated natural language explanations of the entailment relations, which can be used for various goals, such as obtaining full sentence justifications of a model’s decisions, improving universal sentence representations and transferring to out-of-domain NLI datasets.
End-to-end Differentiable Proving
TLDR
It is demonstrated that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.
Explain Yourself! Leveraging Language Models for Commonsense Reasoning
TLDR
This work collects human explanations for commonsense reasoning in the form of natural language sequences and highlighted annotations in a new dataset called Common Sense Explanations, to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation framework.
Programs with common sense
Abstract: This paper discusses programs to manipulate in a suitable formal language (most likely a part of the predicate calculus) common instrumental statements. The basic program will draw…
Language Models as Knowledge Bases?
TLDR
An in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models finds that BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge.