PRover: Proof Generation for Interpretable Reasoning over Rules

@article{Saha2020PRoverPG,
  title={PRover: Proof Generation for Interpretable Reasoning over Rules},
  author={Swarnadeep Saha and Sayan Ghosh and Shashank Srivastava and Mohit Bansal},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.02830}
}
Recent work by Clark et al. (2020) shows that transformers can act as 'soft theorem provers' by answering questions over explicitly provided knowledge in natural language. In our work, we take a step closer to emulating formal theorem provers, by proposing PROVER, an interpretable transformer-based model that jointly answers binary questions over rule-bases and generates the corresponding proofs. Our model learns to predict nodes and edges corresponding to proof graphs in an efficient…
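As a minimal sketch of the idea of predicting proof-graph nodes and edges (all names, heads, and dimensions here are illustrative stand-ins, not PROVER's actual architecture), one can score each fact/rule statement as a node and each ordered pair of selected nodes as a candidate edge:

```python
import numpy as np

# Toy sketch of joint node/edge prediction over a proof graph.
# The "learned" heads below are random stand-ins for illustration only.
rng = np.random.default_rng(0)

def predict_proof_graph(node_embeddings, threshold=0.5):
    """Score each statement (node) and each ordered node pair (edge).

    node_embeddings: (n, d) array, one embedding per fact/rule.
    Returns the sets of selected node indices and directed edges.
    """
    n, d = node_embeddings.shape
    w_node = rng.normal(size=d)       # stand-in for a trained node head
    w_edge = rng.normal(size=2 * d)   # stand-in for a trained edge head

    node_scores = 1 / (1 + np.exp(-(node_embeddings @ w_node)))
    nodes = {i for i in range(n) if node_scores[i] > threshold}

    # Only score edges between nodes already selected for the proof.
    edges = set()
    for i in nodes:
        for j in nodes:
            if i == j:
                continue
            pair = np.concatenate([node_embeddings[i], node_embeddings[j]])
            if 1 / (1 + np.exp(-(pair @ w_edge))) > threshold:
                edges.add((i, j))
    return nodes, edges

nodes, edges = predict_proof_graph(rng.normal(size=(4, 8)))
```

Restricting edge scoring to already-selected nodes keeps every predicted edge consistent with the predicted node set, one of the structural constraints a valid proof graph must satisfy.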
multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning
TLDR
This work addresses a new and challenging problem of generating multiple proof graphs for reasoning over natural language rule-bases by proposing two variants of a proof-set generation model, multiPRover and Iterative-multiPRover.
Probabilistic Graph Reasoning for Natural Proof Generation
TLDR
This paper proposes PROBR, a novel approach for joint answer prediction and proof generation via an induced graphical model that defines a joint probabilistic distribution over all possible proof graphs and answers.
Flexible Generation of Natural Language Deductions
An interpretable system for open-domain reasoning needs to express its reasoning process in a transparent form. Natural language is an attractive representation for this purpose — it is both highly…
Flexible Operations for Natural Language Deduction
TLDR
This paper uses a BART-based model to generate the result of applying a particular logical operation to one or more premise statements, and has a largely automated pipeline for scraping and constructing suitable training examples from Wikipedia, which are then paraphrased to give the models the ability to handle lexical variation.
Explainable Multi-hop Verbal Reasoning Through Internal Monologue
TLDR
This work implements the Explainable multi-hop Verbal Reasoner (EVR) by extending the classic reasoning paradigm General Problem Solver with a SOTA generative language model to generate subgoals and perform inference in natural language at each reasoning step.
ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning
TLDR
This work presents EXPLAGRAPHS, a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction, and proposes a multi-level evaluation framework that checks for the structural and semantic correctness of the generated graphs and their plausibility against human-written graphs.
Reasoning with Transformer-based Models: Deep Learning, but Shallow Reasoning (2021)
Recent years have seen impressive performance of transformer-based models on different natural language processing tasks. However, it is not clear to what degree the transformers can reason on…
Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2
Thinking aloud is an effective meta-cognitive strategy human reasoners apply to solve difficult problems. We suggest to improve the reasoning ability of pre-trained neural language models in a…
Explaining the Road Not Taken
TLDR
This paper summarizes the common forms of explanations used in over 200 recent papers about natural language processing (NLP), compares them against user questions collected in the XAI Question Bank, and finds that most model interpretations cannot answer these questions.
Explaining Answers with Entailment Trees
TLDR
This work creates ENTAILMENTBANK, the first dataset to contain multistep entailment trees, providing a new type of dataset and baselines and offering a new avenue for the community to generate richer, more systematic explanations.
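To make the notion of a multistep entailment tree concrete, here is a hedged sketch of such a tree as a nested structure (the field names and example sentences are our own illustration, not EntailmentBank's actual schema), with a helper that recovers the supporting leaf facts:

```python
# Illustrative multistep entailment tree: each node's conclusion is
# entailed by the conclusions of its premises; leaves are given facts.
tree = {
    "conclusion": "Eruptions can cause plants to die.",
    "premises": [
        {"conclusion": "Eruptions block sunlight.", "premises": []},
        {"conclusion": "Plants need sunlight to survive.", "premises": []},
    ],
}

def leaves(node):
    """Return the leaf statements (supporting facts) of an entailment tree."""
    if not node["premises"]:
        return [node["conclusion"]]
    out = []
    for child in node["premises"]:
        out.extend(leaves(child))
    return out
```

Evaluating a generated tree then amounts to checking both the leaf facts and each intermediate entailment step, which is what distinguishes this setting from flat "supporting sentence" explanations.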

References

Showing 1–10 of 46 references
Transformers as Soft Reasoners over Language
TLDR
This work trains transformers to reason (or emulate reasoning) over natural language sentences using synthetically generated data, thus bypassing a formal representation and suggesting a new role for transformers, namely as limited "soft theorem provers" operating over explicit theories in language.
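For concreteness, an instance in this setting pairs a small natural-language theory with a binary question. The sketch below (field names and the toy forward-chaining routine are our own illustration, not the dataset's schema or the paper's method) shows what such an instance looks like and how its answer can be derived symbolically:

```python
# Illustrative rule-base QA instance: facts + rules in natural language,
# and a binary question answerable by chaining the rules.
instance = {
    "facts": ["Erin is big.", "Erin is red."],
    "rules": ["If someone is big and red then they are kind."],
    "question": "Erin is kind.",
    "answer": True,  # derivable by one application of the rule
}

def naive_forward_chain(facts, apply_rules, max_steps=5):
    """Tiny closed-world forward chaining over string-valued facts."""
    known = set(facts)
    for _ in range(max_steps):
        new = apply_rules(known) - known
        if not new:
            break
        known |= new
    return known

# Hand-encode the single rule above as a function over known facts.
def apply_rules(known):
    derived = set()
    if "Erin is big." in known and "Erin is red." in known:
        derived.add("Erin is kind.")
    return derived

known = naive_forward_chain(instance["facts"], apply_rules)
```

A transformer in this setup consumes the facts, rules, and question directly as text and must produce the same answer the symbolic chaining yields, without ever seeing a formal representation.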
NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language
TLDR
A model combining neural networks with logic programming in a novel manner for solving multi-hop reasoning tasks over natural language, using a Prolog prover together with a similarity function over pretrained sentence encoders and fine-tuning the representations for the similarity function via backpropagation.
An Experimental Study of Formula Embeddings for Automated Theorem Proving in First-Order Logic
TLDR
This paper studies and experimentally compares pattern-based embeddings that are applied in current systems with popular graph-based encodings, most of which have not been considered in the theorem-proving context before, and presents a detailed analysis across several dimensions of theorem prover performance beyond just proof completion rate.
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
TLDR
This work argues for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering, and classifies these tasks into skill sets so that researchers can identify (and then rectify) the failings of their systems.
Probing Natural Language Inference Models through Semantic Fragments
TLDR
This work proposes the use of semantic fragments---systematically generated datasets that each target a different semantic phenomenon---for probing, and efficiently improving, such capabilities of linguistic models.
Explain Yourself! Leveraging Language Models for Commonsense Reasoning
TLDR
This work collects human explanations for commonsense reasoning, in the form of natural language sequences and highlighted annotations, in a new dataset called Common Sense Explanations, and uses them to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation framework.
e-SNLI: Natural Language Inference with Natural Language Explanations
TLDR
The Stanford Natural Language Inference dataset is extended with an additional layer of human-annotated natural language explanations of the entailment relations, which can be used for various goals, such as obtaining full-sentence justifications of a model's decisions, improving universal sentence representations, and transferring to out-of-domain NLI datasets.
Natural Logic and Natural Language Inference
TLDR
A model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation, is proposed, extending past work in natural logic by incorporating both semantic exclusion and implicativity.
Semantic Parsing on Freebase from Question-Answer Pairs
TLDR
This paper trains a semantic parser that scales up to Freebase and outperforms the state-of-the-art parser on the dataset of Cai and Yates (2013), despite not having annotated logical forms.
HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering
TLDR
It is shown that HotpotQA is challenging for the latest QA systems, and that the supporting facts enable models to improve performance and make explainable predictions.