Corpus ID: 44160394

Automated proof synthesis for propositional logic with deep neural networks

@article{Sekiyama2018AutomatedPS,
  title={Automated proof synthesis for propositional logic with deep neural networks},
  author={Taro Sekiyama and Kohei Suenaga},
  journal={ArXiv},
  year={2018},
  volume={abs/1805.11799}
}
This work explores the application of deep learning, a machine learning technique with deep neural networks (DNNs) at its core, to the automated theorem proving (ATP) problem. To this end, we construct a statistical model that quantifies the likelihood that a proof is indeed a correct proof of a given proposition. Based on this model, we give a proof-synthesis procedure that searches for a proof in order of likelihood. This procedure uses an estimator of the likelihood of an inference…
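The likelihood-guided search the abstract describes can be sketched as a best-first search over partial proofs. The following is a minimal illustration, not the authors' implementation: `score` stands in for the paper's DNN-based likelihood estimator, and the rule set and function names are hypothetical.

```python
import heapq
import itertools
import math

def prove(goal, rules, score, max_steps=1000):
    """Best-first search for a proof of `goal`.

    `rules` maps a goal to alternatives (rule_name, subgoals); an empty
    subgoal list means the rule closes the goal outright (an axiom).
    `score(goal, rule_name)` is the estimated likelihood in (0, 1] that
    applying the rule leads to a correct proof; higher is explored first.
    Returns the sequence of (goal, rule_name) applications, or None.
    """
    tie = itertools.count()  # tie-breaker so the heap never compares states
    # Heap entries: (negated log-likelihood, tie, open goals, proof so far)
    heap = [(0.0, next(tie), (goal,), [])]
    for _ in range(max_steps):
        if not heap:
            return None
        cost, _, open_goals, proof = heapq.heappop(heap)
        if not open_goals:
            return proof  # every goal closed: a complete proof
        g, rest = open_goals[0], open_goals[1:]
        for rule_name, subgoals in rules.get(g, []):
            # Summing negative log-likelihoods makes the heap pop the
            # partial proof with the highest product of likelihoods first.
            new_cost = cost - math.log(score(g, rule_name))
            heapq.heappush(heap, (new_cost, next(tie),
                                  tuple(subgoals) + rest,
                                  proof + [(g, rule_name)]))
    return None

# Toy rule set: prove A∧B via ∧-introduction from axioms A and B.
rules = {
    "A∧B": [("∧-intro", ["A", "B"])],
    "A":   [("axiom-A", [])],
    "B":   [("axiom-B", [])],
}
proof = prove("A∧B", rules, lambda goal, rule: 0.9)
# proof == [("A∧B", "∧-intro"), ("A", "axiom-A"), ("B", "axiom-B")]
```

With a learned estimator in place of the constant `score`, the search expands inferences the model considers most promising first, which is the essence of the procedure described above.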
Automated Proof Synthesis for the Minimal Propositional Logic with Deep Neural Networks
TLDR
This work constructs a statistical model that quantifies the likelihood that a proof is indeed a correct proof of a given proposition, and proposes a proposition-to-proof architecture, a DNN tailored to the automated proof synthesis problem.
Automated Theorem Proving in Intuitionistic Propositional Logic by Deep Reinforcement Learning
TLDR
This paper proposes a deep reinforcement learning algorithm for proof search in intuitionistic propositional logic and shows that its prover outperforms Coq's $\texttt{tauto}$ tactic, a prover based on human-engineered heuristics.
Predicting Propositional Satisfiability via End-to-End Learning
TLDR
It is shown that end-to-end learning with deep networks can outperform previous work on random 3-SAT problems at the solubility phase transition, and empirical runtimes of known solution methods scale exponentially with problem size.
Neural heuristics for SAT solving
We use neural graph networks with a message-passing architecture and an attention mechanism to enhance the branching heuristic in two SAT-solving algorithms. We report improvements of learned neural…
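For intuition, a single message-passing round over the clause-variable graph of a CNF formula can be sketched as below. The averaging and summing updates are hand-rolled placeholders for the learned networks and attention the paper uses, and the function name is hypothetical.

```python
def branching_scores(clauses, num_vars, rounds=2):
    """Toy message passing over a CNF clause-variable graph.

    `clauses` is a list of lists of signed ints (DIMACS-style literals,
    1-indexed variables). Returns one score per variable; a branching
    heuristic would pick the variable with the highest score.
    """
    var_msg = [1.0] * (num_vars + 1)  # index 0 unused
    for _ in range(rounds):
        # Clause update: average the messages of its member variables.
        clause_msg = [sum(var_msg[abs(lit)] for lit in c) / len(c)
                      for c in clauses]
        # Variable update: sum the messages of the clauses containing it.
        new_var = [0.0] * (num_vars + 1)
        for msg, c in zip(clause_msg, clauses):
            for lit in c:
                new_var[abs(lit)] += msg
        var_msg = new_var
    return var_msg[1:]

# Variable 1 occurs in both clauses, so it gets the highest score.
scores = branching_scores([[1, 2], [-1, 3]], num_vars=3)
# scores == [3.0, 1.5, 1.5]
```

A learned version would replace the averages with trained message functions and weight each incoming message by an attention coefficient, but the graph traversal pattern is the same.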
SmartVerif: Push the Limit of Automation Capability of Verifying Security Protocols by Dynamic Strategies
TLDR
Experimental results show that SmartVerif can automatically verify all security protocols studied in this paper, and case studies also validate the efficiency of the dynamic strategy inside SmartVerif.
Verifying Security Protocols using Dynamic Strategies
TLDR
Experimental results show that SmartVerif automatically verifies security protocols that cannot be automatically verified by existing approaches, and the case study validates the effectiveness of the dynamic strategy.

References

Showing 1–10 of 47 references
Towards Proof Synthesis Guided by Neural Machine Translation for Intuitionistic Propositional Logic
TLDR
This work presents a proof-synthesis method for the negation-free propositional logic in which a DNN is used to guide proof search, and empirically observes that a generated proof term is close to a correct proof in terms of the tree edit distance of ASTs.
Deep Network Guided Proof Search
TLDR
Experimental evidence is given that with a hybrid, two-phase approach, deep learning based guidance can significantly reduce the average number of proof search steps while increasing the number of theorems proved.
HolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving
TLDR
A new dataset based on Higher-Order Logic (HOL) proofs is introduced for the purpose of developing new machine learning-based theorem-proving strategies, and the results of these models show the promise of applying machine learning to HOL theorem proving.
Neuro-Symbolic Program Synthesis
TLDR
This paper proposes a novel technique, Neuro-Symbolic Program Synthesis, that can automatically construct computer programs in a domain-specific language that are consistent with a set of input-output examples provided at test time, and demonstrates the effectiveness of the approach by applying it to the rich and complex domain of regular-expression-based string transformations.
RobustFill: Neural Program Learning under Noisy I/O
TLDR
This work directly compares both approaches for automatic program learning on a large-scale, real-world learning task and demonstrates that the strength of each approach is highly dependent on the evaluation metric and end-user application.
DeepMath - Deep Sequence Models for Premise Selection
TLDR
A two-stage approach is proposed that yields good results for the premise selection task on the Mizar corpus while avoiding the hand-engineered features of existing state-of-the-art models.
DeepCoder: Learning to Write Programs
TLDR
The approach is to train a neural network to predict properties of the program that generated the outputs from the inputs to augment search techniques from the programming languages community, including enumerative search and an SMT-based solver.
First-Order Logic and Automated Theorem Proving
  • M. Fitting
  • Mathematics, Computer Science
  • Graduate Texts in Computer Science
  • 1996
TLDR
This monograph on classical logic presents fundamental concepts and results in a rigorous mathematical style and is intended for those interested in computer science and mathematics at the beginning graduate level.
Can Neural Networks Understand Logical Entailment?
TLDR
Results show that convolutional networks present the wrong inductive bias for this class of problems relative to LSTM RNNs, that tree-structured neural networks outperform LSTMs due to their enhanced ability to exploit the syntax of logic, and that PossibleWorldNets outperform all benchmarks.
Premise Selection for Theorem Proving by Deep Graph Embedding
We propose a deep learning-based approach to the problem of premise selection: selecting mathematical statements relevant for proving a given conjecture. We represent a higher-order logic formula as…