Property Invariant Embedding for Automated Reasoning
@inproceedings{Olsk2020PropertyIE,
  title     = {Property Invariant Embedding for Automated Reasoning},
  author    = {Miroslav Ol{\v{s}}{\'a}k and Cezary Kaliszyk and Josef Urban},
  booktitle = {ECAI},
  year      = {2020}
}
Automated reasoning and theorem proving have recently become major challenges for machine learning. In other domains, representations that abstract over unimportant transformations, such as translations and rotations in vision, are becoming common. Standard methods of embedding mathematical formulas for learning theorem proving are, however, not yet able to handle many important transformations. In particular, embedding previously unseen labels, which often arise in…
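The abstract's core problem, embeddings that break on previously unseen labels, can be illustrated with a minimal sketch (not the paper's GNN method): a formula embedding becomes invariant to variable renaming if names are first replaced by indices of first appearance, so that a formula with fresh labels maps to the same structure as its renamed twin. The term format and the `canonicalize` helper below are hypothetical illustrations, not part of the paper.

```python
# Minimal sketch of label invariance (illustrative, not the paper's method).
# Terms are nested tuples: ("f", "x", ("g", "x")) stands for f(x, g(x)),
# where bare strings are variable leaves.

def canonicalize(term, mapping=None):
    """Replace variable names by de Bruijn-style indices in order of first
    appearance, so f(x, g(x)) and f(y, g(y)) yield the same structure."""
    if mapping is None:
        mapping = {}
    if isinstance(term, str):          # a variable leaf
        if term not in mapping:
            mapping[term] = f"v{len(mapping)}"
        return mapping[term]
    head, *args = term                 # function symbol plus arguments
    return (head,) + tuple(canonicalize(a, mapping) for a in args)
```

Any embedding computed on the canonicalized term is then unaffected by which concrete (possibly unseen) variable names appear in the input.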
26 Citations
A Study of Continuous Vector Representations for Theorem Proving
- Computer Science, J. Log. Comput.
- 2021
This paper develops an encoding that allows for logical properties to be preserved and is additionally reversible, which means that the tree shape of a formula including all symbols can be reconstructed from the dense vector representation.
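The reversibility property described above, recovering the full tree shape of a formula from its encoding, can be sketched in its simplest symbolic form (the paper's encoding is a dense vector; this hypothetical sketch only illustrates what reversibility means): a prefix traversal annotated with arities is enough to decode the original tree exactly.

```python
# Hypothetical sketch of a reversible tree serialization (not the paper's
# dense-vector encoding): prefix order plus explicit arities decodes uniquely.

def encode(tree):
    """Flatten a tuple tree into [(symbol, arity), ...] in prefix order."""
    head, args = tree[0], tree[1:]
    out = [(head, len(args))]
    for a in args:
        out.extend(encode(a))
    return out

def decode(seq, i=0):
    """Rebuild the tree from position i; returns (tree, next_position)."""
    (head, arity), i = seq[i], i + 1
    args = []
    for _ in range(arity):
        sub, i = decode(seq, i)
        args.append(sub)
    return (head,) + tuple(args), i
```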
An Experimental Study of Formula Embeddings for Automated Theorem Proving in First-Order Logic
- Computer Science, arXiv
- 2020
This paper studies and experimentally compares the pattern-based embeddings applied in current systems with popular graph-based encodings, most of which had not previously been considered in the theorem-proving context, and presents a detailed analysis across several dimensions of theorem-prover performance beyond proof completion rate alone.
Improving Graph Neural Network Representations of Logical Formulae with Subgraph Pooling
- Computer Science, arXiv
- 2019
This work proposes a novel approach for embedding logical formulae that is designed to overcome the representational limitations of prior approaches and achieves state-of-the-art performance on both premise selection and proof step classification.
Exploring Representation of Horn Clauses using GNNs
- Computer Science
- 2022
This work considers Constrained Horn Clauses (CHCs) as a standard representation of program verification problems, and proposes a new Relational Hypergraph Neural Network (R-HyGNN) architecture, extending Relational Graph Convolutional Networks, to handle hypergraphs.
Adversarial Learning to Reason in an Arbitrary Logic
- Computer Science, FLAIRS Conference
- 2022
This work proposes Monte-Carlo simulations guided by reinforcement learning that can work in an arbitrarily specified logic, without any human knowledge or set of problems, and practically demonstrates the feasibility of the approach in multiple logical systems.
The Role of Entropy in Guiding a Connection Prover
- Computer Science, TABLEAUX
- 2021
This work starts by incorporating a state-of-the-art learning algorithm, a graph neural network (GNN), into the plCoP theorem prover, and shows that proper entropy regularization, i.e., training the GNN not to be overconfident, greatly improves plCoP's performance on a large mathematical corpus.
ENIGMA Anonymous: Symbol-Independent Inference Guiding Machine (System Description)
- Computer Science, IJCAR
- 2020
An implementation of gradient-boosted and neural guidance for saturation-style automated theorem provers that does not depend on consistent symbol names across problems is described, evaluated on the MPTP large-theory benchmark, and shown to achieve real-time performance comparable to state-of-the-art symbol-based methods.
A Deep Reinforcement Learning Approach to First-Order Logic Theorem Proving
- Computer Science, AAAI
- 2021
TRAIL is introduced, a system that applies deep reinforcement learning to saturation-based theorem proving and leverages a novel neural representation of the state of a theorem prover and a novel characterization of the inference selection process in terms of an attention-based action policy.
A Deep Reinforcement Learning based Approach to Learning Transferable Proof Guidance Strategies
- Computer Science, arXiv
- 2019
It is shown that TRAIL's learned strategies perform comparably to an established heuristics-based theorem prover, suggesting that TRAIL's neural architecture is well suited for representing and processing logical formalisms.
Learning Theorem Proving Components
- Computer Science, TABLEAUX
- 2021
This work describes several algorithms and experiments with ENIGMA, advancing the idea of contextual evaluation based on learning important components of the graph of clauses, and equips the E/ENIGMA system with a graph neural network that chooses the next given clause based on its evaluation in the context of previously selected clauses.
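Several of the entries above (ENIGMA, TRAIL, the entropy-regularized plCoP work) plug a learned evaluation into the same place: the clause-selection step of a saturation prover's given-clause loop. The sketch below shows that loop in skeletal form, under the assumption that `score`, `infer`, and `is_empty_clause` are supplied by the caller; it is a generic illustration of where guidance attaches, not any one system's implementation.

```python
# Skeletal given-clause loop (illustrative): a learned score replaces the
# heuristic ordering of unprocessed clauses.

def given_clause_loop(axioms, score, infer, is_empty_clause, max_steps=1000):
    processed, unprocessed = [], list(axioms)
    for _ in range(max_steps):
        if not unprocessed:
            return None                      # saturated without a proof
        unprocessed.sort(key=score)          # learned/heuristic evaluation
        given = unprocessed.pop(0)           # select best clause
        if is_empty_clause(given):
            return given                     # proof found
        unprocessed.extend(infer(given, processed))  # generate consequences
        processed.append(given)
    return None                              # step budget exhausted
```

Symbol-independent guidance (as in ENIGMA Anonymous) corresponds to a `score` function that, like the canonicalization idea above, does not consult concrete symbol names.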
References
Showing 1–10 of 31 references
Efficient Semantic Features for Automated Reasoning over Large Theories
- Computer Science, IJCAI
- 2015
This work proposes novel semantic features characterizing the statements in large formal knowledge bases, implements them efficiently using deductive-AI data structures such as substitution trees and discrimination nets, and shows that they significantly improve the strength of existing knowledge-selection and automated-reasoning methods over such knowledge bases.
Learning search control knowledge for equational deduction
- Computer Science, DISKI
- 2000
This thesis develops techniques to automatically learn good search heuristics to control the proof search of a superposition-based theorem prover for clausal logic with equality and describes a variant of the superposition calculus and an efficient proof procedure implementing this calculus.
HolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving
- Computer Science, ICLR
- 2017
A new dataset based on Higher-Order Logic (HOL) proofs is introduced for the purpose of developing new machine-learning-based theorem-proving strategies; results of baseline models on it show the promise of applying machine learning to HOL theorem proving.
Premise Selection for Theorem Proving by Deep Graph Embedding
- Computer Science, NIPS
- 2017
We propose a deep learning-based approach to the problem of premise selection: selecting mathematical statements relevant for proving a given conjecture. We represent a higher-order logic formula as…
Learning Continuous Semantic Representations of Symbolic Expressions
- Computer Science, ICML
- 2017
An exhaustive evaluation on the task of checking equivalence over a highly diverse class of symbolic algebraic and boolean expressions shows that the proposed neural equivalence networks significantly outperform existing architectures.
Deep Network Guided Proof Search
- Computer Science, LPAR
- 2017
Experimental evidence is given that with a hybrid, two-phase approach, deep learning based guidance can significantly reduce the average number of proof search steps while increasing the number of theorems proved.
Reinforcement Learning of Theorem Proving
- Computer Science, NeurIPS
- 2018
A theorem-proving algorithm that uses practically no domain heuristics to guide its connection-style proof search, and that solves over 40% more problems than a baseline prover within the same number of inferences, an unusually high improvement in this hard AI domain.
Premise selection with neural networks and distributed representation of features
- Computer Science, arXiv
- 2018
This work presents the problem of selecting relevant premises for a proof of a given statement and uses a dimensionality-reduction technique to replace long, sparse signature vectors with compact, dense embedded versions.
Premise Selection and External Provers for HOL4
- Computer Science, CPP
- 2015
An add-on to the HOL4 proof assistant and an adaptation of the HOL(y)Hammer system that provide machine-learning-based premise selection and automated reasoning for HOL4, directly benefiting HOL4 users by automatically finding proof dependencies that can be reconstructed by Metis.