Corpus ID: 49903958

Learning Heuristics for Automated Reasoning through Deep Reinforcement Learning

@article{Lederman2018LearningHF,
  title={Learning Heuristics for Automated Reasoning through Deep Reinforcement Learning},
  author={Gil Lederman and Markus N. Rabe and S. Seshia},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.08058}
}
We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We consider search algorithms for quantified Boolean logics, which can already solve formulas of impressive size, up to hundreds of thousands of variables. The main challenge is to find a representation of the problem that lends itself to making predictions in a scalable way. The heuristics learned through our approach significantly improve over the handwritten heuristics for several sets of formulas…
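The core idea, learning a branching heuristic with reinforcement learning instead of hand-writing it, can be illustrated with a toy sketch. None of the code below comes from the paper (which uses a graph neural network over QBF formulas for scalability); this is a minimal, self-contained stand-in: a backtracking SAT search whose variable selection is a softmax policy over simple hand-picked features, trained with REINFORCE to minimize the number of branching decisions. The feature set and all function names are illustrative assumptions.

```python
import math
import random

def make_formula(n_vars, n_clauses, k, rng):
    """Random k-CNF formula: each clause is a tuple of nonzero ints,
    where the sign encodes the literal's polarity."""
    return [tuple(rng.choice([-1, 1]) * v
                  for v in rng.sample(range(1, n_vars + 1), k))
            for _ in range(n_clauses)]

def var_features(formula, assignment, var):
    """Features of a candidate branching variable: positive and negative
    occurrence counts in clauses not yet satisfied, plus a bias term."""
    pos = neg = 0
    for clause in formula:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied under the partial assignment
        for l in clause:
            if abs(l) == var:
                pos += l > 0
                neg += l < 0
    return [float(pos), float(neg), 1.0]

def solve(formula, n_vars, weights, rng, trace):
    """Backtracking SAT search; the branching variable is sampled from a
    softmax over learned linear scores. `trace` records, per decision,
    the candidate features, softmax probabilities, and chosen index."""
    def rec(assignment):
        for c in formula:  # fail if some clause is already falsified
            if all(abs(l) in assignment and assignment[abs(l)] != (l > 0)
                   for l in c):
                return False
        if len(assignment) == n_vars:
            return True
        free = [v for v in range(1, n_vars + 1) if v not in assignment]
        feats = [var_features(formula, assignment, v) for v in free]
        scores = [sum(w * f for w, f in zip(weights, fv)) for fv in feats]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        probs = [e / sum(exps) for e in exps]
        idx = rng.choices(range(len(free)), probs)[0]
        trace.append((feats, probs, idx))
        for value in (True, False):
            assignment[free[idx]] = value
            if rec(assignment):
                return True
            del assignment[free[idx]]
        return False
    return rec({})

def train(episodes=100, n_vars=8, n_clauses=28, lr=0.01, seed=0):
    """REINFORCE with a moving-average baseline: the reward is the negative
    number of branching decisions, so fewer decisions is better."""
    rng = random.Random(seed)
    weights = [0.0, 0.0, 0.0]
    baseline = 0.0
    for episode in range(episodes):
        formula = make_formula(n_vars, n_clauses, 3, rng)
        trace = []
        solve(formula, n_vars, weights, rng, trace)
        reward = -float(len(trace))
        baseline = reward if episode == 0 else 0.9 * baseline + 0.1 * reward
        advantage = reward - baseline
        for feats, probs, idx in trace:
            for j in range(len(weights)):  # gradient of log softmax(scores)
                expected = sum(p * fv[j] for p, fv in zip(probs, feats))
                weights[j] += lr * advantage * (feats[idx][j] - expected)
    return weights
```

The learned policy plays the same role as a hand-tuned branching heuristic such as VSIDS; the paper's contribution is making this work at scale, where the number of variables varies per formula and a fixed feature vector no longer suffices, hence its graph-based representation.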
Citations

Learning Local Search Heuristics for Boolean Satisfiability
A graph neural network is incorporated into a stochastic local search algorithm to act as the variable selection heuristic, and it is demonstrated that the learned heuristics find satisfying assignments in fewer steps than a generic heuristic.
Automated Theorem Proving in Intuitionistic Propositional Logic by Deep Reinforcement Learning
This paper proposes a deep reinforcement learning algorithm for proof search in intuitionistic propositional logic and shows that its prover outperforms Coq's tauto tactic, a prover based on human-engineered heuristics.
Learning Clause Deletion Heuristics with Reinforcement Learning
We propose a method for training clause deletion heuristics in DPLL-based solvers using reinforcement learning. We have implemented it as part of a software framework, SAT-Gym, which we plan to…
A Deep Reinforcement Learning based Approach to Learning Transferable Proof Guidance Strategies
It is shown that TRAIL's learned strategies provide performance comparable to an established heuristics-based theorem prover, suggesting that the neural architecture in TRAIL is well suited for representing and processing logical formalisms.
Learning to Guide a Saturation-Based Theorem Prover
TRAIL is the first reinforcement learning-based approach to theorem proving to exceed the performance of a state-of-the-art traditional theorem prover on a standard theorem proving benchmark, solving up to 17% more problems.
Learning to Perform Local Rewriting for Combinatorial Optimization
This paper proposes NeuRewriter, a policy that picks heuristics and rewrites local components of the current solution to iteratively improve it until convergence; it captures the general structure of combinatorial problems and shows strong performance on three versatile tasks.
Graph Neural Networks for Reasoning 2-Quantified Boolean Formulas
This paper provides a comprehensive survey of applying GNN-based embeddings to 2QBF problems, and aims to offer insights into applying machine learning tools to more complicated symbolic reasoning problems.
Improving SAT Solver Heuristics with Graph Networks and Reinforcement Learning
GQSAT is able to reduce the number of iterations required to solve SAT problems by 2-3x, and it generalizes to unsatisfiable SAT instances, as well as to problems with 5x more variables than it was trained on.
From Shallow to Deep Interactions Between Knowledge Representation, Reasoning and Machine Learning
This paper proposes a tentative and original survey of meeting points between Knowledge Representation and Reasoning (KRR) and Machine Learning (ML), two areas which have been developing quite…
Exact Combinatorial Optimization with Graph Convolutional Neural Networks
A new graph convolutional neural network model is proposed for learning branch-and-bound variable selection policies, which leverages the natural variable-constraint bipartite graph representation of mixed-integer linear programs.

References

Showing 1-10 of 55 references
Automated Theorem Proving in Intuitionistic Propositional Logic by Deep Reinforcement Learning
This paper proposes a deep reinforcement learning algorithm for proof search in intuitionistic propositional logic and shows that its prover outperforms Coq's tauto tactic, a prover based on human-engineered heuristics.
Reinforcement Learning of Theorem Proving
A theorem proving algorithm that uses practically no domain heuristics for guiding its connection-style proof search, and that solves, within the same number of inferences, over 40% more problems than a baseline prover, an unusually high improvement in this hard AI domain.
Learning Rate Based Branching Heuristic for SAT Solvers
This paper develops a branching heuristic based on a well-known multi-armed bandit algorithm, exponential recency weighted average, called learning rate branching (LRB), and implements it as part of MiniSat and CryptoMiniSat, where it improves on the state of the art.
Learning to Solve SMT Formulas
This work phrases the challenge of solving SMT formulas as a tree search problem in which a transformation is applied to the input formula at each step until the formula is solved, and synthesizes a strategy, in the form of a loop-free program with branches, that guides the SMT solver to decide formulas more efficiently.
DeepMath - Deep Sequence Models for Premise Selection
A two-stage approach is proposed that yields good results for the premise selection task on the Mizar corpus while avoiding the hand-engineered features of existing state-of-the-art models.
Fast Numerical Program Analysis with Reinforcement Learning
The approach leverages the idea of online decomposition to define a space of new approximate transformers with varying degrees of precision and performance, and applies Q-learning with linear function approximation to compute an optimized context-sensitive policy that chooses transformers during analysis.
Learning Continuous Semantic Representations of Symbolic Expressions
An exhaustive evaluation on the task of checking equivalence over a highly diverse class of symbolic algebraic and Boolean expression types is performed, showing that the proposed neural equivalence networks significantly outperform existing architectures.
Learning Combinatorial Optimization Algorithms over Graphs
This paper proposes a unique combination of reinforcement learning and graph embedding that behaves like a meta-algorithm: it incrementally constructs a solution, with each action determined by the output of a graph embedding network capturing the current state of the solution.
Towards Generalization in QBF Solving via Machine Learning
This paper argues that a solver benefits from generalizing a set of individual wins into a strategy, building machine learning on top of the competitive RAReQS algorithm, which enables learning shorter strategies.
Can Neural Networks Understand Logical Entailment?
Results show that convolutional networks present the wrong inductive bias for this class of problems relative to LSTM RNNs, that tree-structured neural networks outperform LSTMs due to their enhanced ability to exploit the syntax of logic, and that PossibleWorldNets outperform all benchmarks.