• Corpus ID: 139105904

Graph Neural Reasoning for 2-Quantified Boolean Formula Solvers

@article{Yang2019GraphNR,
  title={Graph Neural Reasoning for 2-Quantified Boolean Formula Solvers},
  author={Zhanfu Yang and Fei Wang and Ziliang Chen and Guannan Wei and Tiark Rompf},
  journal={ArXiv},
  year={2019},
  volume={abs/1904.12084}
}
In this paper, we investigate the feasibility of learning GNN (Graph Neural Network) based solvers and GNN-based heuristics for specified QBF (Quantified Boolean Formula) problems. […] Then we show how to learn a heuristic CEGAR 2QBF solver. We further explore generalizing GNN-based heuristics to larger unseen instances, and uncover some interesting challenges. In summary, this paper provides a comprehensive surveying view of applying GNN-embeddings to specified QBF solvers, and aims to offer…
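Neither the snippet above nor the record fixes a concrete encoding, but GNN-based SAT/QBF work in this line typically represents the CNF matrix as a bipartite literal–clause graph, with each variable's quantifier attached as a node feature. The sketch below illustrates such an encoding under that assumption; the function encode_2qbf and the exact feature layout are hypothetical, not the paper's architecture.

```python
import numpy as np

def encode_2qbf(n_vars, universal_vars, clauses):
    """Hypothetical encoding of a 2QBF (forall/exists prefix, CNF matrix) as a
    literal-clause graph, in the spirit of NeuroSAT-style encodings.

    n_vars         : number of variables, numbered 1..n_vars
    universal_vars : set of variable indices bound by the universal quantifier
                     (the remaining variables are existential)
    clauses        : list of clauses, each a list of signed ints (e.g. [1, -3])

    Returns (adjacency, literal_features):
      adjacency[l, c] = 1 iff literal l occurs in clause c
                        (literal 2*(v-1) is v, literal 2*(v-1)+1 is -v)
      literal_features[l] = [is_positive, is_universal]
    """
    n_lits = 2 * n_vars
    adjacency = np.zeros((n_lits, len(clauses)), dtype=np.float32)
    literal_features = np.zeros((n_lits, 2), dtype=np.float32)

    for v in range(1, n_vars + 1):
        pos, neg = 2 * (v - 1), 2 * (v - 1) + 1
        literal_features[pos, 0] = 1.0                      # polarity feature
        is_univ = 1.0 if v in universal_vars else 0.0
        literal_features[pos, 1] = is_univ                  # quantifier feature
        literal_features[neg, 1] = is_univ

    for c, clause in enumerate(clauses):
        for lit in clause:
            v = abs(lit)
            l = 2 * (v - 1) + (0 if lit > 0 else 1)
            adjacency[l, c] = 1.0
    return adjacency, literal_features

# Example: forall x1 . exists x2 . (x1 or x2) and (not x1 or not x2)
A, F = encode_2qbf(2, {1}, [[1, 2], [-1, -2]])
```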

Graph Neural Reasoning May Fail in Certifying Boolean Unsatisfiability

It is conjectured, with some evidence, that generally-defined GNNs present several limitations in certifying the unsatisfiability (UNSAT) of Boolean formulae, which implies that GNNs may fail to learn logical reasoning tasks if they contain proving UNSAT as a sub-problem, as most predicate logic formulae do.

Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning

This work demonstrates how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning; applied to a backtracking search algorithm, the learned heuristic solves significantly more formulas than the existing handwritten heuristic.

OCTAL: Graph Representation Learning for LTL Model Checking

A novel GRL-based framework, OCTAL, is designed to learn the representation of the graph-structured system and specification, reducing the model checking problem to binary classification in the latent space.

Enhancing SAT solvers with glue variable predictions

This work trains a simpler network architecture allowing CPU inference even for large industrial problems with millions of clauses, training instead to predict glue variables, a target for which it is easier to generate labelled data and which can also be formulated as a reinforcement learning task.

A GNN Based Approach to LTL Model Checking

This paper expresses the model as a GNN and proposes a novel node embedding framework that encodes the LTL property and characteristics of the model; experimental results show that the framework is up to 17 times faster than state-of-the-art tools.

From Shallow to Deep Interactions Between Knowledge Representation, Reasoning and Machine Learning

This paper proposes a tentative and original survey of meeting points between Knowledge Representation and Reasoning (KRR) and Machine Learning (ML), two areas which have been developing quite…

Bidirectional Graph Reasoning Network for Panoptic Segmentation

This work proposes a Bidirectional Graph Reasoning Network (BGRNet), which incorporates graph structure into the conventional panoptic segmentation network to mine the intra-modular and inter-modular relations within and between foreground things and background stuff classes, and a Bidirectional Graph Connection Module to diffuse information across branches in a learnable fashion.

References

Abstraction-Based Algorithm for 2QBF

This paper proposes an algorithm for solving 2QBF satisfiability by counterexample-guided abstraction refinement (CEGAR) and presents a comparison of a prototype implementing the presented algorithm to state-of-the-art QBF solvers, showing that it solves a larger set of instances.
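For orientation, a CEGAR loop for a 2QBF of the form ∃X ∀Y. φ alternates a candidate query against the counterexamples collected so far with a verification query that searches for a new counterexample. The following is a minimal sketch of that loop; the brute-force enumerations stand in for the SAT-solver oracle calls a real implementation would issue, and cegar_exists_forall is an illustrative name, not code from the cited paper.

```python
from itertools import product

def cegar_exists_forall(x_vars, y_vars, phi):
    """Minimal CEGAR sketch for a 2QBF of the form  exists X . forall Y . phi(X, Y).

    phi is a Python predicate over a dict {variable_name: bool}.  Both "oracle"
    calls below are brute-force enumerations standing in for SAT-solver queries.
    Returns a winning assignment to X, or None if the formula is false.
    """
    counterexamples = []                       # Y-assignments refuting earlier candidates
    while True:
        # Candidate step: find an X-assignment consistent with all counterexamples.
        candidate = None
        for bits in product([False, True], repeat=len(x_vars)):
            x = dict(zip(x_vars, bits))
            if all(phi({**x, **y}) for y in counterexamples):
                candidate = x
                break
        if candidate is None:
            return None                        # abstraction unsatisfiable: formula is false
        # Verification step: look for a Y-assignment that falsifies phi.
        refutation = None
        for bits in product([False, True], repeat=len(y_vars)):
            y = dict(zip(y_vars, bits))
            if not phi({**candidate, **y}):
                refutation = y
                break
        if refutation is None:
            return candidate                   # no counterexample: the formula is true
        counterexamples.append(refutation)     # refine the abstraction and retry

# Example: exists x1 . forall y1 . (x1 or y1) and (x1 or not y1)  -- true with x1 = True
print(cegar_exists_forall(["x1"], ["y1"],
      lambda a: (a["x1"] or a["y1"]) and (a["x1"] or not a["y1"])))
```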

Towards Generalization in QBF Solving via Machine Learning

This paper argues that a solver benefits from generalizing a set of individual wins into a strategy, and realizes this idea on top of the competitive RAReQS algorithm by utilizing machine learning, which enables learning shorter strategies.

Solving QBF with counterexample guided refinement

Two promising avenues in QBF are opened: CEGAR-driven solvers as an alternative to existing approaches and a novel type of learning in DPLL.

Learning a SAT Solver from Single-Bit Supervision

Although it is not competitive with state-of-the-art SAT solvers, NeuroSAT can solve problems that are substantially larger and more difficult than it ever saw during training by simply running for more iterations.
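The single bit of supervision is the satisfiable/unsatisfiable label; the network itself refines literal and clause embeddings by passing messages over the literal–clause graph. The toy round below illustrates the general idea with plain NumPy; the dimensions, the ReLU aggregation, and the omission of NeuroSAT's LSTM updates and literal-negation coupling are deliberate simplifications, not the actual architecture.

```python
import numpy as np

def message_passing_round(adjacency, lit_emb, clause_emb, w_lc, w_cl):
    """One simplified round of literal <-> clause message passing.

    adjacency  : (n_literals, n_clauses) 0/1 incidence matrix
    lit_emb    : (n_literals, d) literal embeddings
    clause_emb : (n_clauses, d) clause embeddings
    w_lc, w_cl : (d, d) weight matrices for the two message directions
    """
    # Clauses aggregate messages from the literals they contain.
    clause_emb = np.maximum(adjacency.T @ (lit_emb @ w_lc), 0.0)
    # Literals aggregate messages from the clauses they occur in.
    lit_emb = np.maximum(adjacency @ (clause_emb @ w_cl), 0.0)
    return lit_emb, clause_emb

# Tiny example: 4 literals (x1, -x1, x2, -x2) and 2 clauses.
A = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype=np.float32)
rng = np.random.default_rng(0)
d = 8
L, C = rng.normal(size=(4, d)), rng.normal(size=(2, d))
L, C = message_passing_round(A, L, C, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
```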

Computing Vertex Eccentricity in Exponentially Large Graphs: QBF Formulation and Solution

This work proposes a novel SAT-based decision procedure optimized for Quantified Boolean Formulas (QBFs) and presents encouraging experimental evidence showing its superiority to other public-domain solvers.

A Model for Generating Random Quantified Boolean Formulas

This work defines and studies a general model for generating random QBF instances, and exhibits experimental results showing that the model bears certain desirable similarities to the random SAT model, as well as a number of theoretical results concerning the model.
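As a rough illustration of what such a generator might look like, the sketch below draws CNF clauses with a fixed number of universal and existential literals each; the specific parameters and clause shape are assumptions for the sake of the example and are not necessarily the exact model defined in the cited paper.

```python
import random

def random_2qbf(n_forall, n_exists, n_clauses, a_per_clause=1, e_per_clause=2, seed=0):
    """Hedged sketch of a random 2QBF generator in CNF.

    Variables 1..n_forall are universally quantified; the next n_exists are
    existential.  Each clause draws a_per_clause distinct universal variables
    and e_per_clause distinct existential variables, each negated with
    probability 1/2.  Parameters and clause shape are illustrative assumptions.
    """
    rng = random.Random(seed)
    universals = list(range(1, n_forall + 1))
    existentials = list(range(n_forall + 1, n_forall + n_exists + 1))
    clauses = []
    for _ in range(n_clauses):
        vars_in_clause = (rng.sample(universals, a_per_clause)
                          + rng.sample(existentials, e_per_clause))
        clauses.append([v if rng.random() < 0.5 else -v for v in vars_in_clause])
    return universals, existentials, clauses

# Example: 4 universal and 6 existential variables, 20 clauses with
# 1 universal and 2 existential literals each.
prefix_forall, prefix_exists, matrix = random_2qbf(4, 6, 20)
```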

An Effective Algorithm for the Futile Questioning Problem

This paper develops a solution algorithm for the general case of Q-ALL SAT that uses backtracking search and a new form of clause learning, and is substantially faster than state-of-the-art solvers for quantified Boolean formulas.

Incremental Determinization

The algorithm is presented in analogy to search algorithms for SAT, explaining how propagation, decisions, and conflicts are lifted from values to Skolem functions.

Learning To Solve Circuit-SAT: An Unsupervised Differentiable Approach

A neural framework that learns to solve the Circuit Satisfiability problem, built upon a rich embedding architecture that encodes the problem structure and an end-to-end differentiable training procedure that mimics reinforcement learning and trains the model directly toward solving the SAT problem.

The complexity of theorem-proving procedures

  • S. Cook
  • STOC, 1971
It is shown that any recognition problem solved by a polynomial time-bounded nondeterministic Turing machine can be "reduced" to the problem of determining whether a given propositional formula is a tautology.