Fast and Slow Enigmas and Parental Guidance

@inproceedings{Goertzel2021FastAS,
  title={Fast and Slow Enigmas and Parental Guidance},
  author={Zarathustra Amadeus Goertzel and Karel Chvalovsk{\'y} and Jan Jakub{\r{u}}v and Miroslav Ol{\v{s}}{\'a}k and Josef Urban},
  booktitle={International Symposium on Frontiers of Combining Systems},
  year={2021}
}
We describe several additions to the ENIGMA system that guides clause selection in the E automated theorem prover. First, we significantly speed up its neural guidance by adding server-based GPU evaluation. The second addition is motivated by fast weight-based rejection filters that are currently used in systems like E and Prover9. Such systems can be made more intelligent by instead training fast versions of ENIGMA that implement more intelligent pre-filtering. This results in combinations of… 
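The combination of fast and slow guidance sketched in the abstract can be read as a two-stage clause-selection loop: a cheap, trainable pre-filter rejects most generated clauses outright, and only the survivors are scored in batches by the slower neural model (in the paper, evaluated on a server-based GPU). The Python below is a minimal illustrative sketch of that control flow, not the ENIGMA/E implementation; the names Clause, fast_prefilter, slow_score, and select_given_clause and the toy scoring rules are assumptions introduced here for illustration only.

from dataclasses import dataclass
from typing import List

@dataclass
class Clause:
    # Placeholder representation; the real system works on E's clause terms
    # and derives feature vectors from them.
    literals: tuple

def fast_prefilter(clause: Clause) -> bool:
    # "Fast thinking": a cheap check standing in for a small trained model
    # or a weight-based rejection filter; it must be cheap enough to run on
    # every generated clause.
    return len(clause.literals) <= 8

def slow_score(batch: List[Clause]) -> List[float]:
    # "Slow thinking": an expensive evaluation standing in for a batched
    # request to a neural model on a GPU server; here just a toy heuristic
    # that prefers shorter clauses.
    return [1.0 / (1 + len(c.literals)) for c in batch]

def select_given_clause(unprocessed: List[Clause]) -> Clause:
    # Pre-filter cheaply, score the survivors in one batch, and return the
    # highest-scoring clause as the next given clause. If the filter rejects
    # everything, fall back to the full set rather than getting stuck.
    survivors = [c for c in unprocessed if fast_prefilter(c)] or list(unprocessed)
    scores = slow_score(survivors)
    return survivors[max(range(len(survivors)), key=lambda i: scores[i])]

On this reading, the pre-filter keeps the batches sent to the expensive evaluator small, which is what lets server-based GPU evaluation speed up the search rather than dominate it; the sketch shows only the control flow, not the trained models.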

The Isabelle ENIGMA

The authors' final best single-strategy ENIGMA and premise-selection system improves on the best previous version of E by 25.3% in 15 seconds, also outperforming all other previous ATP and SMT systems.

Solving Quantitative Reasoning Problems with Language Models

Language models have achieved remarkable performance on a wide range of tasks that require natural language understanding. Nevertheless, state-of-the-art models have generally struggled with tasks that require quantitative reasoning.
