Corpus ID: 8242113

DeepMath - Deep Sequence Models for Premise Selection

@inproceedings{Irving2016DeepMathD,
  title={DeepMath - Deep Sequence Models for Premise Selection},
  author={Geoffrey Irving and Christian Szegedy and Alexander A. Alemi and Niklas E{\'e}n and Fran{\c{c}}ois Chollet and Josef Urban},
  booktitle={NIPS},
  year={2016}
}
We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics. We propose a two-stage approach to this task that yields good results on the Mizar corpus while avoiding the hand-engineered features of existing state-of-the-art models. To our knowledge, this is the first time deep learning has been applied to theorem proving on a large scale.
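To make the premise-selection setting concrete, here is a minimal sketch of the task the abstract describes: given a conjecture and a pool of candidate premises, score each premise for relevance and keep the top-ranked ones. The paper trains deep sequence models for the scoring step; the toy bag-of-tokens encoder and cosine similarity below are purely illustrative stand-ins, and all statements and function names are invented for the example.

```python
from collections import Counter
import math

def encode(statement):
    # Toy encoder: bag-of-tokens counts. The paper instead uses learned
    # character- and word-level sequence models to embed each statement.
    return Counter(statement.split())

def score(conjecture_vec, premise_vec):
    # Cosine similarity between encodings; a learned model would output
    # a relevance score for the (conjecture, premise) pair.
    dot = sum(conjecture_vec[t] * premise_vec[t] for t in conjecture_vec)
    norm = (math.sqrt(sum(v * v for v in conjecture_vec.values()))
            * math.sqrt(sum(v * v for v in premise_vec.values())))
    return dot / norm if norm else 0.0

def select_premises(conjecture, premises, k=2):
    # Rank all candidate premises by relevance and keep the top k,
    # which are then handed to the automated theorem prover.
    cvec = encode(conjecture)
    ranked = sorted(premises, key=lambda p: score(cvec, encode(p)), reverse=True)
    return ranked[:k]

premises = [
    "addition of naturals is commutative",
    "multiplication of naturals is commutative",
    "every prime greater than 2 is odd",
]
print(select_premises("commutative addition on naturals", premises, k=2))
```

The point of the sketch is the pipeline shape (encode, score pairwise, rank, truncate), not the scoring function itself: replacing the bag-of-tokens encoder with a trained sequence model is exactly the substitution the paper studies.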
136 Citations
Premise Selection for Theorem Proving by Deep Graph Embedding
Premise Selection in Natural Language Mathematical Texts
Tree-Structure CNN for Automated Theorem Proving
Stateful Premise Selection by Recurrent Neural Networks
Guiding Inferences in Connection Tableau by Recurrent Neural Networks
Hammering Mizar by Learning Clause Guidance
DeepCoder: Learning to Write Programs
Proof Artifact Co-training for Theorem Proving with Language Models
Tree-Structured Variational Autoencoder
