# Deep multi-task mining Calabi–Yau four-folds

```bibtex
@article{Erbin2022DeepMM,
  title   = {Deep multi-task mining Calabi--Yau four-folds},
  author  = {Harold Erbin and Riccardo Finotello and Robin Schneider and Mohamed Tamaazousti},
  journal = {Machine Learning: Science and Technology},
  year    = {2022},
  volume  = {3}
}
```

We continue earlier efforts in computing the dimensions of tangent space cohomologies of Calabi–Yau manifolds using deep learning. In this paper, we consider the dataset of all Calabi–Yau four-folds constructed as complete intersections in products of projective spaces. Employing neural networks inspired by state-of-the-art computer vision architectures, we improve earlier benchmarks and demonstrate that all four non-trivial Hodge numbers can be learned at the same time using a multi-task…
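The multi-task idea described in the abstract, a single network trunk feeding one output head per Hodge number, can be sketched as a forward pass. The matrix sizes, hidden width, and weights below are illustrative assumptions, not the paper's actual architecture (which uses an Inception-style convolutional trunk):

```python
import numpy as np

rng = np.random.default_rng(0)

# CICY four-folds are encoded as integer configuration matrices; the shape
# here is a hypothetical fixed padding, not the dataset's true dimensions.
MAX_ROWS, MAX_COLS = 16, 20
HIDDEN = 64
HODGE_NUMBERS = ["h11", "h21", "h31", "h22"]  # the four non-trivial Hodge numbers

# One shared trunk plus a task-specific regression head per Hodge number:
# this weight sharing is what makes the setup "multi-task".
W_shared = rng.normal(0.0, 0.1, (MAX_ROWS * MAX_COLS, HIDDEN))
heads = {h: rng.normal(0.0, 0.1, (HIDDEN,)) for h in HODGE_NUMBERS}

def multi_task_forward(config_matrix: np.ndarray) -> dict:
    """Shared representation feeding four task-specific heads."""
    x = config_matrix.reshape(-1)
    shared = np.maximum(0.0, x @ W_shared)  # ReLU trunk features
    return {h: float(shared @ w) for h, w in heads.items()}

matrix = rng.integers(0, 5, (MAX_ROWS, MAX_COLS)).astype(float)
preds = multi_task_forward(matrix)
print(sorted(preds))  # one prediction per Hodge number
```

Training would add a loss summed over the four heads, so that gradients from every Hodge number shape the shared trunk.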

## 4 Citations

### Identifying equivalent Calabi-Yau topologies: A discrete challenge from math and physics for machine learning

- Mathematics
- ArXiv
- 2022

We review briefly the characteristic topological data of Calabi–Yau threefolds and focus on the question of when two threefolds are equivalent through related topological data. This provides an…

### Machine learning Calabi-Yau hypersurfaces

- Mathematics, Computer Science
- Physical Review D
- 2022

This work revisits the classic database of weighted projective spaces that admit Calabi–Yau 3-fold hypersurfaces, equipped with a diverse set of tools from the machine-learning toolbox, and identifies a previously unnoticed clustering in the Calabi–Yau data.

### Machine Learning Algebraic Geometry for Physics

- Computer Science
- 2022

A chapter contribution to the book Machine learning and Algebraic Geometry, edited by A. Kasprzyk et al.

### Algorithmically Solving the Tadpole Problem

- Computer Science
- Advances in Applied Clifford Algebras
- 2022

The results support the Tadpole Conjecture: the minimal charge grows linearly with the dimension of the lattice and, for K3 × K3, this charge is larger than allowed by tadpole cancellation.

## References

Showing 1–10 of 59 references

### Machine learning Calabi-Yau four-folds

- Mathematics, Computer Science
- Physics Letters B
- 2021

### Machine learning for complete intersection Calabi-Yau manifolds: a methodological study

- Computer Science
- ArXiv
- 2020

This study obtains improved accuracy of machine-learning computations of Hodge numbers with respect to the existing literature, serving as a proof of concept that neural networks can be valuable for studying the properties of geometries appearing in string theory.

### Inception neural network for complete intersection Calabi–Yau 3-folds

- Computer Science, Mathematics
- Mach. Learn. Sci. Technol.
- 2021

A neural network inspired by Google’s Inception model is introduced to compute the Hodge number h^{1,1} of complete intersection Calabi–Yau (CICY) 3-folds, reaching 97% accuracy with just 30% of the data used for training.
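The defining feature of an Inception-style block is several convolution branches of different kernel sizes run in parallel on the same input and then concatenated. A minimal single-channel sketch with numpy (kernel sizes and shapes are illustrative, not those of the cited network):

```python
import numpy as np

def conv2d_same(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Naive 'same'-padded 2D convolution for a single channel."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def inception_block(x: np.ndarray, rng) -> np.ndarray:
    """Parallel 1x1, 3x3, and 5x5 branches stacked along a channel axis,
    mirroring Inception's multi-scale feature extraction."""
    branches = []
    for size in (1, 3, 5):
        kernel = rng.normal(0.0, 0.1, (size, size))
        branches.append(np.maximum(0.0, conv2d_same(x, kernel)))  # ReLU
    return np.stack(branches)  # shape: (3, H, W)

rng = np.random.default_rng(1)
x = rng.integers(0, 5, (12, 15)).astype(float)  # toy configuration matrix
features = inception_block(x, rng)
print(features.shape)  # (3, 12, 15)
```

The parallel branches let the network weigh small- and large-scale patterns in the configuration matrix without committing to one receptive-field size.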

### Rethinking the Inception Architecture for Computer Vision

- Computer Science
- 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016

This work explores ways to scale up networks so that the added computation is used as efficiently as possible, through suitably factorized convolutions and aggressive regularization.

### Explore and Exploit with Heterotic Line Bundle Models

- Computer Science
- Fortschritte der Physik
- 2020

Deep reinforcement learning is used to explore a class of heterotic SU(5) GUT models constructed from line bundle sums over complete intersection Calabi–Yau (CICY) manifolds; the study concludes that the agents detect hidden structures in the compactification data, some of which are of a general nature.

### Deep-Learning the Landscape

- Physics
- 2017

We propose a paradigm to deep-learn the ever-expanding databases which have emerged in mathematical physics and particle phenomenology, as diverse as the statistics of string vacua or combinatorial…

### ImageNet classification with deep convolutional neural networks

- Computer Science
- Commun. ACM
- 2012

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images of the ImageNet LSVRC-2010 contest into 1000 different classes; it employed a recently developed regularization method called "dropout", which proved very effective.