# Holomorphic feedforward networks

@article{Douglas2021HolomorphicFN, title={Holomorphic feedforward networks}, author={Michael R. Douglas}, journal={ArXiv}, year={2021}, volume={abs/2105.03991} }

A very popular model in machine learning is the feedforward neural network (FFN). The FFN can approximate general functions and mitigate the curse of dimensionality. Here we introduce FFNs that represent sections of holomorphic line bundles on complex manifolds, and ask some questions about their approximating power. We also explain formal similarities between the standard approach to supervised learning and the problem of finding numerical Ricci-flat Kähler metrics, which allow carrying some…
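The key constraint the abstract alludes to is that every operation in such a network must itself be holomorphic, which rules out standard activations like ReLU or the modulus. The sketch below is an illustrative toy (the function names, layer widths, and the elementwise-squaring activation are our assumptions, not the paper's exact architecture): complex-linear layers composed with squaring yield a holomorphic polynomial map, which we verify numerically via the Cauchy–Riemann condition.

```python
import numpy as np

def holomorphic_ffn(z, weights):
    """Toy holomorphic feedforward network.

    Each layer applies a complex-linear map followed by elementwise
    squaring. Squaring is holomorphic (unlike ReLU or |z|^2), so the
    whole network is a holomorphic polynomial map of its inputs.
    """
    for W in weights:
        z = (W @ z) ** 2
    return z

rng = np.random.default_rng(0)
# Two layers mapping C^2 -> C^3 -> C^1 (widths chosen arbitrarily).
weights = [
    rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2)),
    rng.normal(size=(1, 3)) + 1j * rng.normal(size=(1, 3)),
]

z0 = np.array([0.3 + 0.1j, -0.2 + 0.4j])

# Numerical Cauchy-Riemann check: for a holomorphic map, the
# directional derivative is the same whether we step along the real
# or the imaginary axis (i.e. df/dz-bar = 0).
h = 1e-7
e0 = np.array([1.0 + 0.0j, 0.0 + 0.0j])
f0 = holomorphic_ffn(z0, weights)
d_real = (holomorphic_ffn(z0 + h * e0, weights) - f0) / h
d_imag = (holomorphic_ffn(z0 + 1j * h * e0, weights) - f0) / (1j * h)
print(np.allclose(d_real, d_imag, atol=1e-4))
```

Replacing the squaring with any non-holomorphic activation would make the two finite-difference derivatives disagree, which is exactly why these networks are restricted to polynomial-type nonlinearities.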

## 3 Citations

Numerical Calabi-Yau metrics from holomorphic networks.

- Physics, Mathematics
- 2020

We propose machine learning inspired methods for computing numerical Calabi-Yau (Ricci-flat Kähler) metrics, and implement them using Tensorflow/Keras. We compare them with previous work, and find…

Machine Learning Line Bundle Connections

- Physics
- 2021

Anthony Ashmore, Rehan Deen, Yang-Hui He, and Burt A. Ovrut. Kadanoff Center for Theoretical Physics, University of Chicago, IL 60637, USA; Sorbonne Université, CNRS, Laboratoire…

Calabi-Yau Metrics, Energy Functionals and Machine-Learning

- Computer Science, Physics · ArXiv
- 2021


## References

Showing 1–10 of 32 references.

Energy functionals for Calabi-Yau metrics

- Physics, Mathematics
- 2009

We identify a set of "energy" functionals on the space of metrics in a given Kähler class on a Calabi-Yau manifold, which are bounded below and minimized uniquely on the Ricci-flat metric in that…

Bergman Kernel from Path Integral

- Mathematics, Physics
- 2009

We rederive the expansion of the Bergman kernel on Kähler manifolds developed by Tian, Yau, Zelditch, Lu and Catlin, using path integral and perturbation theory, and generalize it to supersymmetric…

On the Banach spaces associated with multi-layer ReLU networks: Function representation, approximation theory and gradient descent dynamics

- Mathematics, Computer Science · CSIAM Transactions on Applied Mathematics
- 2020

The gradient flow defined this way is the natural continuous analog of the gradient descent dynamics for the associated multi-layer neural networks, and it is shown that the path-norm increases at most polynomially under this continuous gradient flow dynamics.

Understanding deep learning requires rethinking generalization

- Computer Science · ICLR
- 2017

These experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data, and confirm that simple depth two neural networks already have perfect finite sample expressivity.

Solving the quantum many-body problem with artificial neural networks

- Computer Science, Physics · Science
- 2017

A variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons is introduced, together with a reinforcement-learning scheme capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems.

Critical Points and Supersymmetric Vacua I

- Mathematics, Physics
- 2004

Supersymmetric vacua (‘universes’) of string/M theory may be identified with certain critical points of a holomorphic section (the ‘superpotential’) of a Hermitian holomorphic line bundle over a…

Numerical Calabi-Yau metrics from holomorphic networks.

- Physics, Mathematics
- 2020

We propose machine learning inspired methods for computing numerical Calabi-Yau (Ricci-flat Kähler) metrics, and implement them using Tensorflow/Keras. We compare them with previous work, and find…

Solving high-dimensional partial differential equations using deep learning

- Mathematics, Computer Science · Proceedings of the National Academy of Sciences
- 2018

A deep learning-based approach that handles general high-dimensional parabolic PDEs using backward stochastic differential equations; the gradient of the unknown solution is approximated by neural networks, much in the spirit of deep reinforcement learning, with the gradient playing the role of the policy function.

Reconciling modern machine-learning practice and the classical bias–variance trade-off

- Medicine, Computer Science · Proceedings of the National Academy of Sciences
- 2019

This work shows how classical theory and modern practice can be reconciled within a single unified performance curve and proposes a mechanism underlying its emergence, and provides evidence for the existence and ubiquity of double descent for a wide spectrum of models and datasets.

On a set of polarized Kähler metrics on algebraic manifolds

- Mathematics
- 1990

A projective algebraic manifold M is a complex submanifold of some projective space CP^N, N ≥ dim_C M = n. The hyperplane line bundle of CP^N restricts to an ample line bundle L on M. This bundle L is a…