# Why is AI hard and Physics simple?

```bibtex
@article{Roberts2021WhyIA,
  title   = {Why is AI hard and Physics simple?},
  author  = {Daniel A. Roberts},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2104.00008}
}
```

We discuss why AI is hard and why physics is simple. We discuss how physical intuition and the approach of theoretical physics can be brought to bear on the field of artificial intelligence and specifically machine learning. We suggest that the underlying project of machine learning and the underlying project of physics are strongly coupled through the principle of sparsity, and we call upon theoretical physicists to work on AI as physicists. As a first step in that direction, we discuss an…

## 3 Citations

The edge of chaos: quantum field theory and deep neural networks

- Computer Science · SciPost Physics
- 2022

This work explicitly constructs the quantum field theory corresponding to a general class of deep neural networks, encompassing both recurrent and feedforward architectures, and provides a first-principles approach to the rapidly emerging NN-QFT correspondence.

Exact priors of finite neural networks

- Computer Science · ArXiv
- 2021

This work derives exact solutions for the output priors for individual input examples of a class of finite fully-connected feedforward Bayesian neural networks.
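The finite-width effect at issue here can be illustrated with a quick Monte Carlo sketch (not the paper's exact solutions): sampling the output prior of a one-hidden-layer tanh network shows a clearly non-Gaussian distribution at small width that Gaussianizes as the width grows. The architecture, priors, and widths below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prior_outputs(width, n_samples=50_000, x=1.0):
    """Draw outputs f(x) of a one-hidden-layer tanh network with
    standard-normal weight priors (an illustrative choice, not the
    paper's exact setup)."""
    w = rng.standard_normal((n_samples, width))   # input-to-hidden weights
    v = rng.standard_normal((n_samples, width))   # hidden-to-output weights
    return (v * np.tanh(w * x)).sum(axis=1) / np.sqrt(width)

def excess_kurtosis(f):
    """Zero for a Gaussian; positive for heavy-tailed distributions."""
    f = f - f.mean()
    return (f ** 4).mean() / (f ** 2).mean() ** 2 - 3.0

kurt_narrow = excess_kurtosis(sample_prior_outputs(width=1))
kurt_wide = excess_kurtosis(sample_prior_outputs(width=100))
# The narrow network's prior is visibly non-Gaussian; the wide one
# approaches the Gaussian-process limit, where the excess kurtosis vanishes.
```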

Deep Learning Approaches to Surrogates for Solving the Diffusion Equation for Mechanistic Real-World Simulations

- Computer Science · Frontiers in Physiology
- 2021

A Convolutional Neural Network is used to approximate the stationary solution to the diffusion equation in the case of two equal-diameter, circular, constant-value sources located at random positions in a two-dimensional square domain with absorbing boundary conditions.
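As a rough picture of the target problem, the stationary solution that such a surrogate learns can be computed classically by Jacobi relaxation; the grid size, source radius, and source positions below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def stationary_diffusion(n=64, radius=3, iters=4000):
    """Jacobi relaxation for the stationary diffusion equation on an n x n
    grid with two equal-diameter, constant-value circular sources and
    absorbing (zero-value) boundary conditions."""
    u = np.zeros((n, n))
    yy, xx = np.mgrid[0:n, 0:n]
    # two circular sources at hypothetical fixed positions
    src = ((xx - n // 4) ** 2 + (yy - n // 4) ** 2 < radius ** 2) | \
          ((xx - 3 * n // 4) ** 2 + (yy - 3 * n // 4) ** 2 < radius ** 2)
    for _ in range(iters):
        u[src] = 1.0                       # clamp sources to a constant value
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
        u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0   # absorbing boundaries
    u[src] = 1.0
    return u

u = stationary_diffusion()
```

A trained surrogate replaces the relaxation loop with a single forward pass; this direct solver is what generates its training targets.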

## References

Showing 1-10 of 86 references

How to Grow a Mind: Statistics, Structure, and Abstraction

- Computer Science, Psychology · Science
- 2011

This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems.

Building machines that learn and think like people

- Computer Science · Behavioral and Brain Sciences
- 2016

It is argued that truly human-like learning and thinking machines should build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems, and harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations.

Understanding deep learning requires rethinking generalization

- Computer Science · ICLR
- 2017

These experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data, and confirm that simple depth-two neural networks already have perfect finite-sample expressivity.
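The memorization phenomenon is easy to reproduce in miniature: a depth-two network with a frozen random first layer and more hidden units than training points can fit arbitrary labels exactly, since fitting the output layer is just linear least squares in the random features. The sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_inputs, width = 30, 10, 200   # width >> n_samples
X = rng.standard_normal((n_samples, n_inputs))
y = rng.integers(0, 2, size=n_samples).astype(float)   # random labels

# Frozen random first layer; the ReLU features almost surely have full
# row rank, so the output layer can interpolate any labeling.
W = rng.standard_normal((n_inputs, width))
features = np.maximum(X @ W, 0.0)
a, *_ = np.linalg.lstsq(features, y, rcond=None)

train_error = np.max(np.abs(features @ a - y))   # essentially zero
```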

Complexity, action, and black holes

- Physics
- 2016

In an earlier paper "Complexity Equals Action" we conjectured that the quantum computational complexity of a holographic state is given by the classical action of a region in the bulk (the…

A Note on Lazy Training in Supervised Differentiable Programming

- Computer Science · ArXiv
- 2018

In a simplified setting, it is proved that "lazy training" essentially solves a kernel regression, and it is shown that this behavior is due not so much to over-parameterization as to a choice of scaling, often implicit, that allows the model to be linearized around its initialization.
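The linearization at the heart of lazy training can be checked numerically: for a small parameter step, a network agrees with its first-order Taylor expansion around initialization up to an error that shrinks quadratically in the step size. The tiny tanh network below is an illustrative stand-in, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def net(w, x):
    """Tiny two-layer tanh network; w packs a 10x2 hidden layer and a
    10-dim output layer (illustrative architecture)."""
    W1, w2 = w[:20].reshape(10, 2), w[20:]
    return w2 @ np.tanh(W1 @ x)

def grad(w, x, eps=1e-6):
    """Central-difference gradient of net(., x) at w."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (net(w + e, x) - net(w - e, x)) / (2 * eps)
    return g

w0 = rng.standard_normal(30)
x = rng.standard_normal(2)
g = grad(w0, x)
d = rng.standard_normal(30)
d /= np.linalg.norm(d)

# Linearization error |f(w0 + t d) - f(w0) - t g.d| scales like t^2, so
# small parameter movements see an effectively linear (kernel) model.
errs = [abs(net(w0 + t * d, x) - (net(w0, x) + t * (g @ d)))
        for t in (1e-1, 1e-2)]
```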

Holographic Complexity Equals Bulk Action?

- Physics · Physical Review Letters
- 2016

The hypothesis that black holes are the fastest computers in nature is discussed and the conjecture that the quantum complexity of a holographic state is dual to the action of a certain spacetime region that is called a Wheeler-DeWitt patch is illustrated.

Deep Information Propagation

- Computer Science · ICLR
- 2017

The presence of dropout destroys the order-to-chaos critical point and therefore strongly limits the maximum trainable depth for random networks, and a mean field theory for backpropagation is developed that shows that the ordered and chaotic phases correspond to regions of vanishing and exploding gradient respectively.
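The order-to-chaos picture can be simulated directly: propagate two random inputs through a deep random tanh network and track their cosine similarity layer by layer. In the ordered phase (small weight variance), the two signals collapse onto each other with depth. The width, depth, and variances below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 500, 30
sigma_w, sigma_b = 0.8, 0.3    # small weight variance: ordered phase

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

h1 = rng.standard_normal(width)
h2 = rng.standard_normal(width)
sims = [cosine(h1, h2)]
for _ in range(depth):
    # fresh random layer, shared by both inputs
    W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
    b = rng.standard_normal(width) * sigma_b
    h1, h2 = np.tanh(W @ h1 + b), np.tanh(W @ h2 + b)
    sims.append(cosine(h1, h2))
# In the ordered phase the correlation is driven toward its fixed point
# c* = 1, so distinct inputs become indistinguishable at large depth.
```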

On the Expressive Power of Deep Neural Networks

- Computer Science · ICML
- 2017

We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute.…
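One concrete expressivity measure from this line of work, activation-pattern transitions along a one-dimensional input trajectory, can be estimated by simulation: deeper random ReLU networks produce many more transitions. The widths, depths, and initialization below are illustrative assumptions.

```python
import numpy as np

def transitions(depth, width=20, n_points=4000, seed=0):
    """Count points along a line in input space where the sign pattern of
    the final layer's preactivations changes, for a random He-initialized
    ReLU network."""
    rng = np.random.default_rng(seed)
    t = np.linspace(-3.0, 3.0, n_points)
    H = np.stack([t, np.ones_like(t)], axis=1)   # a line in 2D input space
    for _ in range(depth):
        W = rng.standard_normal((H.shape[1], width)) * np.sqrt(2.0 / H.shape[1])
        b = rng.standard_normal(width) * 0.1
        pre = H @ W + b
        H = np.maximum(pre, 0.0)
    pattern = pre > 0
    return int(np.any(pattern[1:] != pattern[:-1], axis=1).sum())

shallow = np.mean([transitions(1, seed=s) for s in range(5)])
deep = np.mean([transitions(6, seed=s) for s in range(5)])
# Transition counts grow with depth, one proxy for the expressivity gap
# between shallow and deep networks of comparable size.
```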

On the Einstein Podolsky Rosen Paradox

- Physics
- 2017

The paradox of Einstein, Podolsky and Rosen [1] was advanced as an argument that quantum mechanics could not be a complete theory but should be supplemented by additional variables. These additional…
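The quantitative content of Bell's analysis is easy to check numerically: the singlet-state correlation E(a, b) = -cos(a - b) violates the CHSH bound |S| <= 2 obeyed by any local hidden-variable theory, reaching 2*sqrt(2) at the measurement angles below.

```python
import numpy as np

def E(a, b):
    """Quantum correlation of spin measurements along angles a and b
    for the singlet state."""
    return -np.cos(a - b)

# CHSH combination: local hidden-variable models satisfy |S| <= 2.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
# |S| = 2*sqrt(2), exceeding the classical bound of 2.
```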

Towards Explaining the Regularization Effect of Initial Large Learning Rate in Training Neural Networks

- Computer Science · NeurIPS
- 2019

A setting is devised in which a two-layer network trained with a large initial learning rate and annealing provably generalizes better than the same network trained with a small learning rate from the start.