Branes with brains: exploring string vacua with deep reinforcement learning

James Halverson, Brent D. Nelson, Fabian Ruehle · Journal of High Energy Physics
We propose deep reinforcement learning as a model-free method for exploring the landscape of string vacua. As a concrete application, we utilize an artificial intelligence agent known as an asynchronous advantage actor-critic to explore type IIA compactifications with intersecting D6-branes. As different string background configurations are explored by changing D6-brane configurations, the agent receives rewards and punishments related to string consistency conditions and proximity to…

Heterotic String Model Building with Monad Bundles and Reinforcement Learning

By focusing on two specific manifolds with Picard numbers two and three, it is shown that reinforcement learning can be used successfully to explore monad bundles and hundreds of new candidate standard models are found.

Revealing systematics in phenomenologically viable flux vacua with reinforcement learning

It is demonstrated in the case of the type IIB flux landscape that vacua with requirements on the expectation value of the superpotential and the string coupling can be sampled significantly faster using reinforcement learning than using Metropolis or random sampling.

Evolving Heterotic Gauge Backgrounds: Genetic Algorithms versus Reinforcement Learning

The immensity of the string landscape and the difficulty of identifying solutions that match the observed features of particle physics have raised serious questions about the predictive power of string theory.

Learning the Principle of Least Action with Reinforcement Learning

This work verifies the idea using a Q-learning-based algorithm that learns how light propagates through materials with different refractive indices, showing that the agent recovers the minimal-time path equivalent to the solution obtained from Snell's law or Fermat's principle.
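As a rough illustration of the tabular Q-learning technique this entry refers to (not the paper's actual code), the sketch below trains an agent to find the minimal-step path across a one-dimensional strip of cells, loosely analogous to recovering a minimal-time light path. The grid size, rewards, and hyperparameters are arbitrary choices for the demo.

```python
import random

N = 6                      # cells 0..5; start at 0, goal at 5
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}   # tabular Q-values
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                # training episodes
    s = 0
    while s != N - 1:
        # epsilon-greedy action selection: step left (-1) or right (+1)
        if random.random() < eps:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)      # take the step, clipped to the strip
        r = 10.0 if s2 == N - 1 else -1.0   # goal reward vs. per-step time penalty
        # standard Q-learning update toward the bootstrapped target
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2

# The greedy policy now moves right everywhere, i.e. the minimal-step path.
policy = [max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)
```

The per-step penalty plays the role of elapsed time, so maximizing return is equivalent to minimizing the number of steps to the goal, which is the discrete analogue of a least-time principle.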

Explore and Exploit with Heterotic Line Bundle Models

Deep reinforcement learning is used to explore a class of heterotic SU(5) GUT models constructed from line bundle sums over Complete Intersection Calabi-Yau (CICY) manifolds, and it is concluded that the agents detect hidden structures in the compactification data, which are partly of a general nature.

Probing the Structure of String Theory Vacua with Genetic Algorithms and Reinforcement Learning

This work reveals novel features in the string theory solutions required for properties such as the string coupling, and combines results from both search methods, arguing that this is imperative for reducing sampling bias.

When does reinforcement learning stand out in quantum control? A comparative study on state preparation

A comparative study is performed on the efficacy of three reinforcement learning algorithms (tabular Q-learning, deep Q-learning, and policy gradient) and two non-machine-learning methods (stochastic gradient descent and the Krotov algorithm) in the problem of preparing a desired quantum state.

Breeding Realistic D‐Brane Models

This work phrases the problem of finding consistent intersecting D-brane models in terms of genetic algorithms, which mimic natural selection to evolve a population collectively toward optimal solutions, and shows that O(30%) of the models found contain the desired Standard Model gauge group factor.

Noise-Robust End-to-End Quantum Control using Deep Autoregressive Policy Networks

This work presents a hybrid policy gradient algorithm capable of simultaneously optimizing continuous and discrete degrees of freedom in an uncertainty-resilient way, with the policy modeled by a deep autoregressive neural network to capture causality.

Quark Mass Models and Reinforcement Learning

It is shown that neural networks can be successfully trained to construct Froggatt-Nielsen models consistent with the observed quark masses and mixings, and are capable of finding models proposed in the literature when started at nearby configurations.

Human-level control through deep reinforcement learning

This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.

Asynchronous Methods for Deep Reinforcement Learning

A conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers and shows that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.

Mastering the game of Go with deep neural networks and tree search

Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.

Machine learning in the string landscape

Deep data dives and conjecture generation are proposed as useful frameworks for utilizing machine learning in the landscape, and examples of each are presented.

Reinforcement Learning in Different Phases of Quantum Control

This work implements cutting-edge reinforcement learning techniques and shows that their performance is comparable to optimal control methods in the task of finding short, high-fidelity driving protocols from an initial to a target state in non-integrable many-body quantum systems of interacting qubits.

Reinforcement Learning: An Introduction

This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.

Mastering the game of Go without human knowledge

An algorithm based solely on reinforcement learning is introduced, without human data, guidance or domain knowledge beyond game rules, that achieves superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

Learning to inflate. A gradient ascent approach to random inflation

Tom Rudelius · Journal of Cosmology and Astroparticle Physics · 2019
A novel method for randomly generating inflationary potentials is introduced, which treats the Taylor coefficients of the potential as weights in a single-layer neural network and uses gradient ascent to maximize the number of e-folds of inflation.

Evolving neural networks with genetic algorithms to study the string landscape

Three areas in which neural networks can be applied are studied: classifying models according to a fixed set of (physically) appealing features; finding a concrete realization of a computation whose precise algorithm is known in principle but very tedious to actually implement; and predicting or approximating the outcome of an involved mathematical computation that is too inefficient to apply directly.

Neural Combinatorial Optimization with Reinforcement Learning

A framework to tackle combinatorial optimization problems using neural networks and reinforcement learning is presented; Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes.