Corpus ID: 244729473

Neural Symplectic Integrator with Hamiltonian Inductive Bias for the Gravitational $N$-body Problem

@article{Cai2021NeuralSI,
  title={Neural Symplectic Integrator with Hamiltonian Inductive Bias for the Gravitational \$N\$-body Problem},
  author={Maxwell Xu Cai and Simon Portegies Zwart and Damian Podareanu},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.15631}
}
The gravitational N-body problem, which is fundamentally important in astrophysics to predict the motion of N celestial bodies under the mutual gravity of each other, is usually solved numerically because there is no known general analytical solution for N > 2. Can an N-body problem be solved accurately by a neural network (NN)? Can an NN observe long-term conservation of energy and orbital angular momentum? Inspired by Wisdom & Holman's symplectic map, we present a neural N-body integrator… 
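
The abstract does not spell out implementation details, so the following is only a minimal sketch of the idea it describes: a Wisdom-Holman-style kick-drift-kick step in which the interaction accelerations are supplied by a small neural network while the dominant Keplerian motion is handled separately. The names InteractionNet, kepler_drift, and wh_step are illustrative, not the authors' API.

```python
import torch
import torch.nn as nn

class InteractionNet(nn.Module):
    """Maps the flattened body positions to per-body interaction accelerations."""
    def __init__(self, n_bodies, dim=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bodies * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_bodies * dim),
        )

    def forward(self, q):                      # q: (n_bodies, dim)
        return self.net(q.reshape(-1)).reshape(q.shape)

def kepler_drift(q, p, m, dt):
    """Placeholder for the analytic Keplerian propagation under the dominant
    central mass; a real Wisdom-Holman code solves Kepler's equation here."""
    return q + dt * p / m[:, None], p

def wh_step(q, p, m, dt, net):
    """One kick-drift-kick step in which the interaction kick is learned."""
    p = p + 0.5 * dt * m[:, None] * net(q)     # half kick from learned interactions
    q, p = kepler_drift(q, p, m, dt)           # full drift under the central body
    p = p + 0.5 * dt * m[:, None] * net(q)     # second half kick
    return q, p

# Toy usage: four bodies in 3D with unit masses.
n = 4
net = InteractionNet(n)
q, p, m = torch.randn(n, 3), torch.randn(n, 3), torch.ones(n)
q, p = wh_step(q, p, m, dt=0.01, net=net)
```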

Learning Neural Hamiltonian Dynamics: A Methodological Overview

This paper systematically surveys recently proposed Hamiltonian neural network models, with a special emphasis on methodologies, and discusses the major contributions of these models in four overlapping directions.

References

Showing 1-10 of 23 references

Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints

This paper introduces a series of challenging chaotic and extended-body systems, including systems with N-pendulums, spring coupling, magnetic fields, rigid rotors, and gyroscopes, and shows that embedding the system into Cartesian coordinates and enforcing the constraints explicitly with Lagrange multipliers dramatically simplifies the learning problem.
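
As a concrete illustration of this idea (a hand-written sketch under simplifying assumptions, not the paper's code), consider a planar pendulum written in Cartesian coordinates, with the constraint |x|^2 = L^2 enforced explicitly through a Lagrange multiplier instead of switching to an angle coordinate:

```python
import numpy as np

def pendulum_accel(x, v, m=1.0, g=9.81):
    """Acceleration of a pendulum bob at Cartesian position x with velocity v.
    Newton's law with the constraint force: m*a = F + 2*lambda*x, where lambda
    is chosen so that the constraint x.x - L^2 stays exactly satisfied
    (its second time derivative vanishes, i.e. x.a = -|v|^2)."""
    F = np.array([0.0, -m * g])                     # gravity on the bob
    lam = -(m * (v @ v) + x @ F) / (2.0 * (x @ x))  # Lagrange multiplier
    return (F + 2.0 * lam * x) / m

# One explicit Euler step, purely to show the call; in practice a symplectic or
# constraint-projecting integrator would be used.
x, v, dt = np.array([1.0, 0.0]), np.array([0.0, 0.0]), 1e-3
x, v = x + dt * v, v + dt * pendulum_accel(x, v)
```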

Symplectic maps for the N-body problem.

The present study generalizes the mapping method of Wisdom (1982) to encompass all gravitational n-body problems with a dominant central mass. The rationale for the generalized mapping method is…
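
The core of the method is a splitting of the N-body Hamiltonian into a dominant Keplerian part and a much smaller interaction part that are evolved alternately; schematically (notation ours, not quoted from the paper),

$$H = H_{\mathrm{Kepler}} + H_{\mathrm{interaction}}, \qquad |H_{\mathrm{interaction}}| \ll |H_{\mathrm{Kepler}}|,$$

so the map alternates exact Keplerian evolution of each body about the central mass with periodic kicks generated by the interaction term.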

Symplectic ODE-Net: Learning Hamiltonian Dynamics with Control

This paper introduces Symplectic ODE-Net, a deep learning framework that infers the dynamics of a physical system, given by an ordinary differential equation (ODE), from observed state trajectories, and proposes a parametrization that enforces the Hamiltonian formalism even when the generalized coordinate data is embedded in a high-dimensional space or only velocity data is available instead of generalized momenta.
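
A minimal sketch of that structural prior, assuming a separable Hamiltonian H(q, p) = 1/2 p^T M^{-1} p + V(q) with a control-affine input term; the class and parametrization below are illustrative and not the Symplectic ODE-Net code:

```python
import torch
import torch.nn as nn

class ControlledHamiltonian(nn.Module):
    """Learns V(q), an input map g(q) and a diagonal mass matrix, and reads off
    q_dot = dH/dp and p_dot = -dH/dq + g(q) u from the learned Hamiltonian."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.V = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.g = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.log_minv = nn.Parameter(torch.zeros(dim))   # diagonal M^{-1}, kept positive

    def forward(self, q, p, u):
        q = q.requires_grad_(True)                       # q assumed to be a leaf data tensor
        minv = self.log_minv.exp()
        H = 0.5 * (p * minv * p).sum() + self.V(q).sum()
        dHdq, = torch.autograd.grad(H, q, create_graph=True)
        q_dot = minv * p                                 # dH/dp for this separable H
        p_dot = -dHdq + self.g(q) * u                    # control enters through g(q)
        return q_dot, p_dot

# Toy usage for a 2-dimensional configuration space.
model = ControlledHamiltonian(dim=2)
q_dot, p_dot = model(torch.zeros(2), torch.ones(2), torch.zeros(2))
```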

Hamiltonian Neural Networks

Drawing inspiration from Hamiltonian mechanics, the authors train models that learn and respect exact conservation laws in an unsupervised manner; the resulting model trains faster and generalizes better than a regular neural network.
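
The training signal can be summarized in a few lines. This is a hedged sketch of the idea for a single degree of freedom, not the authors' released implementation: a scalar network H(q, p) is fitted so that its symplectic gradient matches the observed time derivatives.

```python
import torch
import torch.nn as nn

hnn = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))   # scalar H(q, p)

def hnn_loss(q, p, dq_dt, dp_dt):
    """Mean-squared error between observed derivatives and (dH/dp, -dH/dq)."""
    q, p = q.requires_grad_(True), p.requires_grad_(True)
    H = hnn(torch.cat([q, p], dim=-1)).sum()
    dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
    return ((dq_dt - dHdp) ** 2 + (dp_dt + dHdq) ** 2).mean()

# Usage on a batch of phase-space points; harmonic-oscillator targets as a stand-in.
q, p = torch.randn(128, 1), torch.randn(128, 1)
loss = hnn_loss(q, p, dq_dt=p.clone(), dp_dt=-q)
```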

The Eccentric Kozai-Lidov Effect and Its Applications

The hierarchical triple-body approximation has useful applications to a variety of systems from planetary and stellar scales to supermassive black holes. In this approximation, the energy of each…

Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations

This two-part treatise introduces physics-informed neural networks: neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. It demonstrates how these networks can be used to infer solutions to partial differential equations and to obtain physics-informed surrogate models that are fully differentiable with respect to all input coordinates and free parameters.
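
As an illustration of the loss construction (assuming Burgers' equation as the example PDE, a standard test case in this literature; the sampling and network size are placeholders), the physics-informed term penalizes the PDE residual evaluated with automatic differentiation:

```python
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def pde_residual(t, x, nu=0.01):
    """Residual of Burgers' equation u_t + u*u_x - nu*u_xx at points (t, x)."""
    t, x = t.requires_grad_(True), x.requires_grad_(True)
    u = u_net(torch.cat([t, x], dim=-1))
    g = lambda out, inp: torch.autograd.grad(out, inp, torch.ones_like(out), create_graph=True)[0]
    u_t, u_x = g(u, t), g(u, x)
    u_xx = g(u_x, x)
    return u_t + u * u_x - nu * u_xx      # ~0 wherever the network satisfies the PDE

# Physics loss at random interior collocation points; the supervised data loss on
# initial/boundary measurements (not shown) is simply added on top.
t, x = torch.rand(256, 1), torch.rand(256, 1) * 2 - 1
physics_loss = pde_residual(t, x).pow(2).mean()
```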

Symplectic Recurrent Neural Networks

It is shown that symplectic recurrent neural networks (SRNNs) succeed reliably on complex and noisy Hamiltonian systems, and it is demonstrated how the SRNN integration scheme can be augmented to handle stiff dynamical systems such as bouncing billiards.
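
A minimal sketch of the recurrent-symplectic idea, with illustrative names rather than the paper's code: a leapfrog step built from learned kinetic and potential networks is unrolled over many steps like an RNN and trained against observed trajectories.

```python
import torch
import torch.nn as nn

T_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))   # kinetic  T(p)
V_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))   # potential V(q)

def d(f, x):
    """Gradient of the scalar sum of f(x) with respect to x (x must require grad)."""
    return torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]

def leapfrog_rollout(q, p, dt, n_steps):
    """Unroll leapfrog steps under the learned Hamiltonian, keeping the graph so a
    trajectory-level loss can backpropagate through every step."""
    states = []
    for _ in range(n_steps):
        p = p - 0.5 * dt * d(V_net, q)   # half kick
        q = q + dt * d(T_net, p)         # drift
        p = p - 0.5 * dt * d(V_net, q)   # half kick
        states.append(torch.cat([q, p], dim=-1))
    return torch.stack(states)

# Roll out from observed initial conditions; fit the result to the observed trajectory.
q0 = torch.zeros(1, 1, requires_grad=True)
p0 = torch.ones(1, 1, requires_grad=True)
pred = leapfrog_rollout(q0, p0, dt=0.1, n_steps=20)   # shape (20, 1, 2)
```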

Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations

We introduce physics-informed neural networks: neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial…
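
In the discovery setting, the unknown PDE coefficients become trainable parameters fitted jointly with the solution network from scattered measurements; the equation and coefficient names below are illustrative placeholders, not the paper's notation.

```python
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
lam1 = nn.Parameter(torch.tensor(0.0))   # unknown convection coefficient
lam2 = nn.Parameter(torch.tensor(0.0))   # unknown diffusion coefficient

def discovery_residual(t, x):
    """Residual of u_t + lam1*u*u_x - lam2*u_xx with learnable lam1, lam2."""
    t, x = t.requires_grad_(True), x.requires_grad_(True)
    u = u_net(torch.cat([t, x], dim=-1))
    g = lambda out, inp: torch.autograd.grad(out, inp, torch.ones_like(out), create_graph=True)[0]
    u_t, u_x = g(u, t), g(u, x)
    u_xx = g(u_x, x)
    return u_t + lam1 * u * u_x - lam2 * u_xx

# Minimizing |residual|^2 together with the misfit to measured u values recovers
# both the solution network and the coefficients lam1 and lam2.
```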

HIGAN: Cosmic Neutral Hydrogen with Generative Adversarial Networks

This work uses Wasserstein Generative Adversarial Networks (WGANs) to generate new high-resolution 3D realizations of cosmic HI at $z=5$, and the samples reproduce the abundance of HI across 9 orders of magnitude, from the Ly$\alpha$ forest to Damped Lyman Absorbers.
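
For context, the Wasserstein-GAN objective underlying such a generator can be sketched as follows; the tiny fully connected networks are placeholders rather than the HIGAN architecture, and the Lipschitz constraint on the critic (weight clipping or a gradient penalty) is omitted for brevity.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 32 * 32))  # latent -> flattened map
D = nn.Sequential(nn.Linear(32 * 32, 256), nn.ReLU(), nn.Linear(256, 1))   # critic score

def critic_loss(real, z):
    fake = G(z).detach()                        # do not update G on the critic step
    return -(D(real).mean() - D(fake).mean())   # minimize the negative Wasserstein estimate

def generator_loss(z):
    return -D(G(z)).mean()                      # push generated samples toward higher critic scores

# Usage with a batch of 8 real (flattened) maps and latent vectors.
real, z = torch.rand(8, 32 * 32), torch.randn(8, 64)
d_loss, g_loss = critic_loss(real, z), generator_loss(z)
```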