Lagrangian Neural Networks
@article{Cranmer2020LagrangianNN,
  title   = {Lagrangian Neural Networks},
  author  = {M. Cranmer and Sam Greydanus and Stephan Hoyer and Peter W. Battaglia and David N. Spergel and Shirley Ho},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2003.04630}
}
Accurate models of the world are built upon notions of its underlying symmetries. In physics, these symmetries correspond to conservation laws, such as for energy and momentum. Yet even though neural network models see increasing use in the physical sciences, they struggle to learn these symmetries. In this paper, we propose Lagrangian Neural Networks (LNNs), which can parameterize arbitrary Lagrangians using neural networks. In contrast to models that learn Hamiltonians, LNNs do not require…
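The mechanics behind this are compact: a learned scalar L(q, q̇) determines accelerations through the Euler-Lagrange equations, rearranged as q̈ = (∇_q̇∇_q̇ᵀ L)⁻¹ [∇_q L − (∇_q∇_q̇ᵀ L) q̇]. A minimal JAX sketch of that solve (the function name and test Lagrangian are illustrative, not the paper's released code):

```python
import jax
import jax.numpy as jnp

def lnn_acceleration(lagrangian, q, q_dot):
    """Accelerations implied by a scalar Lagrangian L(q, q_dot), obtained by
    solving the Euler-Lagrange equations for q_ddot."""
    grad_q = jax.grad(lagrangian, argnums=0)(q, q_dot)
    hess_qdot = jax.hessian(lagrangian, argnums=1)(q, q_dot)  # (d, d)
    # Mixed second derivative: d/dq of grad_{q_dot} L, shape (d, d).
    mixed = jax.jacfwd(jax.grad(lagrangian, argnums=1), argnums=0)(q, q_dot)
    return jnp.linalg.solve(hess_qdot, grad_q - mixed @ q_dot)

# Sanity check on a known Lagrangian (harmonic oscillator): expect q_ddot = -q.
L_ho = lambda q, qd: 0.5 * jnp.dot(qd, qd) - 0.5 * jnp.dot(q, q)
print(lnn_acceleration(L_ho, jnp.array([1.0]), jnp.array([0.0])))  # ~[-1.]
```

In training, `lagrangian` would be a neural network, and the loss would compare these predicted accelerations against observed ones.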
181 Citations
Symmetry Control Neural Networks
- Physics
- 2020
This paper continues the quest for the optimal physics bias for neural networks that predict the dynamics of systems whose underlying dynamics must be inferred directly from data. The…
Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints
- Computer Science, NeurIPS
- 2020
This paper introduces a series of challenging chaotic and extended-body systems, including systems with N-pendulums, spring coupling, magnetic fields, rigid rotors, and gyroscopes, and shows that embedding the system into Cartesian coordinates and enforcing the constraints explicitly with Lagrange multipliers dramatically simplifies the learning problem.
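To make "enforcing the constraints explicitly with Lagrange multipliers" concrete, here is a hedged sketch of the idea (the names and the pendulum example are ours, not the paper's code): for a holonomic constraint Φ(x) = 0 in Cartesian coordinates, differentiating twice in time gives J ẍ = −J̇ ẋ with J = ∂Φ/∂x, so the accelerations follow from a linear KKT solve rather than from generalized coordinates.

```python
import jax
import jax.numpy as jnp

def constrained_acceleration(m, force, phi, x, x_dot):
    """Solve [[M, J^T], [J, 0]] [x_ddot; mu] = [f; -J_dot x_dot] for the
    accelerations of a point mass subject to holonomic constraints phi(x) = 0."""
    J = jax.jacfwd(phi)(x)                                    # constraint Jacobian, (c, d)
    # J_dot @ x_dot, computed as the directional derivative of J(x) @ x_dot along x_dot.
    Jdot_xdot = jax.jvp(lambda y: jax.jacfwd(phi)(y) @ x_dot, (x,), (x_dot,))[1]
    d, c = x.size, J.shape[0]
    kkt = jnp.block([[m * jnp.eye(d), J.T],
                     [J, jnp.zeros((c, c))]])
    rhs = jnp.concatenate([force(x), -Jdot_xdot])
    return jnp.linalg.solve(kkt, rhs)[:d]                     # x_ddot

# Unit-mass pendulum of length 1 in Cartesian coordinates: |x|^2 - 1 = 0.
phi = lambda x: jnp.array([x @ x - 1.0])
force = lambda x: jnp.array([0.0, -9.8])                      # gravity
print(constrained_acceleration(1.0, force, phi, jnp.array([1.0, 0.0]), jnp.zeros(2)))
```

The learned quantities then live in the simple Cartesian parameterization, while the constraints carry the geometry of the system.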
Nonseparable Symplectic Neural Networks
- Physics, ICLR
- 2021
Nonseparable Symplectic Neural Networks (NSSNNs), a novel neural network architecture, are proposed to uncover and embed the symplectic structure of a nonseparable Hamiltonian system from limited observation data; by rigorously enforcing symplectomorphism, the approach yields long-term, accurate, and robust predictions for large-scale Hamiltonian systems.
Symplectic Neural Networks in Taylor Series Form for Hamiltonian Systems
- Computer Science, J. Comput. Phys.
- 2021
A Differentiable Contact Model to Extend Lagrangian and Hamiltonian Neural Networks for Modeling Hybrid Dynamics
- Computer Science, ArXiv
- 2021
The proposed contact model extends the scope of Lagrangian and Hamiltonian neural networks by allowing simultaneous learning of contact properties and system properties, and can also accommodate inequality constraints, such as limits on the joint angles.
Learning Physical Constraints with Neural Projections
- Computer Science, NeurIPS
- 2020
This work proposes a new family of neural networks to predict the behaviors of physical systems by learning their underpinning constraints, and provides a multi-group point representation in conjunction with a configurable network connection mechanism to incorporate prior inputs for processing complex physical systems.
Deep Energy-based Modeling of Discrete-Time Physics
- Computer Science, NeurIPS
- 2020
This study proposes a deep energy-based physical model with a differential geometric structure that respects the conservation or dissipation law of energy and the mass conservation law, together with an automatic discrete differential algorithm that enables neural networks to employ the discrete gradient method.
Benchmarking Energy-Conserving Neural Networks for Learning Dynamics from Data
- Computer Science, Physics, L4DC
- 2021
This work presents a comparative analysis of energy-conserving neural networks (for example, deep Lagrangian networks and Hamiltonian neural networks) wherein the underlying physics is encoded in the computation graph, and highlights that using a high-dimensional coordinate system and then imposing restrictions via explicit constraints can lead to higher accuracy in the learned dynamics.
Learning Potentials of Quantum Systems using Deep Neural Networks
- Physics, Computer Science, AAAI Spring Symposium: MLPS
- 2021
The method, termed Quantum Potential Neural Networks (QPNN), can learn potentials in an unsupervised manner with remarkable accuracy for a wide range of quantum systems, such as the quantum harmonic oscillator, particle in a box perturbed by an external potential, hydrogen atom, Pöschl-Teller potential, and a solitary wave system.
Adaptable Hamiltonian neural networks
- Computer Science, Physical Review Research
- 2021
This work introduces a class of HNNs capable of adaptable prediction of nonlinear physical systems and demonstrates, using paradigmatic Hamiltonian systems, that training the HNN on time series from as few as four parameter values equips the network to predict the state of the target system across an entire parameter interval.
References
SHOWING 1-10 OF 24 REFERENCES
Hamiltonian Neural Networks
- Physics, Computer Science, NeurIPS
- 2019
This work draws inspiration from Hamiltonian mechanics to train models that learn and respect exact conservation laws in an unsupervised manner; the resulting model trains faster and generalizes better than a regular neural network.
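The trick is easy to state: a network outputs a scalar H(q, p), and the model's vector field is Hamilton's equations, so the learned energy is conserved along the flow by construction. A minimal JAX illustration (names are ours, not the paper's):

```python
import jax

def hnn_vector_field(hamiltonian, q, p):
    """Hamilton's equations for a learned scalar H(q, p):
    q_dot = dH/dp, p_dot = -dH/dq."""
    dH_dq = jax.grad(hamiltonian, argnums=0)(q, p)
    dH_dp = jax.grad(hamiltonian, argnums=1)(q, p)
    return dH_dp, -dH_dq  # (q_dot, p_dot)
```

Training fits this vector field to observed (q̇, ṗ); the LNN paper's point of contrast is that this formulation requires canonical coordinates, which a Lagrangian formulation avoids.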
Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning
- Computer Science, ICLR
- 2019
The proposed DeLaN network learns the equations of motion of a mechanical system efficiently with a deep network while ensuring physical plausibility; it exhibits substantially improved and more robust extrapolation to novel trajectories and learns online in real time.
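Where the LNN sketch above treats the Lagrangian as a black box, DeLaN restricts it to the rigid-body form T − V and derives its physical-plausibility guarantee from a mass matrix that is positive definite by construction. A hedged sketch (the `chol_net`/`potential_net` stand-ins are hypothetical; DeLaN uses neural networks for both):

```python
import jax.numpy as jnp

def delan_lagrangian(q, q_dot, chol_net, potential_net):
    """Structured Lagrangian L = 0.5 * q_dot^T H(q) q_dot - V(q), with
    H = L_c L_c^T + eps*I built from a predicted lower-triangular factor,
    so H is positive definite for every configuration q."""
    L_c = jnp.tril(chol_net(q))                # lower-triangular factor
    H = L_c @ L_c.T + 1e-6 * jnp.eye(q.size)   # positive-definite mass matrix
    return 0.5 * q_dot @ H @ q_dot - potential_net(q)

# Toy stand-ins for the two networks.
chol_net = lambda q: jnp.outer(q, q) + jnp.eye(q.size)
potential_net = lambda q: jnp.sum(q ** 2)
```

The resulting Lagrangian can be fed to the same Euler-Lagrange solve sketched earlier.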
Hamiltonian Graph Networks with ODE Integrators
- Computer Science, ArXiv
- 2019
An approach for imposing physically informed inductive biases in learned simulation models is introduced and it is found that this approach outperforms baselines without these biases in terms of predictive accuracy, energy accuracy, and zero-shot generalization to time-step sizes and integrator orders not experienced during training.
Symplectic Recurrent Neural Networks
- Computer Science, ICLR
- 2020
SRNNs are shown to succeed reliably on complex and noisy Hamiltonian systems, and the SRNN integration scheme is augmented to handle stiff dynamical systems such as bouncing billiards.
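The "symplectic" ingredient is the integrator: for a separable learned Hamiltonian H = T(p) + V(q), SRNNs roll out leapfrog steps and backpropagate through the whole trajectory. One leapfrog step, sketched (T and V stand in for the learned networks):

```python
import jax

def leapfrog_step(T, V, q, p, dt):
    """Kick-drift-kick leapfrog update for a separable Hamiltonian
    H = T(p) + V(q); the update is symplectic for any callables T, V."""
    p_half = p - 0.5 * dt * jax.grad(V)(q)
    q_next = q + dt * jax.grad(T)(p_half)
    p_next = p_half - 0.5 * dt * jax.grad(V)(q_next)
    return q_next, p_next
```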
Distilling Free-Form Natural Laws from Experimental Data
- Physics, Science
- 2009
This work proposes a principle for identifying nontriviality and demonstrates the approach by automatically searching motion-tracking data captured from physical systems ranging from simple harmonic oscillators to chaotic double pendula, discovering Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation.
A General Framework for Structured Learning of Mechanical Systems
- Computer Science, ArXiv
- 2019
This work proposes to parameterize a mechanical system using neural networks to model its Lagrangian and the generalized forces that act on it, and shows that its method outperforms a naive, black-box model in terms of data-efficiency, as well as performance in model-based reinforcement learning.
Interaction Networks for Learning about Objects, Relations and Physics
- Computer Science, Physics, NIPS
- 2016
The interaction network is introduced, a model which can reason about how objects in complex systems interact, supporting dynamical predictions, as well as inferences about the abstract properties of the system, and is implemented using deep neural networks.
Machine learning and serving of discrete field theories
- Computer Science, Physics, Scientific Reports
- 2020
The learning algorithm learns a discrete field theory from a set of data of planetary orbits similar to what Kepler inherited from Tycho Brahe in 1601, and the serving algorithm correctly predicts other planetary orbits without learning or knowing Newton’s laws of motion and universal gravitation.
Discovering Symbolic Models from Deep Learning with Inductive Biases
- Computer Science, NeurIPS
- 2020
The correct known equations, including force laws and Hamiltonians, can be extracted from the neural network and a new analytic formula is discovered which can predict the concentration of dark matter from the mass distribution of nearby cosmic structures.
Neural Ordinary Differential Equations
- Computer Science, NeurIPS
- 2018
This work shows how to scalably backpropagate through any ODE solver, without access to its internal operations, which allows end-to-end training of ODEs within larger models.
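In JAX this pattern is available out of the box: jax.experimental.ode.odeint is differentiable end to end via an adjoint-style rule, so a loss on the solved trajectory yields gradients for the vector-field parameters. A toy sketch (the tanh dynamics is a stand-in for a learned network):

```python
import jax
import jax.numpy as jnp
from jax.experimental.ode import odeint

def f(y, t, params):
    # Toy stand-in for a learned vector field dy/dt = f(y, t; params).
    return jnp.tanh(params @ y)

def loss(params, y0, ts, targets):
    traj = odeint(f, y0, ts, params)          # shape (len(ts), dim)
    return jnp.mean((traj - targets) ** 2)

params = jnp.eye(2)
grads = jax.grad(loss)(params, jnp.ones(2), jnp.linspace(0.0, 1.0, 5),
                       jnp.zeros((5, 2)))     # gradients flow through the solver
```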