Extracting Interpretable Physical Parameters from Spatiotemporal Systems using Unsupervised Learning

@article{Lu2019ExtractingIP,
  title={Extracting Interpretable Physical Parameters from Spatiotemporal Systems using Unsupervised Learning},
  author={Peter Y. Lu and Samuel Kim and Marin Solja{\vc}i{\'c}},
  journal={ArXiv},
  year={2019},
  volume={abs/1907.06011}
}
Experimental data is often affected by uncontrolled variables that make analysis and interpretation difficult. For spatiotemporal systems, this problem is further exacerbated by their intricate dynamics. Modern machine learning methods are particularly well-suited for analyzing and modeling complex datasets, but to be effective in science, the result needs to be interpretable. We demonstrate an unsupervised learning technique for extracting interpretable physical parameters from noisy… 
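As a rough illustration of the kind of approach the abstract describes, the sketch below pairs a VAE-style encoder that compresses a spatiotemporal series into a few latent parameters with a decoder that propagates an initial condition forward conditioned on those parameters. All names (DynamicsEncoder, PropagatingDecoder), layer sizes, and shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' exact architecture): a VAE-style model that
# encodes a spatiotemporal series into a few latent "physical parameters" and
# decodes by propagating an initial condition forward conditioned on them.
import torch
import torch.nn as nn

class DynamicsEncoder(nn.Module):
    """Maps a (batch, time, space) series to a distribution over latent parameters."""
    def __init__(self, n_space, n_latent=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_space, 32, kernel_size=5, padding=2),  # convolve over time
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        self.mu = nn.Linear(32, n_latent)
        self.logvar = nn.Linear(32, n_latent)

    def forward(self, u):                     # u: (batch, time, space)
        h = self.net(u.transpose(1, 2))       # treat space as channels
        return self.mu(h), self.logvar(h)

class PropagatingDecoder(nn.Module):
    """Steps an initial field forward in time, conditioned on latent parameters."""
    def __init__(self, n_space, n_latent=2):
        super().__init__()
        self.step = nn.Sequential(
            nn.Linear(n_space + n_latent, 128), nn.ReLU(),
            nn.Linear(128, n_space),
        )

    def forward(self, u0, z, n_steps):        # u0: (batch, space), z: (batch, n_latent)
        u, outputs = u0, []
        for _ in range(n_steps):
            u = u + self.step(torch.cat([u, z], dim=-1))  # residual time step
            outputs.append(u)
        return torch.stack(outputs, dim=1)     # (batch, n_steps, space)

def vae_loss(u_true, u_pred, mu, logvar, beta=1.0):
    recon = ((u_true - u_pred) ** 2).mean()
    kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
    return recon + beta * kl

# Usage sketch: encode a series, sample parameters, predict the future from u0.
enc, dec = DynamicsEncoder(n_space=64), PropagatingDecoder(n_space=64)
u = torch.randn(8, 50, 64)                    # fake (batch, time, space) data
mu, logvar = enc(u)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
pred = dec(u[:, 0], z, n_steps=49)
loss = vae_loss(u[:, 1:], pred, mu, logvar)
```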

Citations

Discovering Sparse Interpretable Dynamics from Partial Observations
TLDR
An artificial intelligence framework that can learn the correct equations of motion for nonlinear systems from incomplete data is introduced, opening the door to applying interpretable machine learning techniques to a wide range of applications in nonlinear dynamics.
Discovering Dynamical Parameters by Interpreting Echo State Networks
TLDR
This work shows that the parameters governing the dynamics of a complex nonlinear system can be encoded in the learned readout layer of an ESN, and provides a computationally inexpensive, unsupervised data-driven approach for identifying uncontrolled variables affecting real-world data from nonlinear dynamical systems.
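As a loose illustration of how a trained ESN readout can carry information about the underlying dynamical parameters, here is a minimal echo state network in NumPy: the reservoir is fixed and random, only the linear readout is fit by ridge regression, and readouts trained on signals with different hidden parameters end up measurably different. Reservoir size, spectral radius, and the driving signals are arbitrary choices, not the paper's setup.

```python
# Minimal echo state network sketch (illustrative, not the cited paper's setup).
import numpy as np

rng = np.random.default_rng(0)

def esn_readout(signal, n_res=200, rho=0.9, ridge=1e-6):
    W_in = rng.uniform(-0.5, 0.5, size=n_res)
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
    x = np.zeros(n_res)
    states = []
    for u in signal[:-1]:
        x = np.tanh(W @ x + W_in * u)                  # reservoir update
        states.append(x.copy())
    X = np.array(states)                               # (T-1, n_res)
    y = signal[1:]                                     # one-step-ahead targets
    # Ridge-regression readout: w = (X^T X + ridge I)^-1 X^T y
    w = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    return w                                           # readout weights as features

# Readouts trained on signals generated with different parameters differ,
# so clustering or regressing on them can recover the hidden parameter.
t = np.linspace(0, 40, 2000)
w_a = esn_readout(np.sin(1.0 * t))
w_b = esn_readout(np.sin(1.3 * t))
print(np.linalg.norm(w_a - w_b))
```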
Unraveling hidden interactions in complex systems with deep learning
TLDR
This study proposes AgentNet, a model-free data-driven framework consisting of deep neural networks to reveal and analyze the hidden interactions in complex systems from observed data alone, and expects the framework to open a novel path to investigating complex systems and to provide insight into general process-driven modeling.
GD-VAEs: Geometric Dynamic Variational Autoencoders for Learning Nonlinear Dynamics and Dimension Reductions
TLDR
The performance of the methods, referred to as GD-VAEs, is investigated on tasks for learning low-dimensional representations of the nonlinear Burgers equations, constrained mechanical systems, and spatial fields of reaction-diffusion systems.
Learning Generalized Quasi-Geostrophic Models Using Deep Neural Numerical Models
TLDR
An advection-based, fully differentiable numerical scheme is developed in which parts of the computations can be replaced with learnable ConvNets, and connections are made with the single-layer Quasi-Geostrophic (QG) model, a baseline theory in physical oceanography developed decades ago.
Integration of Neural Network-Based Symbolic Regression in Deep Learning for Scientific Discovery
TLDR
This article uses a neural network-based architecture for symbolic regression called the equation learner (EQL) network and integrates it with other deep learning architectures such that the whole system can be trained end-to-end through backpropagation.
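A rough sketch of an EQL-style layer follows: each unit applies an interpretable primitive (identity, sin, cos, or a product) to learned linear combinations of its inputs, so the network remains differentiable end to end and an L1 penalty can prune it toward a readable expression. The primitive set, sizes, and the name EQLLayer are assumptions for illustration, not the article's implementation.

```python
# EQL-style layer sketch: interpretable primitives applied to learned linear
# combinations, trainable by backpropagation with an L1 sparsity penalty.
import torch
import torch.nn as nn

class EQLLayer(nn.Module):
    def __init__(self, n_in, n_units=4):
        super().__init__()
        # 3 unary primitives + 1 binary (product) => 5 linear pre-activations per unit
        self.lin = nn.Linear(n_in, 5 * n_units)

    def forward(self, x):
        z = self.lin(x).chunk(5, dim=-1)
        out = [z[0], torch.sin(z[1]), torch.cos(z[2]), z[3] * z[4]]
        return torch.cat(out, dim=-1)          # (batch, 4 * n_units)

model = nn.Sequential(EQLLayer(n_in=2), nn.Linear(16, 1))

# Fit y = x0 * x1 + sin(x0); the L1 term pushes unused connections toward zero
# so the surviving weights can be read off as a symbolic expression.
x = torch.rand(512, 2) * 4 - 2
y = (x[:, :1] * x[:, 1:]) + torch.sin(x[:, :1])
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    pred = model(x)
    l1 = sum(p.abs().sum() for p in model.parameters())
    loss = ((pred - y) ** 2).mean() + 1e-4 * l1
    loss.backward()
    opt.step()
```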
Symbolic Pregression: Discovering Physical Laws from Raw Distorted Video
TLDR
The pregression step is able to rediscover Cartesian coordinates of unlabeled moving objects even when the video is distorted by a generalized lens; it is facilitated by adding extra latent-space dimensions to avoid topological problems during training and then removing these extra dimensions via principal component analysis.
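The "extra latent dimensions, then PCA" trick mentioned above can be illustrated in a few lines: latent codes are learned with more dimensions than the true degrees of freedom, and afterwards they are projected onto their leading principal components. The sketch below fakes the latent codes directly; the function name pca_reduce and all sizes are illustrative.

```python
# Project over-complete latent codes onto their leading principal components,
# discarding the extra dimensions the optimizer no longer needs.
import numpy as np

def pca_reduce(latents, n_keep):
    """Project latent codes (n_samples, n_latent) onto their top n_keep PCs."""
    centered = latents - latents.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    print("explained variance ratios:", np.round(explained, 3))
    return centered @ vt[:n_keep].T             # (n_samples, n_keep)

# Fake example: codes that really live on 2 coordinates embedded in 6 dims.
rng = np.random.default_rng(1)
xy = rng.normal(size=(1000, 2))
embedding = rng.normal(size=(2, 6))
latents = xy @ embedding + 0.01 * rng.normal(size=(1000, 6))
coords = pca_reduce(latents, n_keep=2)           # recovered 2-D coordinates
```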
Learning Order Parameters from Videos of Dynamical Phases for Skyrmions with Neural Networks
TLDR
The main purposes of this paper are to use neural networks to classify the dynamical phases shown in videos, to demonstrate that neural networks can learn physical concepts from them, and to propose a parameter-visualization scheme to interpret what the networks have learned.
Sparsely Constrained Neural Networks for Model Discovery of PDEs
TLDR
A modular framework is presented that combines deep-learning-based approaches with an arbitrary sparse regression technique, and several examples demonstrate that this combination facilitates and enhances model discovery tasks.
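To make the sparse-regression half of such a pipeline concrete, the sketch below assembles a small library of candidate PDE terms from synthetic heat-equation data and applies sequentially thresholded least squares to pick out the active term. In the cited framework a neural network would supply smooth derivatives; plain finite differences are used here purely for illustration.

```python
# Sparse regression over a library of candidate PDE terms (illustrative only).
import numpy as np

# Synthetic data for the heat equation u_t = 0.5 * u_xx.
nu = 0.5
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
t = np.linspace(0, 1, 100)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.exp(-nu * T) * np.sin(X)                 # exact solution for wavenumber 1

dx, dt = x[1] - x[0], t[1] - t[0]
u_t = np.gradient(u, dt, axis=1)
u_x = np.gradient(u, dx, axis=0)
u_xx = np.gradient(u_x, dx, axis=0)

# Library of candidate right-hand-side terms.
library = np.stack([u, u_x, u_xx, u * u_x], axis=-1).reshape(-1, 4)
names = ["u", "u_x", "u_xx", "u*u_x"]
target = u_t.reshape(-1)

def stlsq(A, b, threshold=0.05, iters=10):
    """Sequentially thresholded least squares: small coefficients are zeroed."""
    coef = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(coef) < threshold
        coef[small] = 0.0
        big = ~small
        if big.any():
            coef[big] = np.linalg.lstsq(A[:, big], b, rcond=None)[0]
    return coef

coef = stlsq(library, target)
print(dict(zip(names, np.round(coef, 3))))       # expect ~{'u_xx': 0.5}, rest ~0
```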
...

References

Showing 1-10 of 73 references
Deep learning of dynamics and signal-noise decomposition with time-stepping constraints
Variational encoding of complex dynamics.
TLDR
This work demonstrates the use of a time-lagged VAE, or variational dynamics encoder (VDE), to reduce complex, nonlinear processes to a single embedding with high fidelity to the underlying dynamics, and shows how the VDE is able to capture nontrivial dynamics in a variety of examples.
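A minimal time-lagged autoencoder conveys the core idea: encode the state at time t and train the decoder to predict the state at time t + tau, so the latent coordinate is forced to track slow dynamics. The actual VDE is variational and adds further loss terms; the data, sizes, and lag below are placeholders.

```python
# Time-lagged autoencoder sketch: reconstruct the state tau steps ahead.
import torch
import torch.nn as nn

tau, dim = 10, 3
encoder = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))
decoder = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, dim))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

# Fake trajectory data of shape (n_frames, dim); replace with real simulation frames.
traj = torch.cumsum(0.1 * torch.randn(5000, dim), dim=0)
x_t, x_lag = traj[:-tau], traj[tau:]

for _ in range(500):
    opt.zero_grad()
    z = encoder(x_t)                    # 1-D slow coordinate
    loss = ((decoder(z) - x_lag) ** 2).mean()
    loss.backward()
    opt.step()
```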
Data-driven discovery of PDEs in complex datasets
Deep Hidden Physics Models: Deep Learning of Nonlinear Partial Differential Equations
  • M. Raissi, J. Mach. Learn. Res., 2018
TLDR
This work puts forth a deep learning approach for discovering nonlinear partial differential equations from scattered and potentially noisy observations in space and time, approximating both the unknown solution and the nonlinear dynamics with two deep neural networks.
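The two-network idea can be sketched as follows: one network approximates the solution u(x, t) from scattered observations, a second approximates the unknown dynamics, and automatic differentiation supplies the derivatives needed for the PDE residual. Network sizes, the placeholder data, and the loss weighting are illustrative assumptions, not Raissi's code.

```python
# Two networks: u_net approximates u(x, t); f_net approximates the unknown
# dynamics as a function of (u, u_x, u_xx). Autograd supplies the derivatives.
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
f_net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))

def pde_residual(xt):                     # xt columns: (x, t)
    xt = xt.clone().requires_grad_(True)
    u = u_net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    return u_t - f_net(torch.cat([u, u_x, u_xx], dim=-1))

# Training minimizes data misfit plus the residual at collocation points.
xt_data = torch.rand(256, 2)              # observed (x, t) locations
u_data = torch.sin(xt_data[:, :1])        # placeholder observations
xt_col = torch.rand(1024, 2)              # collocation points
opt = torch.optim.Adam(list(u_net.parameters()) + list(f_net.parameters()), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = ((u_net(xt_data) - u_data) ** 2).mean() + (pde_residual(xt_col) ** 2).mean()
    loss.backward()
    opt.step()
```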
Uniformly accurate machine learning-based hydrodynamic models for kinetic equations
TLDR
It is demonstrated that machine learning can indeed help to build reliable multiscale models for problems with which classical multiscale methods have had trouble, and that the reduced model achieves uniform accuracy across a wide range of Knudsen numbers spanning from the hydrodynamic limit to free molecular flow.
PDE-Net: Learning PDEs from Data
TLDR
Numerical experiments show that the PDE-Net has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment.
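At its core, this amounts to making the finite-difference stencils of a numerical scheme learnable. The sketch below uses a single trainable convolution kernel, initialized near a Laplacian stencil, inside a forward-Euler stepper; the real PDE-Net additionally constrains its filters with moment conditions, which is omitted here.

```python
# Learnable convolution kernel acting as a discrete differential operator,
# applied repeatedly as a forward-Euler time stepper.
import torch
import torch.nn as nn

class ConvStepper(nn.Module):
    def __init__(self, dt=0.01):
        super().__init__()
        self.op = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
        # Initialize near a 5-point Laplacian stencil; training refines it.
        lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
        with torch.no_grad():
            self.op.weight.copy_(lap.view(1, 1, 3, 3))
        self.dt = dt

    def forward(self, u, n_steps):             # u: (batch, 1, H, W)
        for _ in range(n_steps):
            u = u + self.dt * self.op(u)        # forward Euler with learned operator
        return u

model = ConvStepper()
u0 = torch.randn(4, 1, 32, 32)
u3 = model(u0, n_steps=3)                       # prediction 3 steps ahead
```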
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
TLDR
This paper theoretically shows that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, and trains more than 12000 models covering most prominent methods and evaluation metrics on seven different data sets.
Unsupervised learning of phase transitions: from principal component analysis to variational autoencoders
  • S. Wetzel, Physical Review E, 2017
TLDR
Unsupervised machine learning techniques for learning features that best describe configurations of the two-dimensional Ising model and the three-dimensional XY model are examined, and the most promising algorithms are found to be principal component analysis and variational autoencoders.
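The PCA part of such an analysis is short enough to sketch directly: the first principal component of raw spin configurations tracks the magnetization, which is why PCA alone can separate ordered from disordered phases. The toy configurations below are i.i.d. stand-ins for Monte Carlo samples, used only to keep the example self-contained.

```python
# PCA on raw spin configurations; PC1 tracks the magnetization.
import numpy as np

rng = np.random.default_rng(2)
L = 16

def toy_configs(p_up, n=200):
    """Crude stand-in for Ising samples: i.i.d. spins with P(spin=+1) = p_up."""
    return np.where(rng.random((n, L * L)) < p_up, 1, -1)

ordered = toy_configs(0.95)      # low-temperature-like, magnetized
disordered = toy_configs(0.5)    # high-temperature-like, unmagnetized
data = np.vstack([ordered, disordered]).astype(float)

centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]           # projection on the first principal component
print(pc1[:200].mean(), pc1[200:].mean())   # the two phases separate along PC1
```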
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence…
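The defining change relative to a standard VAE is a single hyperparameter: the KL term in the evidence lower bound is scaled by a factor beta > 1, which pressures the latent code toward independent factors. A generic sketch of that objective, with placeholder tensors standing in for an actual encoder/decoder pair, is below.

```python
# Generic beta-VAE objective: reconstruction term plus beta-weighted KL term.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Example call with placeholder tensors from some encoder/decoder pair.
x = torch.rand(16, 784)
x_recon, mu, logvar = torch.rand(16, 784), torch.zeros(16, 10), torch.zeros(16, 10)
print(beta_vae_loss(x, x_recon, mu, logvar).item())
```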
...