Energy networks for state estimation with random sensors using sparse labels

by Yash Kumar and Souvik Lal Chakraborty
State estimation is required whenever we deal with high-dimensional dynamical systems, since complete measurements are rarely available. It is key to gaining insight, performing control, or optimizing design. Most deep learning-based approaches require high-resolution labels and work with fixed sensor locations, which restricts their scope. Moreover, performing proper orthogonal decomposition (POD) on sparse data is nontrivial. To tackle these problems, we propose a technique with an…

Leveraging reduced-order models for state estimation using deep learning

A neural network is a natural choice for this estimation problem, since a physical interpretation of the relationship between the reduced state and the sensor measurements is rarely obvious; such networks are found to outperform common linear estimation alternatives.
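As a toy illustration of the reduced-order estimation setting, the sketch below builds a POD basis from synthetic snapshots and recovers a state from a few random point sensors with the common linear (least-squares) baseline that learned estimators are compared against. All data and names (`Phi`, `sensor_idx`) are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: 200 states of dimension 500 near a 5-dim subspace.
r, n, m = 5, 500, 200
true_modes = np.linalg.qr(rng.standard_normal((n, r)))[0]
snapshots = true_modes @ rng.standard_normal((r, m)) \
    + 0.01 * rng.standard_normal((n, m))

# POD via SVD of the snapshot matrix; keep the leading r modes.
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]                          # POD basis, shape (n, r)

# Sparse random sensors: measure p entries of an unseen state.
p = 20
sensor_idx = rng.choice(n, size=p, replace=False)
x_true = true_modes @ rng.standard_normal(r)
y = x_true[sensor_idx]                  # sensor measurements

# Linear estimate: least squares for coefficients a with Phi[sensors] @ a ≈ y.
a_hat, *_ = np.linalg.lstsq(Phi[sensor_idx], y, rcond=None)
x_hat = Phi @ a_hat                     # reconstructed full state

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

With more sensors than retained modes the least-squares system is overdetermined, which is what keeps this linear baseline well-posed.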

Shallow Learning for Fluid Flow Reconstruction with Limited Sensors and Limited Data

This work proposes a shallow neural network-based learning methodology for fluid flow reconstruction that learns an end-to-end mapping between the sensor measurements and the high-dimensional fluid flow field, without any heavy preprocessing on the raw data.
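A minimal numpy sketch of such an end-to-end mapping, assuming entirely synthetic data: a one-hidden-layer network trained by plain gradient descent to map 8 fixed sensor readings to a 64-dimensional field. Sizes, learning rate, and data generation are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic fields of dimension 64 from 3 latent factors; sensors read
# 8 fixed entries of each field.
n, p, latent, m = 64, 8, 3, 400
basis = rng.standard_normal((n, latent))
fields = (basis @ rng.standard_normal((latent, m))).T   # (m, n)
sensor_idx = rng.choice(n, size=p, replace=False)
X = fields[:, sensor_idx]                               # (m, p) measurements

# One-hidden-layer ("shallow") network: sensors -> hidden -> full field.
h, lr = 32, 1e-2
W1 = 0.1 * rng.standard_normal((p, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((h, n)); b2 = np.zeros(n)

losses = []
for epoch in range(500):
    H = np.tanh(X @ W1 + b1)
    err = H @ W2 + b2 - fields           # (m, n) residual
    losses.append(float(np.mean(err ** 2)))
    # Manual backprop through the two layers.
    gW2 = H.T @ err / m; gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / m;  gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Note the absence of any preprocessing: raw sensor values go straight into the network, matching the end-to-end framing above.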

Modeling the Dynamics of PDE Systems with Physics-Constrained Deep Auto-Regressive Networks

Super-resolution and denoising of fluid flow using physics-informed convolutional neural networks without high-resolution labels

This work presents a novel physics-informed deep learning-based super-resolution (SR) solution using convolutional neural networks (CNNs), which produces high-resolution (HR) flow fields from low-resolution (LR) inputs in a high-dimensional parameter space by leveraging the conservation laws and boundary conditions of fluid flows.
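The key idea of training without high-resolution labels is to score outputs by a physics residual instead of a supervised loss. A hedged sketch of one such residual, mass conservation for an incompressible 2D flow, using finite differences on a synthetic field (the grid, streamfunction, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# 2D velocity field on a uniform grid; spacing h assumed to be 1.
N, h = 64, 1.0
X, Y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

# Divergence-free field from a streamfunction psi: u = dpsi/dy, v = -dpsi/dx.
psi = np.sin(2 * np.pi * X / N) * np.cos(2 * np.pi * Y / N)
u = np.gradient(psi, h, axis=1)
v = -np.gradient(psi, h, axis=0)

def divergence(u, v, h=1.0):
    """Central-difference mass-conservation residual du/dx + dv/dy."""
    return np.gradient(u, h, axis=0) + np.gradient(v, h, axis=1)

def physics_loss(u, v):
    # Penalty a physics-informed model would minimize instead of an HR label loss.
    return float(np.mean(divergence(u, v) ** 2))

clean = physics_loss(u, v)
noisy = physics_loss(u + 0.3 * rng.standard_normal(u.shape), v)
print(f"clean: {clean:.2e}  noisy: {noisy:.2e}")
```

The physically consistent field incurs (near-)zero penalty while the corrupted one is penalized, which is exactly the signal that replaces high-resolution labels during training.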

A Tutorial on Energy-Based Learning

The EBM approach provides a common theoretical framework for many learning models, including traditional discriminative and generative approaches, as well as graph-transformer networks, conditional random fields, maximum margin Markov networks, and several manifold learning methods.
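A minimal sketch of the two ingredients of energy-based learning, inference as energy minimization and learning as pushing down the energy of observed pairs, using a toy quadratic energy. The matrix `W`, the data, and the step size are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy quadratic energy E(x, y) = ||y - W x||^2 with a hypothetical
# parameter matrix W; low energy marks compatible (input, output) pairs.
W = rng.standard_normal((2, 4))

def energy(x, y):
    return float(np.sum((y - W @ x) ** 2))

def grad_W(x, y):
    # dE/dW for the quadratic energy above.
    return -2.0 * np.outer(y - W @ x, x)

x = rng.standard_normal(4)
y_obs = rng.standard_normal(2)     # observed ("correct") output
y_bad = rng.standard_normal(2)     # alternative candidate output

# Inference in an EBM: pick the lowest-energy candidate output.
y_star = min([y_obs, y_bad], key=lambda y: energy(x, y))

# Learning: push down the energy of the observed pair (contrastive
# methods additionally push up the energy of negatives, omitted here).
eta = 0.01
before = energy(x, y_obs)
W = W - eta * grad_W(x, y_obs)
after = energy(x, y_obs)
print(f"energy of observed pair: {before:.3f} -> {after:.3f}")
```

The framework's generality comes from the fact that nothing above constrains the form of the energy function: swapping the quadratic for a neural network changes the gradients but not the recipe.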

Training energy-based models for time-series imputation

This paper presents a strategy for training energy-based graphical models directly for imputation, bypassing difficulties that probabilistic approaches would face, and finds that the proposed training methods outperform the Contrastive Divergence learning algorithm.
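Imputation with an energy model amounts to minimizing the energy over the missing entries while holding observed ones fixed. The sketch below does this with a hand-picked smoothness energy on a synthetic sine series; the energy, series, and step size are illustrative assumptions, not the paper's learned model.

```python
import numpy as np

# Toy series with a missing block; the energy scores a sequence by the
# squared differences of neighbours, so minimizing it over the missing
# entries (observed values held fixed) interpolates smoothly.
t = np.linspace(0, 2 * np.pi, 50)
series = np.sin(t)
missing = np.zeros(50, dtype=bool)
missing[15:25] = True
y = series.copy()
y[missing] = 0.0                   # crude initialization

def energy(y):
    return float(np.sum(np.diff(y) ** 2))

lr = 0.2
for _ in range(2000):              # gradient descent on the energy
    d = np.diff(y)
    g = np.zeros_like(y)           # dE/dy[k] = 2 d[k-1] - 2 d[k]
    g[:-1] -= 2 * d
    g[1:] += 2 * d
    y[missing] -= lr * g[missing]  # update only the missing entries

err = float(np.max(np.abs(y[missing] - series[missing])))
print(f"max imputation error: {err:.3f}")
```

Training the energy function itself, rather than fixing it by hand as here, is the part the paper's method addresses.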

Composing graphical models with neural networks for structured representations and fast inference

A general modeling and inference framework that composes probabilistic graphical models with deep learning methods and combines their respective strengths is proposed, giving a scalable algorithm that leverages stochastic variational inference, natural gradients, graphical model message passing, and the reparameterization trick.
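Of the ingredients listed above, the reparameterization trick is the easiest to isolate: sampling is rewritten as a deterministic function of noise so gradients can flow through it. A self-contained numpy sketch on a toy Gaussian (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Reparameterization trick: write z ~ N(mu, sigma^2) as z = mu + sigma * eps
# with eps ~ N(0, 1), so a Monte Carlo estimate of d/dmu E[f(z)] becomes
# an average of f'(z) over eps draws.
mu, sigma = 1.5, 0.7
eps = rng.standard_normal(100_000)
z = mu + sigma * eps

# f(z) = z^2: the exact gradient is d/dmu E[z^2] = E[2 z] = 2 mu.
grad_est = float(np.mean(2 * z))
print(f"estimated {grad_est:.3f}, exact {2 * mu:.3f}")
```

This low-variance pathwise gradient is what makes the stochastic variational inference in such composed models scale.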