Multiphase flow applications of nonintrusive reduced-order models with Gaussian process emulation

Authors: Themistoklis Botsas, Indranil Pan, Lachlan Robert Mason, Omar K. Matar
Journal: Data-Centric Engineering
Abstract

Reduced-order models (ROMs) are computationally inexpensive simplifications of high-fidelity, complex models. Such models can be found in computational fluid dynamics, where they can be used to predict the characteristics of multiphase flows. In previous work, we presented a ROM analysis framework that coupled compression techniques, such as autoencoders, with Gaussian process regression in the latent space. This pairing has significant advantages over the standard encoding–decoding…
1 Citation

An AI-based Domain-Decomposition Non-Intrusive Reduced-Order Model for Extended Domains applied to Multiphase Flow in Pipes

This paper presents a new AI-based non-intrusive reduced-order model within a domain decomposition framework (AI-DDNIROM), which is capable of making predictions for domains significantly larger than the domain used in training.



References

A Deep Learning based Approach to Reduced Order Modeling for Turbulent Flow Control using LSTM Neural Networks

A deep-learning-based approach is demonstrated for building a ROM using the POD basis of canonical DNS datasets for turbulent flow control applications; the study finds that a type of recurrent neural network, the long short-term memory (LSTM) network, shows attractive potential for modeling the temporal dynamics of turbulence.
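The recurrence underlying such a ROM can be sketched from scratch: an LSTM cell advances a hidden state over a sequence of POD coefficients. The NumPy implementation below uses random, untrained weights and synthetic coefficients purely to show the gating structure; it is not the cited architecture:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step. W: (4H, D), U: (4H, H), b: (4H,)."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                    # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
D, H, T = 4, 8, 20                          # POD modes, hidden size, time steps
W = rng.normal(scale=0.1, size=(4*H, D))
U = rng.normal(scale=0.1, size=(4*H, H))
b = np.zeros(4*H)

# roll the cell over a synthetic sequence of POD coefficients
pod_coeffs = rng.normal(size=(T, D))
h, c = np.zeros(H), np.zeros(H)
for x in pod_coeffs:
    h, c = lstm_step(x, h, c, W, U, b)
```

In practice the hidden state would feed a readout layer predicting the next step's POD coefficients, and the weights would be trained on DNS data.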

Deep Fluids: A Generative Network for Parameterized Fluid Simulations

The proposed generative model is optimized for fluids by a novel loss function that guarantees divergence‐free velocity fields at all times, thus enabling applications such as fast construction of simulations, interpolation of fluids with different parameters, time re‐sampling, latent space simulations, and compression of fluid simulation data.
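The divergence-free guarantee rests on a standard construction: predict a scalar stream function and take its curl, which has zero divergence by construction. A NumPy check on a synthetic 2-D field (not the network's output):

```python
import numpy as np

n, h = 64, 1.0 / 64
y, x = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
psi = np.sin(2 * np.pi * x) * np.sin(2 * np.pi * y)   # scalar stream function

# velocity field as the curl of the stream function: u = dpsi/dy, v = -dpsi/dx
u = np.gradient(psi, h, axis=0)
v = -np.gradient(psi, h, axis=1)

# divergence du/dx + dv/dy vanishes (up to floating point) because the
# finite-difference operators along different axes commute
div = np.gradient(u, h, axis=1) + np.gradient(v, h, axis=0)
```

Because the cancellation is structural rather than learned, any stream function the network outputs yields a divergence-free velocity field.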

Data-driven discretization: machine learning for coarse graining of partial differential equations

This work introduces data-driven discretization, a method for learning optimized approximations to PDEs based on actual solutions of the known underlying equations; the approximations are optimized end-to-end to best satisfy the equations on a low-resolution grid.

Deep convolutional recurrent autoencoders for learning low-dimensional feature dynamics of fluid systems

This work proposes a deep-learning-based strategy for nonlinear model reduction inspired by projection-based model reduction, where the idea is to identify an optimal low-dimensional representation and evolve it in time.

Learning data-driven discretizations for partial differential equations

Data-driven discretization is proposed, a method for learning optimized approximations to PDEs based on actual solutions to the known underlying equations that uses neural networks to estimate spatial derivatives.
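The core idea, fitting discretization coefficients to actual solutions rather than deriving them from Taylor expansions, can be sketched with a linear stand-in for the neural network: a least-squares fit of a 3-point stencil for the first derivative on sampled sinusoids (hypothetical toy data):

```python
import numpy as np

# coarse grid and a family of sample solutions (sinusoids of varying frequency)
n, h = 16, 1.0 / 16
x = np.arange(n) * h

# least-squares system: find stencil coefficients c such that
#   c[0]*f(x-h) + c[1]*f(x) + c[2]*f(x+h)  ~  f'(x)
# holds as well as possible across the sample solutions
rows, targets = [], []
for k in (1, 2, 3):
    f = np.sin(2 * np.pi * k * x)
    df = 2 * np.pi * k * np.cos(2 * np.pi * k * x)   # exact derivative
    for j in range(1, n - 1):
        rows.append([f[j - 1], f[j], f[j + 1]])
        targets.append(df[j])
A, b = np.array(rows), np.array(targets)
c, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The learned stencil resembles the centered difference [-1/(2h), 0, 1/(2h)] but is rescaled to fit the sampled frequencies better; data-driven discretization replaces this fixed linear fit with a neural network that conditions the coefficients on the local solution.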

Numerical simulation, clustering, and prediction of multicomponent polymer precipitation

This work uses a modified Cahn–Hilliard model to simulate polymer precipitation and applies machine-learning techniques for clustering and subsequent prediction of the simulated polymer-blend images, used in conjunction with simulations to reduce the required computational cost.

GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration

This work presents an efficient and general approach to GP inference based on Blackbox Matrix-Matrix multiplication (BBMM), which uses a modified batched version of the conjugate gradients algorithm to derive all terms needed for training and inference in a single call.
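At the heart of BBMM is replacing Cholesky factorization with iterative solves that touch the kernel matrix only through matrix-vector products. A plain NumPy conjugate-gradients sketch of that building block (toy RBF kernel matrix; GPyTorch's batched, preconditioned version is considerably more involved):

```python
import numpy as np

def conjugate_gradients(matvec, b, tol=1e-10, max_iter=200):
    """Solve A x = b for SPD A, accessed only through matvec(v) = A @ v."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        step = rs / (p @ Ap)
        x = x + step * p
        r = r - step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# toy GP setting: RBF kernel matrix plus noise jitter (symmetric positive definite)
X = np.linspace(0.0, 1.0, 30)[:, None]
K = np.exp(-0.5 * (X - X.T) ** 2 / 0.1 ** 2) + 1e-2 * np.eye(30)
y = np.sin(4.0 * X[:, 0])

# the representer weights K^{-1} y, the kind of quantity BBMM computes with batched CG
weights = conjugate_gradients(lambda v: K @ v, y)
```

Because only matrix-vector products are needed, the kernel matrix never has to be factorized or even formed explicitly, which is what makes the approach amenable to GPU batching.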

Doubly Stochastic Variational Inference for Deep Gaussian Processes

This work presents a doubly stochastic variational inference algorithm for deep Gaussian processes (DGPs) that does not force independence between layers, and provides strong empirical evidence that the inference scheme works well in practice for both classification and regression.

Deep Gaussian Processes

Deep Gaussian process (GP) models are introduced, and model selection via the variational bound shows that a five-layer hierarchy is justified even when modelling a digit dataset containing only 150 examples.