Multiphase flow applications of nonintrusive reduced-order models with Gaussian process emulation
@article{Botsas2021MultiphaseFA,
  title   = {Multiphase flow applications of nonintrusive reduced-order models with Gaussian process emulation},
  author  = {Themistoklis Botsas and Indranil Pan and Lachlan Robert Mason and Omar K. Matar},
  journal = {Data-Centric Engineering},
  year    = {2021},
  volume  = {3}
}
Abstract Reduced-order models (ROMs) are computationally inexpensive simplifications of high-fidelity, complex models. Such models can be found in computational fluid dynamics, where they can be used to predict the characteristics of multiphase flows. In previous work, we presented a ROM analysis framework that couples compression techniques, such as autoencoders, with Gaussian process regression in the latent space. This pairing has significant advantages over the standard encoding–decoding…
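The framework the abstract describes — compress flow snapshots to a latent space, regress the latent trajectory with a Gaussian process, then decode predictions back to the full field — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: PCA stands in for the autoencoder, the GP is a minimal hand-rolled RBF-kernel regressor, and all sizes and names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "snapshots": 50 time steps of a 200-dimensional field.
t = np.linspace(0.0, 1.0, 50)
snapshots = np.outer(np.sin(2 * np.pi * t), rng.standard_normal(200))
snapshots += 0.01 * rng.standard_normal(snapshots.shape)

# --- Compression (PCA standing in for an autoencoder) ---
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
k = 2                                   # latent dimension
latent = (snapshots - mean) @ Vt[:k].T  # encode: shape (50, k)

# --- GP regression in the latent space (time -> latent coordinates) ---
def rbf(a, b, length=0.1):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

noise = 1e-4
K = rbf(t, t) + noise * np.eye(len(t))
alpha = np.linalg.solve(K, latent)      # shape (50, k)

t_star = np.linspace(0.0, 1.0, 101)     # query times
latent_pred = rbf(t_star, t) @ alpha    # GP posterior mean in latent space

# --- Decode back to the full field ---
field_pred = latent_pred @ Vt[:k] + mean
print(field_pred.shape)                 # (101, 200)
```

The advantage over decoding directly is that the GP only has to model a k-dimensional trajectory rather than the full field, which is what makes the latent-space pairing cheap.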
One Citation
An AI-based Domain-Decomposition Non-Intrusive Reduced-Order Model for Extended Domains applied to Multiphase Flow in Pipes
- Computer Science, Engineering · Physics of Fluids
- 2022
This paper presents a new AI-based non-intrusive reduced-order model within a domain decomposition framework (AI-DDNIROM), which is capable of making predictions for domains significantly larger than the domain used in training.
References
Showing 1–10 of 27 references
Latent-space time evolution of non-intrusive reduced-order models using Gaussian process emulation
- Computer Science · ArXiv
- 2020
A Deep Learning based Approach to Reduced Order Modeling for Turbulent Flow Control using LSTM Neural Networks
- Computer Science
- 2018
A deep-learning-based approach to building a ROM from the POD basis of canonical DNS datasets is demonstrated for turbulent-flow-control applications; a type of recurrent neural network, the long short-term memory (LSTM) network, shows attractive potential for modeling the temporal dynamics of turbulence.
Deep Fluids: A Generative Network for Parameterized Fluid Simulations
- Computer Science · Comput. Graph. Forum
- 2019
The proposed generative model is optimized for fluids by a novel loss function that guarantees divergence-free velocity fields at all times, thus enabling applications such as fast construction of simulations, interpolation of fluids with different parameters, time re-sampling, latent space simulations, and compression of fluid simulation data.
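The divergence-free guarantee in Deep Fluids comes from predicting a stream function and taking its curl rather than predicting velocity directly. A minimal numpy sketch (our own illustration, not the paper's code) shows why this works even discretely: with periodic central differences, the curl of any scalar field has zero divergence up to round-off, because the two difference operators commute.

```python
import numpy as np

rng = np.random.default_rng(1)
psi = rng.standard_normal((64, 64))   # arbitrary scalar stream function on a periodic grid

def ddx(f):
    # Central difference along axis 0, periodic boundary.
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / 2

def ddy(f):
    # Central difference along axis 1, periodic boundary.
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / 2

u, v = ddy(psi), -ddx(psi)            # velocity = 2D curl of psi
div = ddx(u) + ddy(v)                 # = ddx(ddy(psi)) - ddy(ddx(psi)) = 0
print(np.abs(div).max())              # ~0, at floating-point round-off level
```

A network that outputs psi therefore produces divergence-free velocities by construction, for any weights, which is stronger than penalizing divergence in the loss.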
Data-driven discretization: machine learning for coarse graining of partial differential equations
- Computer Science
- 2018
This work introduces data-driven discretization, a method for learning optimized approximations to PDEs from actual solutions of the known underlying equations; the approximations are trained end-to-end to best satisfy the equations on a low-resolution grid.
Deep convolutional recurrent autoencoders for learning low-dimensional feature dynamics of fluid systems
- Computer Science · ArXiv
- 2018
This work proposes a deep learning-based strategy for nonlinear model reduction that is inspired by projection-based model reduction where the idea is to identify some optimal low-dimensional representation and evolve it in time.
Learning data-driven discretizations for partial differential equations
- Computer Science · Proceedings of the National Academy of Sciences
- 2019
Data-driven discretization is proposed, a method for learning optimized approximations to PDEs based on actual solutions to the known underlying equations that uses neural networks to estimate spatial derivatives.
Numerical simulation, clustering, and prediction of multicomponent polymer precipitation
- Materials Science, Computer Science · Data-Centric Engineering
- 2020
This work uses a modified Cahn–Hilliard model to simulate polymer precipitation, and applies machine-learning techniques, in conjunction with the simulations, for clustering and subsequent prediction of the simulated polymer-blend images, reducing the required computational cost.
GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration
- Computer Science · NeurIPS
- 2018
This work presents an efficient and general approach to GP inference based on blackbox matrix–matrix multiplication (BBMM), which uses a modified batched version of the conjugate gradient algorithm to derive all terms needed for training and inference in a single call.
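The core move in BBMM is to replace Cholesky-based solves with routines that touch the kernel matrix only through matrix–vector products, so the GP solve K⁻¹y can run batched on a GPU. A bare-bones numpy conjugate-gradient solver (our own illustration of the principle, far simpler than GPyTorch's implementation) makes the "multiplication-only" access pattern concrete:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3))
# RBF kernel matrix plus jitter, so K is symmetric positive definite.
K = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1)) + 1e-2 * np.eye(100)
y = rng.standard_normal(100)

def cg(matvec, b, tol=1e-10, maxiter=500):
    """Solve A x = b using only products A @ v (never A itself)."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x = cg(lambda v: K @ v, y)
print(np.linalg.norm(K @ x - y))   # small residual: x approximates K^{-1} y
```

Because `cg` only needs `matvec`, the same structure works when K is never materialized, which is what lets BBMM scale GP inference with GPU-batched multiplies.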
Doubly Stochastic Variational Inference for Deep Gaussian Processes
- Computer Science · NIPS
- 2017
This work presents a doubly stochastic variational inference algorithm for deep Gaussian processes (DGPs) that does not force independence between layers, and provides strong empirical evidence that the inference scheme works well in practice for both classification and regression.
Deep Gaussian Processes
- Computer Science · AISTATS
- 2013
Deep Gaussian process (GP) models are introduced, and model selection by the variational bound shows that a five-layer hierarchy is justified even when modelling a digit data set containing only 150 examples.