Generative Modeling of Turbulence

Claudia Drygala, Benjamin Winhart, Francesca di Mare, Hanno Gottschalk
We present a mathematically well-founded approach for the synthetic modeling of turbulent flows using generative adversarial networks (GANs). Based on an analysis of chaotic, deterministic systems in terms of ergodicity, we outline a mathematical proof that GANs can learn to sample state snapshots from the invariant measure of the chaotic system. Building on this analysis, we study a hierarchy of chaotic systems, starting with the Lorenz attractor and then carrying on to the modeling of…
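The ergodicity argument rests on the fact that, after a burn-in, snapshots along a single trajectory approximate draws from the invariant measure. A minimal sketch of generating such training snapshots for the Lorenz attractor (the GAN itself is omitted; step size and parameters are the standard illustrative choices, not necessarily those of the paper):

```python
import numpy as np

def lorenz_snapshots(n_steps=10000, dt=0.005,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler and return state
    snapshots. After discarding a burn-in, the trajectory samples the
    attractor, so the snapshots approximate draws from the invariant
    measure (by ergodicity) and can serve as GAN training data."""
    x = np.array([1.0, 1.0, 1.0])
    states = np.empty((n_steps, 3))
    for i in range(n_steps):
        dx = np.array([
            sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2],
        ])
        x = x + dt * dx
        states[i] = x
    return states[n_steps // 2:]  # discard the transient burn-in

snapshots = lorenz_snapshots()
print(snapshots.shape)  # (5000, 3)
```

Each row is one state snapshot; a GAN trained on these rows would be asked to reproduce their distribution, i.e. the invariant measure on the attractor.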

Investigation of nonlocal data-driven methods for subgrid-scale stress modeling in large eddy simulation

A nonlocal subgrid-scale stress (SGS) model is developed based on a convolutional neural network (CNN), a powerful supervised data-driven method and an ideal approach to naturally…

Constraining Gaussian Processes to Systems of Linear Ordinary Differential Equations

A novel algorithmic and symbolic construction for covariance functions of Gaussian processes (GPs) whose realizations strictly follow a system of linear homogeneous ODEs with constant coefficients; the resulting models are called LODE-GPs.
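A toy illustration of the idea (not the paper's symbolic construction): for the ODE f'' + f = 0, the kernel k(x, x') = cos(x − x') factorizes as sin(x)sin(x') + cos(x)cos(x'), so GP sample paths have the form a·sin(x) + b·cos(x) and satisfy the ODE exactly.

```python
import numpy as np

# Toy analogue of an ODE-constrained GP: for f'' + f = 0, the kernel
# k(x, x') = cos(x - x') has sample paths f(x) = a*sin(x) + b*cos(x)
# with a, b ~ N(0, 1), so every realization solves the ODE exactly.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)

# Exact GP sample via the kernel's rank-2 feature map.
Phi = np.stack([np.sin(x), np.cos(x)], axis=1)
f = Phi @ rng.standard_normal(2)

# Check the ODE residual f'' + f with central differences (interior points).
h = x[1] - x[0]
f_xx = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2
residual = float(np.max(np.abs(f_xx + f[1:-1])))
print(residual)  # small: discretization error only
```

LODE-GPs generalize this by constructing such constrained covariances symbolically for arbitrary systems of linear homogeneous constant-coefficient ODEs.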

Deep unsupervised learning of turbulence for inflow generation at various Reynolds numbers

From Deep to Physics-Informed Learning of Turbulence: Diagnostics

Tests validating progress toward the acceleration and automation of hydrodynamic codes in the regime of developed turbulence, using three deep learning (DL) neural network (NN) schemes trained on direct numerical simulations of turbulence, suggest a path toward improving the reproducibility of the large-scale geometry of turbulence with NNs.

Turbulence Enrichment using Physics-informed Generative Adversarial Networks

This work develops physics-based methods for generative turbulence enrichment, incorporating a physics-informed learning approach via a modification to the loss function that minimizes the residuals of the governing equations for the generated data.
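One common instantiation of such a residual term (illustrative, not necessarily the paper's exact formulation) penalizes violations of the continuity equation ∇·u = 0 in generated incompressible velocity fields:

```python
import numpy as np

def divergence_residual(u, v, dx, dy):
    """Mean-squared continuity residual du/dx + dv/dy of a 2D velocity
    field, using central differences on interior points. Added to a
    generator loss, it penalizes samples that violate mass conservation."""
    du_dx = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2.0 * dx)
    dv_dy = (v[1:-1, 2:] - v[1:-1, :-2]) / (2.0 * dy)
    return float(np.mean((du_dx + dv_dy) ** 2))

# Divergence-free test field from a streamfunction psi = cos(x)*sin(y):
# u = dpsi/dy, v = -dpsi/dx, so du/dx + dv/dy = 0 analytically.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")  # axis 0 -> x, axis 1 -> y
u = np.cos(X) * np.cos(Y)
v = np.sin(X) * np.sin(Y)
print(divergence_residual(u, v, h, h))  # near zero
```

In a physics-informed GAN, a term like this (and analogous momentum-equation residuals) is weighted and added to the adversarial loss of the generator.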

Unsupervised deep learning for super-resolution reconstruction of turbulence

An unsupervised learning model that adopts a cycle-consistent generative adversarial network (CycleGAN), which can be trained with unpaired turbulence data, is presented for super-resolution reconstruction of turbulent fields.

Machine-learning-based spatio-temporal super resolution reconstruction of turbulent flows

The present model reconstructs highly resolved turbulent flows from very coarse input data in space, and also reproduces the temporal evolution for an appropriately chosen time interval, suggesting that the method can perform a range of flow reconstructions in support of computational and experimental efforts.

Machine learning methods for turbulence modeling in subsonic flows around airfoils

The Reynolds-averaged Navier-Stokes (RANS) method will still play a vital role in aerospace engineering over the coming decades. Although RANS models are widely used, empiricism and large…

A Convenient Infinite Dimensional Framework for Generative Adversarial Learning

This work proposes an infinite-dimensional theoretical framework for generative adversarial learning and shows that the Rosenblatt transformation induces an optimal generator, which is realizable in the hypothesis space of $\alpha$-Hölder differentiable generators.

Machine Learning-augmented Predictive Modeling of Turbulent Separated Flows over Airfoils

By incorporating data that can reveal the form of the innate model discrepancy, the applicability of data-driven turbulence models can be extended to more general flows.

Super-resolution reconstruction of turbulent flows with machine learning

We use machine learning to perform super-resolution analysis of grossly under-resolved turbulent flow field data to reconstruct the high-resolution flow field. Two machine learning models are…

Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data, and proposes a novel neural network architecture which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor.
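A minimal sketch of the tensor-basis idea, simplified to the first two basis tensors (the full tensor-basis network uses ten): the anisotropy is expressed as b = Σ g_n T_n with T_1 = S and T_2 = SR − RS, where the scalar coefficients g_n depend only on invariants, so the prediction rotates with the inputs. Function and coefficient values here are illustrative.

```python
import numpy as np

def anisotropy_from_basis(S, R, coeffs):
    """Combine the first two integrity-basis tensors with scalar
    coefficients: b = g1*S + g2*(S@R - R@S). Because the g_n depend only
    on rotation-invariant scalars, the output is rotationally equivariant."""
    g1, g2 = coeffs
    return g1 * S + g2 * (S @ R - R @ S)

# Equivariance check: rotating the inputs rotates the output the same way.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
S = 0.5 * (A + A.T)                               # symmetric strain-rate part
R = 0.5 * (A - A.T)                               # antisymmetric rotation-rate part
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix

b = anisotropy_from_basis(S, R, (0.3, -0.1))
b_rot = anisotropy_from_basis(Q @ S @ Q.T, Q @ R @ Q.T, (0.3, -0.1))
print(np.allclose(b_rot, Q @ b @ Q.T))  # True
```

In the paper's architecture, the coefficients g_n are the outputs of a network fed with the tensor invariants, and the multiplicative layer forms the sum over the basis; the sketch above fixes the g_n to constants purely to demonstrate the equivariance property.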