Prospects of federated machine learning in fluid dynamics

Omer San, Suraj Pawar, Adil Rasheed
Physics-based models have long been the mainstream approach to building predictive models in fluid dynamics. In recent years, machine learning has offered the fluid dynamics community a renaissance, driven by rapid developments in data science, processing hardware, neural-network technologies, and sensor deployments. So far, most machine learning applications in fluid dynamics have followed a standard process that requires centralizing the training data on a designated machine or in a…




Federated Optimization in Heterogeneous Networks

This work introduces a framework, FedProx, to tackle heterogeneity in federated networks, and provides convergence guarantees for this framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.
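To make the heterogeneity handling concrete, below is a minimal sketch of the proximal local update that characterizes FedProx: each client minimizes its own loss plus a proximal term that penalizes drift from the current global model. The function name, `grad_fn` interface, and default hyperparameters are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def fedprox_local_step(w, w_global, grad_fn, lr=0.01, mu=0.1):
    """One local gradient step on the FedProx objective
        F_k(w) + (mu/2) * ||w - w_global||^2,
    where F_k is the client's local loss. The proximal term keeps
    heterogeneous clients from drifting far from the global model,
    which underlies the convergence guarantees under non-IID data."""
    grad = grad_fn(w) + mu * (w - w_global)  # local gradient + proximal pull
    return w - lr * grad
```

With `mu = 0` this reduces to a plain local SGD step, recovering FedAvg-style local training.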

Nonlinear proper orthogonal decomposition for convection-dominated flows

This work proposes a nonlinear proper orthogonal decomposition (POD) framework, an end-to-end Galerkin-free model that combines autoencoders with long short-term memory networks for the dynamics; it not only improves accuracy but also significantly reduces the computational cost of training and testing.
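The pipeline described above can be sketched schematically: encode a flow snapshot into latent coordinates (autoencoder role), march the latent state forward in time (LSTM role), and decode back to physical space, with no Galerkin projection involved. The `encode`/`evolve`/`decode` callables here are hypothetical stand-ins for the trained networks, not the paper's architecture.

```python
import numpy as np

def latent_rollout(snapshot, encode, evolve, decode, steps):
    """Galerkin-free reduced-order rollout: compress a snapshot to a
    latent state, step the latent dynamics forward, and reconstruct
    a physical-space trajectory of the requested length."""
    z = encode(snapshot)          # autoencoder encoder: full state -> latent
    trajectory = []
    for _ in range(steps):
        z = evolve(z)             # learned latent dynamics (LSTM role)
        trajectory.append(decode(z))  # decoder: latent -> full state
    return trajectory
```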

Deep learning to discover and predict dynamics on an inertial manifold

A data-driven framework is developed to represent chaotic dynamics on an inertial manifold and applied to solutions of the Kuramoto-Sivashinsky equation, reproducing very well key dynamic and statistical features of the attractor.

Personalized Federated Learning: A Meta-Learning Approach

A personalized variant of the well-known Federated Averaging algorithm is studied, and its performance is characterized by the closeness of the underlying distributions of user data, measured in terms of distribution distances such as the Total Variation and 1-Wasserstein metrics.

Adaptive Personalized Federated Learning

Information-theoretically, it is proved that a mixture of local and global models can reduce the generalization error; a communication-reduced bilevel optimization method is also proposed, which reduces the communication rounds to $O(\sqrt{T})$ and achieves a convergence rate of $O(1/T)$ with some residual error.
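The local/global mixture at the heart of this approach is a convex combination of the two models, with a per-client mixing weight. A minimal sketch, with a hypothetical function name:

```python
import numpy as np

def personalized_model(w_local, w_global, alpha):
    """Adaptive personalization via model interpolation:
        v = alpha * w_local + (1 - alpha) * w_global.
    alpha = 1 keeps a purely local model; alpha = 0 falls back to the
    shared global model. Tuning alpha per client trades off fitting
    local data against borrowing strength from the federation."""
    return alpha * w_local + (1 - alpha) * w_global
```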

Fair Resource Allocation in Federated Learning

This work proposes q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair accuracy distribution across devices in federated networks.
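The q-FFL objective reweights clients by raising each client's loss to the power q+1, so that high-loss devices dominate the objective as q grows. A minimal sketch of evaluating that objective (per-client sampling weights p_k omitted for brevity):

```python
def q_ffl_objective(client_losses, q):
    """q-FFL objective: (1/(q+1)) * sum_k F_k^{q+1}.
    q = 0 recovers the standard (FedAvg-style) average-loss objective;
    larger q amplifies clients with higher loss, encouraging a more
    uniform accuracy distribution across devices."""
    return sum(loss ** (q + 1) for loss in client_losses) / (q + 1)
```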

Memory embedded non-intrusive reduced order modeling of non-ergodic flows

This work proposes a long short-term memory (LSTM) neural network architecture together with a principal interval decomposition (PID) framework as an enabler to account for localized modal deformation, a key element in accurate reduced order modeling of convective flows.

Communication-Efficient Learning of Deep Networks from Decentralized Data

This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
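The iterative model averaging described above reduces, in its aggregation step, to a sample-size-weighted average of client parameters. A minimal sketch of that step (the function name and list-of-arrays parameter layout are illustrative assumptions):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each parameter tensor across
    clients, weighting by the number of local training samples.

    client_weights: one list of np.ndarray parameter tensors per client
    client_sizes:   number of local samples per client
    """
    total = sum(client_sizes)
    return [
        sum((n / total) * w[k] for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]
```

Each communication round, clients train locally, send updated weights, and the server replaces the global model with this average before broadcasting it back.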

Digital Twin: Values, Challenges and Enablers From a Modeling Perspective

This work reviews the recent status of methodologies and techniques for constructing digital twins, mostly from a modeling perspective, providing detailed coverage of the current challenges and enabling technologies along with recommendations and reflections for various stakeholders.