Corpus ID: 221761132

Integration of AI and mechanistic modeling in generative adversarial networks for stochastic inverse problems

@article{Parikh2020IntegrationOA,
  title={Integration of AI and mechanistic modeling in generative adversarial networks for stochastic inverse problems},
  author={Jaimit Parikh and J. Kozloski and V. Gurev},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.08267}
}
The problem of finding distributions of input parameters for deterministic mechanistic models to match distributions of model outputs to stochastic observations, i.e., the "Stochastic Inverse Problem" (SIP), encompasses a range of common tasks across a variety of scientific disciplines. Here, we demonstrate that SIP can be reformulated as a constrained optimization problem and adapted for applications in intervention studies to simultaneously infer model input parameters for two sets of…
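
The GAN formulation can be pictured concretely. Below is a minimal sketch under illustrative assumptions (a toy differentiable mechanistic model and arbitrary network sizes, none taken from the paper): a generator proposes input-parameter samples, the fixed deterministic model maps them to outputs, and a discriminator pushes the simulated output distribution toward the observed one.

```python
# Minimal sketch of a GAN formulation of the Stochastic Inverse Problem (SIP).
# The toy "mechanistic model" and network sizes are illustrative assumptions.
import torch
import torch.nn as nn

def mechanistic_model(theta):
    # Toy deterministic model: y(t) = theta1 * exp(-theta2 * t) on a fixed grid.
    t = torch.linspace(0.0, 1.0, 16)
    return theta[:, :1] * torch.exp(-theta[:, 1:2] * t)

gen = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2), nn.Softplus())
disc = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# "Observations": model outputs under a hidden input-parameter distribution.
true_theta = torch.tensor([[2.0, 3.0]]) + 0.1 * torch.randn(512, 2)
observed = mechanistic_model(true_theta)

for step in range(2000):
    z = torch.randn(128, 4)
    fake = mechanistic_model(gen(z))            # simulate from proposed parameters
    real = observed[torch.randint(0, 512, (128,))]
    # Discriminator: distinguish observed outputs from simulated ones.
    d_loss = (bce(disc(real), torch.ones(128, 1))
              + bce(disc(fake.detach()), torch.zeros(128, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: gradients flow back through the differentiable model.
    g_loss = bce(disc(mechanistic_model(gen(torch.randn(128, 4)))), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Note that this sketch assumes the mechanistic model is differentiable so the generator's gradients can pass through it; non-differentiable simulators would need a different coupling.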

Citations

Parameter Estimation with Dense and Convolutional Neural Networks Applied to the FitzHugh-Nagumo ODE

TLDR
This work employs deep neural networks with dense and convolutional layers to solve an inverse problem: estimating the parameters of the FitzHugh-Nagumo model, a nonlinear system of ordinary differential equations (ODEs).
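
As a rough illustration of that setup, the sketch below simulates FitzHugh-Nagumo trajectories under sampled parameters and trains a small 1-D convolutional network to regress the parameters back from the traces. The priors, step sizes, and architecture are illustrative assumptions, not the paper's.

```python
# Hedged sketch: learn FitzHugh-Nagumo parameters from simulated time series.
import numpy as np
import torch
import torch.nn as nn

def simulate_fhn(a, b, tau, n_steps=200, dt=0.1):
    # Forward-Euler integration of the FitzHugh-Nagumo ODEs with constant input.
    v, w, out = 0.0, 0.0, []
    for _ in range(n_steps):
        dv = v - v**3 / 3 - w + 0.5
        dw = (v + a - b * w) / tau
        v, w = v + dt * dv, w + dt * dw
        out.append(v)
    return np.array(out, dtype=np.float32)

# Training set: parameters drawn from assumed priors; inputs are v(t) traces.
params = np.random.uniform([0.6, 0.7, 10.0], [0.8, 0.9, 14.0], size=(1024, 3))
traces = np.stack([simulate_fhn(*p) for p in params])[:, None, :]  # (N, 1, T)

net = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7), nn.ReLU(),
    nn.Conv1d(8, 8, kernel_size=7), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 3),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x, y = torch.from_numpy(traces), torch.from_numpy(params.astype(np.float32))
for epoch in range(200):
    loss = nn.functional.mse_loss(net(x), y)   # regress parameters from traces
    opt.zero_grad(); loss.backward(); opt.step()
```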

GATSBI: Generative Adversarial Training for Simulation-Based Inference

TLDR
GATSBI opens up opportunities for leveraging advances in GANs to perform Bayesian inference on high-dimensional simulation-based models, and it is shown how GATSBI can be extended to sequential posterior estimation that focuses on individual observations.
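
A minimal sketch of the idea, with a toy simulator and illustrative dimensions: the generator is conditioned on an observation x and emits parameter samples, while the discriminator judges (theta, x) pairs, so the trained generator acts as an amortized posterior sampler.

```python
# Hedged sketch of conditional-GAN simulation-based inference (GATSBI-style).
import torch
import torch.nn as nn

def simulator(theta):
    # Toy simulator: a noisy observation of the parameters themselves.
    return theta + 0.1 * torch.randn_like(theta)

gen = nn.Sequential(nn.Linear(2 + 2, 64), nn.ReLU(), nn.Linear(64, 2))
disc = nn.Sequential(nn.Linear(2 + 2, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    theta = torch.randn(128, 2)                 # draw parameters from the prior
    x = simulator(theta)                        # joint samples (theta, x)
    z = torch.randn(128, 2)
    theta_fake = gen(torch.cat([z, x], dim=1))  # posterior samples given x
    real, fake = torch.cat([theta, x], 1), torch.cat([theta_fake, x], 1)
    d_loss = (bce(disc(real), torch.ones(128, 1))
              + bce(disc(fake.detach()), torch.zeros(128, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_loss = bce(disc(torch.cat([gen(torch.cat([torch.randn(128, 2), x], 1)), x], 1)),
                 torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```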

Two heads are better than one: current landscape of integrating QSP and machine learning

TLDR
The integration of QSP and ML is still at an early stage, moving from evaluation of available technical tools to the building of case studies; this review serves as a foundation for future codification of best practices.

Review of applications and challenges of quantitative systems pharmacology modeling and machine learning for heart failure

TLDR
The combination of ML/DL and QSP modeling is an emerging direction in understanding heart failure (HF) and in the clinical development of new therapies; remaining challenges and future perspectives in the field are discussed.

References

Showing 1-10 of 38 references

BayesFlow: Learning Complex Stochastic Models With Invertible Neural Networks

TLDR
It is argued that BayesFlow provides a general framework for building amortized Bayesian parameter estimation machines for any forward model from which data can be simulated, and that it is applicable to modeling scenarios where standard inference techniques with handcrafted summary statistics fail.
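
The core mechanics can be sketched with a single conditional affine coupling layer (a real invertible network stacks many such layers with permutations between them); the simulator and sizes below are illustrative assumptions.

```python
# Hedged sketch of amortized inference with an invertible network, BayesFlow-style:
# train on simulated (theta, x) pairs by maximum likelihood, then invert the flow
# to sample posteriors for new observations.
import torch
import torch.nn as nn

class Coupling(nn.Module):
    # One affine coupling over 2-D theta, conditioned on the observation x.
    # A realistic flow would stack several couplings with permutations.
    def __init__(self, x_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1 + x_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, theta, x):                # theta -> z, with log|det J|
        t1, t2 = theta[:, :1], theta[:, 1:]
        s, b = self.net(torch.cat([t1, x], 1)).chunk(2, dim=1)
        return torch.cat([t1, t2 * torch.exp(s) + b], 1), s.squeeze(1)

    def inverse(self, z, x):                    # z -> theta, for sampling
        z1, z2 = z[:, :1], z[:, 1:]
        s, b = self.net(torch.cat([z1, x], 1)).chunk(2, dim=1)
        return torch.cat([z1, (z2 - b) * torch.exp(-s)], 1)

def simulator(theta):
    return theta + 0.1 * torch.randn_like(theta)    # toy forward model

flow = Coupling()
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for step in range(2000):
    theta = torch.randn(256, 2)                 # prior draws
    x = simulator(theta)
    z, log_det = flow(theta, x)
    # Maximize log q(theta | x) = log N(z; 0, I) + log|det J| (constants dropped).
    loss = (0.5 * (z ** 2).sum(1) - log_det).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Amortized posterior sampling for a new observation:
with torch.no_grad():
    x_obs = torch.tensor([[0.3, -1.2]]).repeat(1000, 1)
    posterior_samples = flow.inverse(torch.randn(1000, 2), x_obs)
```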

Flow-GAN: Bridging implicit and prescribed learning in generative models

TLDR
This work proposes Flow-GANs, generative adversarial networks whose generator is specified as a normalizing flow model that can perform exact likelihood evaluation, and empirically shows the benefits of Flow-GANs on the MNIST and CIFAR-10 datasets in learning generative models that attain low generalization error in terms of log-likelihood while generating high-quality samples.

Inference for Deterministic Simulation Models: The Bayesian Melding Approach

TLDR
A modified approach called Bayesian melding is proposed, which takes full account of information and uncertainty about both model inputs and outputs while avoiding the Borel paradox; it is implemented here by posterior simulation using the sampling-importance-resampling (SIR) algorithm.
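
The SIR step itself is simple to sketch; the toy model, prior, and noise scale below are illustrative assumptions.

```python
# Hedged sketch of sampling-importance-resampling (SIR) as used to implement
# Bayesian melding: draw inputs from their prior, push them through the
# deterministic model, weight by the likelihood of the outputs, and resample.
import numpy as np

rng = np.random.default_rng(0)

def model(theta):
    return theta ** 2                        # toy deterministic model M(theta)

theta = rng.normal(1.0, 0.5, size=100_000)   # prior draws on the input
phi = model(theta)                           # induced output values
# Likelihood of each output against the observed data (assumed Gaussian noise).
y_obs = 1.2
weights = np.exp(-0.5 * ((phi - y_obs) / 0.1) ** 2)
weights /= weights.sum()
# Resample inputs in proportion to the output weights.
idx = rng.choice(theta.size, size=10_000, p=weights)
posterior_theta = theta[idx]
print(posterior_theta.mean(), posterior_theta.std())
```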

Training deep neural density estimators to identify mechanistic models of neural dynamics

TLDR
A machine learning tool is presented that uses density estimators based on deep neural networks, trained using model simulations, to infer data-compatible parameters for a wide range of mechanistic models; this tool will help close the gap between data-driven and theory-driven models of neural dynamics.
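
One common form of such a density estimator is a mixture density network trained on simulated (parameter, data) pairs; the sketch below uses a toy simulator and illustrative sizes, not the paper's models.

```python
# Hedged sketch of neural posterior estimation with a mixture density network:
# the network maps an observation x to the weights, means, and scales of a
# Gaussian mixture over the parameter theta.
import math
import torch
import torch.nn as nn

K, D = 3, 1                                  # mixture components, theta dimension

def simulator(theta):
    return torch.sin(theta) + 0.1 * torch.randn_like(theta)   # toy mechanistic model

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 3 * K))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(3000):
    theta = 2.0 * torch.rand(256, D) - 1.0   # prior draws on [-1, 1]
    x = simulator(theta)
    logit_pi, mu, log_sig = net(x).chunk(3, dim=1)
    # Negative log-likelihood of theta under the predicted mixture.
    log_comp = (-0.5 * ((theta - mu) / log_sig.exp()) ** 2
                - log_sig - 0.5 * math.log(2 * math.pi))
    loss = -torch.logsumexp(torch.log_softmax(logit_pi, dim=1) + log_comp, dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```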

VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning

TLDR
VEEGAN is introduced, featuring a reconstructor network that reverses the action of the generator by mapping from data to noise; it resists mode collapse to a far greater extent than other recent GAN variants and produces more realistic samples.
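
A minimal sketch of the reconstructor idea, with toy two-mode data and illustrative sizes: penalizing the latent-space reconstruction error ||z - R(G(z))||^2 discourages many distinct codes from collapsing onto a single mode.

```python
# Hedged sketch in the spirit of VEEGAN: a reconstructor R maps data back to
# the latent space, and the discriminator judges joint (latent, data) pairs.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
R = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
opt_gr = torch.optim.Adam(list(G.parameters()) + list(R.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n):
    # Toy multimodal data: a mixture of two Gaussians at (-2,-2) and (2,2).
    c = torch.randint(0, 2, (n, 1)).float() * 4 - 2
    return c + 0.2 * torch.randn(n, 2)

for step in range(3000):
    z, x = torch.randn(128, 2), real_batch(128)
    # Discriminator works on joint (latent, data) pairs.
    d_loss = (bce(D(torch.cat([R(x).detach(), x], 1)), torch.ones(128, 1))
              + bce(D(torch.cat([z, G(z)], 1).detach()), torch.zeros(128, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    z = torch.randn(128, 2)
    recon = ((z - R(G(z))) ** 2).mean()       # latent-space reconstruction term
    g_loss = bce(D(torch.cat([z, G(z)], 1)), torch.ones(128, 1)) + recon
    opt_gr.zero_grad(); g_loss.backward(); opt_gr.step()
```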

Prescribed Generative Adversarial Networks

TLDR
The prescribed GAN (PresGAN) is developed and found to mitigate mode collapse, generate samples with high perceptual quality, and reduce the performance gap in predictive log-likelihood between traditional GANs and variational autoencoders (VAEs).

f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization

TLDR
It is shown that any f-divergence can be used for training generative neural samplers, and the benefits of various choices of divergence function for training complexity and the quality of the obtained generative models are discussed.
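
For one concrete instance, the sketch below uses the KL divergence, whose convex conjugate is f*(t) = exp(t - 1): the critic maximizes the variational lower bound E_p[T(x)] - E_q[f*(T(x))] and the generator minimizes it. The data, sizes, and stability clamp are illustrative choices.

```python
# Hedged sketch of the f-GAN objective, specialized to the KL divergence.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
T = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_t = torch.optim.Adam(T.parameters(), lr=1e-4)

# Conjugate of f(u) = u log u; clamped for numerical stability (a pragmatic
# deviation from the bare formula).
f_star = lambda t: torch.exp(torch.clamp(t, max=10.0) - 1.0)

for step in range(3000):
    x_real = 1.5 + 0.3 * torch.randn(256, 1)
    x_fake = G(torch.randn(256, 2))
    # Critic ascends the variational lower bound on D_f(P || Q).
    t_loss = -(T(x_real).mean() - f_star(T(x_fake.detach())).mean())
    opt_t.zero_grad(); t_loss.backward(); opt_t.step()
    # Generator descends the same bound (only the f* term depends on G).
    g_loss = -f_star(T(G(torch.randn(256, 2)))).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```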

Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models

TLDR
Flow-GAN, a generative adversarial network for which exact likelihood evaluation is possible, is proposed, supporting both adversarial and maximum likelihood training; hybrid training is shown to attain high held-out likelihoods while retaining visual fidelity in the generated samples.
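
The hybrid objective can be sketched with a single affine coupling layer standing in for a full flow; the data, sizes, and mixing weight lam are illustrative assumptions.

```python
# Hedged sketch of the Flow-GAN idea: an invertible-flow generator gives exact
# log-likelihoods, so adversarial and maximum-likelihood losses can be mixed.
import torch
import torch.nn as nn

class Coupling(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))
    def forward(self, z):                     # z -> x (sampling direction)
        z1, z2 = z[:, :1], z[:, 1:]
        s, b = self.net(z1).chunk(2, 1)
        return torch.cat([z1, z2 * s.exp() + b], 1)
    def log_prob(self, x):                    # exact log p(x) via the inverse
        x1, x2 = x[:, :1], x[:, 1:]
        s, b = self.net(x1).chunk(2, 1)
        z = torch.cat([x1, (x2 - b) * (-s).exp()], 1)
        return -0.5 * (z ** 2).sum(1) - s.squeeze(1)   # base N(0, I), constants dropped

flow = Coupling()
disc = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(flow.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce, lam = nn.BCEWithLogitsLoss(), 0.1

for step in range(2000):
    x_real = torch.randn(128, 2) * torch.tensor([0.5, 2.0]) + 1.0
    x_fake = flow(torch.randn(128, 2))
    d_loss = (bce(disc(x_real), torch.ones(128, 1))
              + bce(disc(x_fake.detach()), torch.zeros(128, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Hybrid objective: adversarial term plus exact negative log-likelihood.
    g_loss = (bce(disc(flow(torch.randn(128, 2))), torch.ones(128, 1))
              - lam * flow.log_prob(x_real).mean())
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```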

Adam: A Method for Stochastic Optimization

TLDR
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
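
The update rule itself is compact enough to sketch from scratch; the hyperparameter defaults follow the paper, while the quadratic objective is an illustrative example.

```python
# From-scratch sketch of the Adam update: exponential moving averages of the
# gradient (first moment) and squared gradient (second moment), with bias
# correction, drive a per-coordinate adaptive step.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad             # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2        # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)                # bias corrections
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([5.0, -3.0])
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 5001):
    grad = 2 * theta                         # gradient of f(theta) = ||theta||^2
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)                                 # approaches the minimizer at 0
```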

Stabilizing Training of Generative Adversarial Networks through Regularization

TLDR
This work proposes a new regularization approach with low computational cost that yields a stable GAN training procedure, and demonstrates the effectiveness of this regularizer across several architectures trained on common benchmark image generation tasks.
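
The cited regularizer is more elaborate than this, but the sketch below conveys the mechanism with a simplified gradient-norm penalty on real samples (an R1-style variant); the penalty weight and toy data are assumptions.

```python
# Hedged sketch of gradient-norm regularization for GAN training: penalize the
# squared norm of the discriminator's input gradient to smooth its decision
# boundary and stabilize training.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce, gamma = nn.BCEWithLogitsLoss(), 10.0

for step in range(2000):
    x_real = (torch.randn(128, 2) + torch.tensor([2.0, 0.0])).requires_grad_(True)
    x_fake = G(torch.randn(128, 2))
    d_real, d_fake = D(x_real), D(x_fake.detach())
    # Squared gradient norm of D at the real samples.
    grad = torch.autograd.grad(d_real.sum(), x_real, create_graph=True)[0]
    penalty = grad.pow(2).sum(1).mean()
    d_loss = (bce(d_real, torch.ones(128, 1)) + bce(d_fake, torch.zeros(128, 1))
              + 0.5 * gamma * penalty)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    g_loss = bce(D(G(torch.randn(128, 2))), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```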