Longitudinal Deep Kernel Gaussian Process Regression

@article{Liang2021LongitudinalDK,
  title={Longitudinal Deep Kernel Gaussian Process Regression},
  author={Junjie Liang and Yanting Wu and Dongkuan Xu and Vasant G Honavar},
  journal={ArXiv},
  year={2021},
  volume={abs/2005.11770}
}
Gaussian processes offer an attractive framework for predictive modeling from longitudinal data, i.e., irregularly sampled, sparse observations from a set of individuals over time. However, such methods have two key shortcomings: (i) they rely on ad hoc heuristics or expensive trial and error to choose effective kernels, and (ii) they fail to handle multilevel correlation structure in the data. We introduce Longitudinal deep kernel Gaussian process regression (L-DKGPR) to overcome these…
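
The multilevel correlation structure mentioned in (ii) can be illustrated with a simple additive kernel: a population-level component shared across all individuals plus an individual-level component that is active only within the same individual. The sketch below is a hedged NumPy illustration of that general idea, not the L-DKGPR model itself; the kernel forms and hyperparameter values are assumptions chosen for exposition.

```python
import numpy as np

def rbf(t1, t2, lengthscale, variance):
    """Squared-exponential kernel on scalar time stamps."""
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def multilevel_kernel(times, ids, ell_pop=5.0, var_pop=1.0,
                      ell_ind=1.0, var_ind=0.5):
    """Population-level kernel shared by everyone, plus an
    individual-level kernel active only within the same subject."""
    K_pop = rbf(times, times, ell_pop, var_pop)
    same = (ids[:, None] == ids[None, :]).astype(float)
    K_ind = rbf(times, times, ell_ind, var_ind) * same
    return K_pop + K_ind

# Irregularly sampled observations from two individuals.
times = np.array([0.0, 1.3, 4.1, 0.5, 2.2, 3.9])
ids   = np.array([0,   0,   0,   1,   1,   1])
K = multilevel_kernel(times, ids)
print(K.shape)  # (6, 6); within-subject entries get the extra component
```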

Improving the pulsed neutron-gamma density method with machine learning regression algorithms

EDGE: Explaining Deep Reinforcement Learning Policies

TLDR
A novel self-explainable model is proposed that augments a Gaussian process with a customized kernel function and an interpretable predictor and can predict an agent’s final rewards from its game episodes and extract time step importance within episodes as strategy-level explanations for that agent.

Feeding the machine: Challenges to reproducible predictive modeling in resting-state connectomics

TLDR
This critical review of predictive models (e.g., classifiers) trained using machine learning (ML) to assist in the interpretation of functional neuroimaging data covers 250 studies published using ML and resting-state functional MRI to infer various dimensions of the human functional connectome.

SrVARM: State Regularized Vector Autoregressive Model for Joint Learning of Hidden State Transitions and State-Dependent Inter-Variable Dependencies from Multi-variate Time Series

TLDR
The State-Regularized Vector Autoregressive Model (SrVARM) is introduced, which combines a state-regularized recurrent neural network, which learns the dynamics of transitions between discrete hidden states, with an augmented autoregressive model that captures the inter-variable dependencies in each state using a state-dependent directed acyclic graph (DAG).
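
The state-dependent autoregressive idea can be illustrated with a toy simulation: a discrete hidden state selects which transition matrix drives the next observation. This is a hedged sketch of the generative form only, not SrVARM's learning procedure or DAG estimation; the matrices and transition probabilities below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hidden states, each with its own (stable) transition matrix.
A = [np.array([[0.8, 0.1], [0.0, 0.7]]),   # state 0: one coupling direction
     np.array([[0.5, 0.0], [0.4, 0.9]])]   # state 1: reversed dependency
P = np.array([[0.95, 0.05],                # hidden-state transition probabilities
              [0.10, 0.90]])

def simulate(T=200, noise=0.1):
    s, x = 0, np.zeros(2)
    states, xs = [], []
    for _ in range(T):
        s = rng.choice(2, p=P[s])                      # hidden state transition
        x = A[s] @ x + noise * rng.standard_normal(2)  # state-dependent VAR(1) step
        states.append(s)
        xs.append(x)
    return np.array(states), np.array(xs)

states, xs = simulate()
print(states[:10], xs.shape)
```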

Functional Autoencoders for Functional Data Representation Learning

TLDR
Functional autoencoders are proposed, which generalize neural network autoencoders so as to learn non-linear representations of functional data, and a functional gradient based algorithm for training functional autoencoders is derived from first principles.

Explainable Multivariate Time Series Classification: A Deep Neural Network Which Learns to Attend to Important Variables As Well As Time Intervals

TLDR
This work introduces a novel, modular, convolution-based feature extraction and attention mechanism that simultaneously identifies the variables as well as time intervals which determine the classifier output.
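
The idea of attending over both variables and time intervals can be rendered generically: score every (time step, variable) cell of a multivariate series, then marginalize softmax weights along each axis. This is a toy sketch of the general attention idea, not the paper's convolution-based architecture; all weights below are random placeholders.

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(5)
X = rng.standard_normal((50, 6))        # T=50 time steps, V=6 variables
S = X * rng.standard_normal(6)          # toy per-cell relevance scores

time_attn = softmax(S.sum(axis=1), axis=0)   # one weight per time step
var_attn  = softmax(S.sum(axis=0), axis=0)   # one weight per variable
context = (time_attn[:, None] * X * var_attn[None, :]).sum(axis=0)
print(time_attn.shape, var_attn.shape, context.shape)
```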

References

Showing 1–10 of 39 references

LMLFM: Longitudinal Multi-Level Factorization Machine

TLDR
Results of experiments are presented which show that LMLFM outperforms the state-of-the-art methods in terms of predictive accuracy, variable selection ability, and scalability to data with a large number of variables.
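
A factorization machine, the building block named in the title, scores an input as a linear term plus pairwise feature interactions factorized through low-rank embeddings. Below is a generic FM prediction sketch (not LMLFM's longitudinal, multi-level extension), using the standard O(kd) rewriting of the pairwise sum; all parameter values are random placeholders.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Factorization machine: y = w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j.
    The interaction term uses the identity
    sum_{i<j} <V_i, V_j> x_i x_j = 0.5 * sum_f [(sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2]."""
    linear = w0 + w @ x
    inter = 0.5 * (((x @ V) ** 2).sum() - ((x ** 2) @ (V ** 2)).sum())
    return linear + inter

rng = np.random.default_rng(6)
d, k = 8, 3                         # d features, rank-k embeddings
x = rng.standard_normal(d)
print(fm_predict(x, 0.1, rng.standard_normal(d), rng.standard_normal((d, k))))
```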

Stochastic Variational Deep Kernel Learning

TLDR
An efficient form of stochastic variational inference is derived which leverages local kernel interpolation, inducing points, and structure-exploiting algebra within this framework to enable classification, multi-task learning, additive covariance structures, and stochastic gradient training.
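
The inducing-point idea mentioned in the summary can be sketched with a plain Nyström approximation: a small set of m inducing inputs stands in for the full n-by-n kernel matrix via a rank-m factorization. This is a generic low-rank sketch under assumed hyperparameters, not the paper's stochastic variational scheme.

```python
import numpy as np

def rbf(x1, x2, ell=1.0):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 500))   # n = 500 training inputs
z = np.linspace(0, 10, 20)             # m = 20 inducing inputs

K_nm = rbf(x, z)
K_mm = rbf(z, z) + 1e-8 * np.eye(len(z))           # jitter for stability
K_approx = K_nm @ np.linalg.solve(K_mm, K_nm.T)    # rank-m Nystrom approximation

K_full = rbf(x, x)
print(np.abs(K_full - K_approx).max())  # small for smooth kernels
```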

Bayesian Nonparametric Longitudinal Data Analysis

TLDR
A novel statistical model is developed that generalizes standard mixed models for longitudinal data, with flexible mean functions as well as combined compound symmetry (CS) and autoregressive (AR) covariance structures.
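
The combined compound-symmetry and autoregressive covariance mentioned above has a simple closed form: a constant within-subject covariance plus an AR(1) term that decays geometrically with the time lag. A minimal sketch, with made-up variance parameters:

```python
import numpy as np

def cs_plus_ar1(times, var_cs=0.5, var_ar=1.0, rho=0.8, var_noise=0.1):
    """Compound symmetry (constant within-subject covariance) plus an
    AR(1) component that decays geometrically with the time lag."""
    lag = np.abs(times[:, None] - times[None, :])
    K = var_cs + var_ar * rho ** lag
    return K + var_noise * np.eye(len(times))

times = np.array([0.0, 1.0, 2.0, 4.0])
print(np.round(cs_plus_ar1(times), 3))
```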

Deep Kernel Learning

We introduce scalable deep kernels, which combine the structural properties of deep learning architectures with the non-parametric flexibility of kernel methods. Specifically, we transform the inputs…
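
The construction this snippet begins to describe can be sketched directly: pass inputs through a neural feature map, then apply an ordinary base kernel to the transformed features. A minimal sketch in which the network weights are fixed random values; in deep kernel learning they are instead learned jointly with the kernel hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# A tiny fixed feature map standing in for a learned deep network.
W1, b1 = rng.standard_normal((16, 1)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((2, 16)), rng.standard_normal(2)

def g(x):
    """Two-layer tanh network mapping inputs to a 2-D feature space."""
    h = np.tanh(x @ W1.T + b1)
    return np.tanh(h @ W2.T + b2)

def deep_kernel(x1, x2, ell=1.0):
    """RBF base kernel on network features: k(x, x') = k_rbf(g(x), g(x'))."""
    f1, f2 = g(x1), g(x2)
    d2 = ((f1[:, None, :] - f2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

x = np.linspace(-3, 3, 5)[:, None]
print(np.round(deep_kernel(x, x), 3))
```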

An additive Gaussian process regression model for interpretable non-parametric analysis of longitudinal data

TLDR
LonGP is presented, an additive Gaussian process regression model specifically designed for statistical analysis of longitudinal experimental data that can model time-varying random effects and non-stationary signals, incorporate multiple kernel learning, and provide interpretable results for the effects of individual covariates and their interactions.

LMLFM: Longitudinal Multi-Level Factorization Machines

TLDR
Experimental results show that LMLFM outperforms the state-of-the-art longitudinal methods in terms of prediction accuracy, with a significantly lower false positive rate, while using substantially fewer computational resources.

Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP)

TLDR
A new structured kernel interpolation (SKI) framework is introduced, which generalises and unifies inducing point methods for scalable Gaussian processes (GPs) and naturally enables Kronecker and Toeplitz algebra for substantial additional gains in scalability.
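
The interpolation idea can be sketched in a few lines: evaluate the kernel only on a regular grid of inducing points and interpolate each data point's kernel row from its grid neighbors, giving K ≈ W K_UU W^T with a sparse W. A hedged toy version with dense linear-interpolation weights; the actual method additionally exploits Toeplitz/Kronecker structure in K_UU, which this sketch omits.

```python
import numpy as np

def rbf(x1, x2, ell=1.0):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 200))   # data points
u = np.linspace(0, 10, 50)             # regular grid of inducing points
h = u[1] - u[0]

# Linear interpolation weights: each row of W has two nonzeros.
W = np.zeros((len(x), len(u)))
idx = np.clip(((x - u[0]) / h).astype(int), 0, len(u) - 2)
frac = (x - u[idx]) / h
W[np.arange(len(x)), idx] = 1 - frac
W[np.arange(len(x)), idx + 1] = frac

K_ski = W @ rbf(u, u) @ W.T             # K ~ W K_UU W^T
print(np.abs(rbf(x, x) - K_ski).max())  # small when the grid is fine enough
```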

Gaussian Processes for Machine Learning

TLDR
The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics, and deals with the supervised learning problem for both regression and classification.
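
The standard GP regression equations from this book reduce to a few lines of linear algebra: with K the training kernel matrix and k* the train/test cross-covariances, the predictive mean is k*ᵀ(K + σ²I)⁻¹y. A minimal NumPy version of the usual Cholesky-based recipe, with assumed RBF kernel and noise level:

```python
import numpy as np

def rbf(x1, x2, ell=1.0):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_test, noise=0.1):
    """Exact GP regression posterior (latent-function mean and variance)
    via a Cholesky factorization of the noisy training kernel matrix."""
    K = rbf(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    K_s = rbf(x_train, x_test)
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf(x_test, x_test)) - (v ** 2).sum(0)
    return mean, var

x = np.linspace(0, 6, 20)
y = np.sin(x) + 0.1 * np.random.default_rng(4).standard_normal(20)
mu, var = gp_predict(x, y, np.linspace(0, 6, 5))
print(np.round(mu, 2), np.round(var, 3))
```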

When Gaussian Process Meets Big Data: A Review of Scalable GPs

TLDR
This article is devoted to reviewing state-of-the-art scalable GPs involving two main categories: global approximations that distill the entire data and local approximations that divide the data for subspace learning.

An interpretable probabilistic machine learning method for heterogeneous longitudinal studies

TLDR
It is demonstrated that the additive Gaussian process regression method outperforms previous longitudinal modeling approaches and provides useful novel features, including the ability to account for uncertainty in disease effect times as well as heterogeneity in their effects.