Gradient boosting for linear mixed models

@article{Griesbach2021GradientBF,
  title={Gradient boosting for linear mixed models},
  author={Colin Griesbach and Benjamin S{\"a}fken and Elisabeth Waldmann},
  journal={The International Journal of Biostatistics},
  year={2021},
  volume={17},
  pages={317--329}
}
Abstract

Gradient boosting, from the field of statistical learning, is widely known as a powerful framework for the estimation and selection of predictor effects in various regression models, adapting concepts from classification theory. Current boosting approaches also offer methods accounting for random effects and thus enable the prediction of mixed models for longitudinal and clustered data. However, these approaches exhibit several flaws resulting in unbalanced effect selection with falsely…
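The framework summarized above rests on component-wise (model-based) gradient boosting. A minimal sketch for a plain linear model, with illustrative toy data (this is the generic L2-boosting scheme, not the paper's mixed-model algorithm):

```python
import numpy as np

def l2_boost(X, y, n_steps=150, nu=0.1):
    """Component-wise L2 gradient boosting: each step fits every
    predictor to the current residuals by simple least squares and
    updates only the best-fitting one, shrunken by the step length nu."""
    n, p = X.shape
    beta = np.zeros(p)
    intercept = y.mean()
    resid = y - intercept
    for _ in range(n_steps):
        # univariate least-squares slope of each predictor vs. residuals
        coefs = (X * resid[:, None]).sum(axis=0) / (X ** 2).sum(axis=0)
        rss = ((resid[:, None] - X * coefs) ** 2).sum(axis=0)
        j = rss.argmin()              # component with the best residual fit
        beta[j] += nu * coefs[j]      # weak (shrunken) update
        resid = y - intercept - X @ beta
    return intercept, beta

# toy data: only the first three of eight predictors carry signal
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(scale=0.5, size=200)
intercept, beta = l2_boost(X, y)
```

Because each update touches only one predictor and is shrunken by nu, stopping the loop early performs variable selection: predictors never picked keep a coefficient of exactly zero.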


Bayesian Boosting for Linear Mixed Models
TLDR
A new inference method, "BayesBoost", is proposed that combines boosting and Bayesian inference for linear mixed models; it makes uncertainty estimation for the random effects possible and overcomes the shortcomings of Bayesian inference in giving precise and unambiguous guidelines for covariate selection by benefiting from boosting techniques.
Latent Gaussian Model Boosting
  • Fabio Sigrist
  • Computer Science
    IEEE transactions on pattern analysis and machine intelligence
  • 2022
TLDR
This article introduces a novel approach that combines boosting and latent Gaussian models in order to remedy the above-mentioned drawbacks and to leverage the advantages of both techniques.
Joint Modelling Approaches to Survival Analysis via Likelihood-Based Boosting Techniques
TLDR
The algorithm represents a novel boosting approach allowing for time-dependent covariates in survival analysis and additionally offers variable selection for joint models; it is evaluated via simulations and a real-world application modelling CD4 cell counts of patients infected with the human immunodeficiency virus (HIV).
Addressing cluster-constant covariates in mixed effects models via likelihood-based boosting techniques
TLDR
This work proposes an improved boosting algorithm for linear mixed models in which the random effects are properly weighted, disentangled from the fixed-effects updating scheme, and corrected for correlations with cluster-constant covariates, in order to improve the quality of estimates and additionally reduce the computational effort.

References

SHOWING 1-10 OF 55 REFERENCES
Addressing cluster-constant covariates in mixed effects models via likelihood-based boosting techniques
TLDR
This work proposes an improved boosting algorithm for linear mixed models in which the random effects are properly weighted, disentangled from the fixed-effects updating scheme, and corrected for correlations with cluster-constant covariates, in order to improve the quality of estimates and additionally reduce the computational effort.
Boosting Algorithms: Regularization, Prediction and Model Fitting
We present a statistical perspective on boosting. Special emphasis is given to estimating potentially complex parametric or nonparametric models, including generalized linear and additive models as…
Regression Shrinkage and Selection via the Lasso
TLDR
A new method for estimation in linear models, called the lasso, is proposed; it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
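As a concrete illustration of the constrained formulation above, here is a minimal cyclic coordinate-descent solver for the equivalent penalized form (the helper name `lasso_cd` and the toy data are illustrative, not from the paper):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for the penalized lasso:
    minimize 0.5 * ||y - X b||^2 + lam * sum_j |b_j|,
    the Lagrangian form of the constraint sum_j |b_j| <= t."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding predictor j
            r_j = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r_j
            # soft-thresholding sets weak coefficients exactly to zero
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

# toy data: a sparse true coefficient vector (only two active predictors)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=100)
b = lasso_cd(X, y, lam=15.0)
```

The soft-thresholding step is what distinguishes the lasso from ridge regression: coefficients whose correlation with the residual falls below lam are set exactly to zero, so the fit performs variable selection.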
On the behaviour of marginal and conditional AIC in linear mixed models
In linear mixed models, model selection frequently includes the selection of random effects. Two versions of the Akaike information criterion, aic , have been used, based either on the marginal or on…
Approximate inference in generalized linear mixed models
Statistical approaches to overdispersion, correlated errors, shrinkage estimation, and smoothing of regression relationships may be encompassed within the framework of the generalized linear mixed…
The evolution of boosting algorithms. From machine learning to statistical modelling.
TLDR
Statistical boosting algorithms have gained substantial interest during the last decade and offer a variety of options to address important research questions in modern biomedicine.
Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion
Many different methods have been proposed to construct nonparametric estimates of a smooth regression function, including local polynomial, (convolution) kernel and smoothing spline estimators. Each…
Likelihood ratio tests in linear mixed models with one variance component
Summary.  We consider the problem of testing null hypotheses that include restrictions on the variance component in a linear mixed model with one variance component and we derive the finite sample…
Fitting Linear Mixed-Effects Models Using lme4
Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most…
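For context, the model class lmer fits can be written in standard notation (not quoted from the snippet) as

```latex
y = X\beta + Zb + \varepsilon,
\qquad b \sim \mathcal{N}(0,\, \Sigma_\theta),
\qquad \varepsilon \sim \mathcal{N}(0,\, \sigma^2 I_n),
```

where $X$ and $Z$ are the fixed- and random-effects design matrices. ML maximizes the marginal likelihood of $y$ jointly over $(\beta, \theta, \sigma^2)$, while REML profiles out $\beta$ to reduce the downward bias in the variance-component estimates.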
The importance of knowing when to stop. A sequential stopping rule for component-wise gradient boosting.
TLDR
The newly developed sequential stopping rule improved purely AIC-based methods when used for the microarray-based prediction of the recurrence of metastases in stage II colon cancer patients, and outperformed earlier approaches when applied to both simulated and real data.
...