BACKGROUND When predictive survival models are built from high-dimensional data, there are often additional covariates, such as clinical scores, that must be included in the final model. While there are several techniques for fitting sparse high-dimensional survival models by penalized parameter estimation, none allows for explicit …
Statistical heterogeneity and small-study effects are 2 major issues affecting the validity of meta-analysis. In this article, we introduce the concept of a limit meta-analysis, which leads to shrunken, empirical Bayes estimates of study effects after allowing for small-study effects. This in turn leads to 3 model-based adjusted pooled treatment-effect …
MOTIVATION For analyzing high-dimensional time-to-event data with competing risks, tailored modeling techniques are required that consider the event of interest and the competing events at the same time, while also dealing with censoring. For low-dimensional settings, proportional hazards models for the subdistribution hazard have been proposed, but an …
MOTIVATION In the process of developing risk prediction models, various steps of model building and model selection are involved. If this process is not adequately controlled, overfitting may result in serious overoptimism leading to potentially erroneous conclusions. METHODS For right-censored time-to-event data, we estimate the prediction error for …
BACKGROUND The heterogeneity statistic I², interpreted as the percentage of variability due to heterogeneity between studies rather than sampling error, depends on precision, that is, on the size of the studies included. METHODS Based on a real meta-analysis, we simulate the effect of artificially 'inflating' the sample size under the random effects model. For a given …
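The dependence of I² on precision can be illustrated with Cochran's Q under a fixed-effect model: I² = max(0, (Q − df)/Q) × 100%, and halving each study's sampling variance (i.e. 'inflating' the sample sizes) doubles Q while leaving the effects unchanged. The following is a minimal sketch with hypothetical study data, not taken from the meta-analysis discussed in the article:

```python
import numpy as np

def i_squared(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic (in percent).

    effects: per-study effect estimates; variances: their sampling variances.
    """
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)        # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Hypothetical data: halving every variance (doubling precision) raises I^2
# even though the study effects themselves are unchanged.
effects = [0.10, 0.30, 0.25, 0.45]
variances = [0.04, 0.03, 0.05, 0.02]
_, i2_small = i_squared(effects, variances)
_, i2_large = i_squared(effects, [v / 2 for v in variances])
```

With these illustrative numbers, Q lies below its degrees of freedom at the original precision (so I² is clamped to 0%), but rises to roughly 30% once the variances are halved, demonstrating that I² reflects precision as well as between-study heterogeneity.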
In survival analysis with censored data, the mean squared error of prediction can be estimated by weighted averages of time-dependent residuals. Graf et al. (1999) suggested a robust weighting scheme based on the assumption that the censoring mechanism is independent of the covariates. We show consistency of the estimator. Furthermore, we show that a …
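The Graf et al. (1999) scheme reweights the time-dependent squared residuals by the inverse of the Kaplan–Meier estimate Ĝ of the censoring distribution (inverse probability of censoring weighting). Below is a minimal sketch with hypothetical data, not the authors' implementation; it ignores tie conventions and uses Ĝ(Tᵢ) rather than the left limit Ĝ(Tᵢ−) for simplicity:

```python
import numpy as np

def km_censoring(times, events):
    """Kaplan-Meier estimate of the censoring survival function G(t).

    Censoring (events == 0) is treated as the 'event'. Returns t -> G(t).
    """
    order = np.argsort(times)
    t_sorted, cens = times[order], (events[order] == 0)
    g, at_risk = 1.0, len(times)
    grid, surv = [], []
    for i, t in enumerate(t_sorted):
        if cens[i]:
            g *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        grid.append(t)
        surv.append(g)
    grid, surv = np.array(grid), np.array(surv)

    def G(t):
        idx = np.searchsorted(grid, t, side="right") - 1
        return 1.0 if idx < 0 else surv[idx]

    return G

def brier_ipcw(t, times, events, surv_pred):
    """IPCW-weighted Brier score at time t, in the spirit of Graf et al. (1999)."""
    G = km_censoring(times, events)
    score = 0.0
    for T_i, d_i, S_i in zip(times, events, surv_pred):
        if T_i <= t and d_i == 1:          # event before t: true status is 0
            score += (0.0 - S_i) ** 2 / G(T_i)
        elif T_i > t:                      # still at risk at t: true status is 1
            score += (1.0 - S_i) ** 2 / G(t)
        # observations censored before t receive weight 0
    return score / len(times)

# Hypothetical example: four subjects, no censoring, so all weights are 1.
times = np.array([1.0, 2.0, 3.0, 4.0])
events = np.array([1, 1, 1, 1])
bs = brier_ipcw(2.5, times, events, surv_pred=[0.5, 0.5, 0.5, 0.5])
```

With no censoring every weight is 1 and the constant prediction Ŝ(t) = 0.5 gives the familiar Brier score of 0.25.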
The bootstrap is a tool that allows for efficient evaluation of prediction performance of statistical techniques without having to set aside data for validation. This is especially important for high-dimensional data, e.g. arising from microarrays, because the number of observations is often limited there. To avoid overoptimism, the statistical …
Competing risks analysis considers time-to-first-event ('survival time') and the event type ('cause'), possibly subject to right-censoring. The cause-specific (i.e. event-specific) hazards completely determine the competing risk process, but simulation studies often fall back on the much criticized latent failure time model. Cause-specific hazard-driven simulation …
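Simulation driven by cause-specific hazards avoids latent failure times: one draws the event time from the all-cause hazard and then, at that time, draws the cause with probability proportional to each cause-specific hazard. A minimal sketch for two causes with constant (hypothetical) hazards and independent exponential censoring:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_competing_risks(n, alpha1, alpha2, cens_rate):
    """Simulate competing-risks data from constant cause-specific hazards.

    Event times follow the all-cause hazard alpha1 + alpha2; at the event
    time the cause is drawn with probability proportional to its hazard.
    Independent exponential right-censoring is applied afterwards.
    (alpha1, alpha2, cens_rate are hypothetical illustration values.)
    """
    all_cause = alpha1 + alpha2
    event_time = rng.exponential(1.0 / all_cause, size=n)
    cause = np.where(rng.random(n) < alpha1 / all_cause, 1, 2)
    cens_time = rng.exponential(1.0 / cens_rate, size=n)
    time = np.minimum(event_time, cens_time)
    status = np.where(event_time <= cens_time, cause, 0)  # 0 = censored
    return time, status

time, status = simulate_competing_risks(10_000, alpha1=0.06, alpha2=0.02,
                                        cens_rate=0.01)
```

With these hazards, roughly 75% of the observed events should be of cause 1 (0.06 / 0.08), which gives a quick sanity check on the simulated data.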
BACKGROUND There are several techniques for fitting risk prediction models to high-dimensional data arising from microarrays. However, the biological knowledge about relations between genes is only rarely taken into account. One recent approach incorporates pathway information, available, e.g., from the KEGG database, by augmenting the penalty term in …
Estimates of the prediction error play an important role in the development of statistical methods and models, and in their applications. We adapt the resampling tools of Efron and Tibshirani (1997, Journal of the American Statistical Association 92, 548-560) to survival analysis with right-censored event times. We find that flexible rules, like artificial …
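The Efron–Tibshirani resampling idea combines the optimistic resubstitution (apparent) error with the pessimistic out-of-bag bootstrap error, e.g. as 0.368 × apparent + 0.632 × out-of-bag; the .632+ refinement additionally estimates a no-information error rate to adapt the weights. Below is a minimal sketch of the plain .632 rule for a simple 1-nearest-neighbour classifier with hypothetical data, not the survival-specific adaptation discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

def nn1_predict(X_train, y_train, X_test):
    """1-nearest-neighbour prediction (squared Euclidean distance)."""
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return y_train[d.argmin(axis=1)]

def err632(X, y, n_boot=50):
    """Plain .632 bootstrap estimate of the misclassification error."""
    n = len(y)
    # Resubstitution (apparent) error: evaluate on the training data itself.
    apparent = (nn1_predict(X, y, X) != y).mean()
    oob_errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # bootstrap sample (with replacement)
        oob = np.setdiff1d(np.arange(n), idx)   # out-of-bag observations
        if oob.size == 0:
            continue
        pred = nn1_predict(X[idx], y[idx], X[oob])
        oob_errs.append((pred != y[oob]).mean())
    return 0.368 * apparent + 0.632 * float(np.mean(oob_errs))

# Hypothetical, well-separated two-class data.
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])
y = np.repeat([0, 1], 30)
e632 = err632(X, y)
```

Because no separate validation set is held out, every observation contributes both to fitting (when drawn into a bootstrap sample) and to error estimation (when out of bag), which is the property that makes the approach attractive for small, high-dimensional data sets.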