Bootstrap Methods and Their Application

  title={Bootstrap Methods and Their Application},
  author={Debashis Kushary},
  pages={216--217}
…sive models further. It gives additional illustrations of the performance of selection criteria in small samples (e.g., with respect to varying characteristics such as sample size and parameter structure), in large samples (e.g., with respect to the degree of overfitting allowed), and on real data. The book examines many selection criteria for a wide range of models, from traditional models to more recently developed classes. Currently, however, there appear to be few…

Asymptotics of the Bootstrap via Stability with Applications to Inference with Model Selection
One of the most commonly used methods for forming confidence intervals is the empirical bootstrap, which is especially expedient when the limiting distribution of the estimator is unknown. …
Bootstrapping Regression Models
Bootstrapping is a general approach to statistical inference based on building a sampling distribution for a statistic by resampling from the data at hand. The term 'bootstrapping' is due to Efron…
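The resampling idea described above can be sketched in a few lines. This is a minimal percentile-bootstrap illustration (function name and sample data are hypothetical, not from any of the papers listed here): resample with replacement, recompute the statistic on each resample, and take empirical quantiles of the replicates as interval endpoints.

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic.

    Resamples `data` with replacement n_boot times, computes `stat`
    on each resample, and returns the empirical (alpha/2, 1 - alpha/2)
    quantiles of the bootstrap replicates.
    """
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical sample; the interval should bracket the sample mean.
sample = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4, 5.8, 4.7]
low, high = bootstrap_ci(sample)
```

The percentile interval is only one of several bootstrap interval constructions (basic, studentized, BCa); it is shown here because it follows most directly from the resampling description.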
Sharpening Wald-type inference in robust regression for small samples
A simulation study and a real-data example show that the SMDM estimate performs better for small n/p, and that the use of the new scale estimate and of a slowly redescending ψ-function is crucial for adequate inference.
Data-Driven Model Evaluation: A Test for Revealed Performance
When comparing two competing approximate models under a particular loss function, the one with the smallest 'expected true error' for that loss is expected to lie closest to the underlying…
Robust Bootstrap with Non Random Weights Based on the Influence Function
The existence of outliers in a sample is an obvious problem, which can become worse when the usual bootstrap is applied, because some resamples may have a higher contamination level than the…
On the choice of the number of Monte Carlo iterations and bootstrap replicates in Empirical Best Prediction
Empirical Best Predictors (EBPs) are widely used for small area estimation. In the case of longitudinal surveys, this class of predictors can be used to predict any given population or…
hoa: An R Package Bundle for Higher Order Likelihood Inference
The likelihood function is the basic ingredient of many commonly used statistical methods for estimation, testing, and the calculation of confidence intervals. In practice, much application of…
Estimating the Kullback–Leibler risk based on multifold cross-validation
This paper concerns a class of model selection criteria based on cross-validation techniques and estimative predictive densities. Both the simple or…
Location-adjusted Wald statistics for scalar parameters
A novel, algebraic adjustment to the Wald statistic is proposed, delivering significant improvements in inferential performance with only small implementation and computational overhead, predominantly due to additional matrix multiplications.
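The paper's location adjustment itself is not reproduced here; as background, this is a minimal sketch of the standard (unadjusted) Wald statistic that such adjustments refine, using a hypothetical one-sample proportion test:

```python
import math

def wald_statistic(theta_hat, se, theta0=0.0):
    """Standard Wald statistic: (estimate - null value) / standard error."""
    return (theta_hat - theta0) / se

# Hypothetical example: observed proportion 0.6 from n = 100, H0: p = 0.5.
p_hat, n, p0 = 0.6, 100, 0.5
se = math.sqrt(p_hat * (1 - p_hat) / n)  # Wald SE evaluated at the estimate
z = wald_statistic(p_hat, se, p0)        # roughly 2.04
```

In small samples this statistic's normal approximation can be poor, which is the kind of inferential deficiency that location-type adjustments aim to correct.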
Semi-supervised multiple testing
An important limitation of standard multiple testing procedures is that the null distribution should be known. Here, we consider a null-distribution-free approach for multiple testing in the…


  • Tools for Statistical Inference (3rd ed.), New York: Springer-Verlag
  • Bootstrap Methods and Their Application, 1996
  • Bayesian Data Analysis, 1995