Bootstrap Methods: Another Look at the Jackknife

B. Efron, Annals of Statistics, 1979.
We discuss the following problem: given a random sample X = (X₁, X₂, …, Xₙ) from an unknown probability distribution F, estimate the sampling distribution of some prespecified random variable R(X, F) on the basis of the observed data x. (Standard jackknife theory gives an approximate mean and variance in the case R(X, F) = θ(F̂) − θ(F), θ some parameter of interest.) A general method, called the "bootstrap", is introduced and shown to work…
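The resampling scheme described in this abstract can be sketched in a few lines: draw n points with replacement from the empirical distribution F̂ and recompute the statistic. This is a minimal illustration, not the paper's full treatment; the function and variable names are hypothetical.

```python
import random
import statistics

def bootstrap_distribution(x, stat, n_boot=2000, seed=0):
    """Approximate the sampling distribution of R(X, F) = stat(F_hat) - stat(F)
    by resampling from the empirical distribution F_hat of the observed data."""
    rng = random.Random(seed)
    theta_hat = stat(x)                # plays the role of theta(F) under F_hat
    reps = []
    for _ in range(n_boot):
        resample = [rng.choice(x) for _ in x]   # n draws from F_hat
        reps.append(stat(resample) - theta_hat) # bootstrap replicate of R
    return reps

x = [2.1, 3.5, 1.7, 4.2, 2.9, 3.8, 2.4, 3.1]
reps = bootstrap_distribution(x, statistics.mean)
# the spread of reps approximates the sampling variability of the mean
```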

Data-driven selection of regressors and the bootstrap

Consider the classical linear model with n observations, a fixed design matrix X, and i.i.d. Gaussian residuals with zero mean and positive variance. Suppose it is believed that some of the columns

A Bootstrap Procedure for Estimating the Lundberg Coefficient

Efron (1979) introduced a new resampling method, the bootstrap: let Xₙ = (X₁, …, Xₙ) be a given sample of i.i.d. random variables (r.v.'s) with distribution function (d.f.) F. From Xₙ construct

Bootstrapping the Kaplan—Meier Estimator

Abstract Randomly censored data consist of i.i.d. pairs of observations (Xᵢ, δᵢ), i = 1, …, n; if δᵢ = 0, Xᵢ denotes a censored observation, and if δᵢ = 1, Xᵢ denotes an exact "survival" time, which
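The natural bootstrap for this setting resamples the pairs (Xᵢ, δᵢ) with replacement and recomputes the Kaplan–Meier estimate each time. The sketch below is a minimal, assumed implementation (function names hypothetical), processing deaths before censorings at tied times.

```python
import random

def kaplan_meier(times, deltas, t):
    """Kaplan-Meier survival estimate S_hat(t) from censored pairs (X_i, delta_i)."""
    # process observations in time order; deaths (delta = 1) come first at ties
    order = sorted(range(len(times)), key=lambda i: (times[i], -deltas[i]))
    at_risk = len(times)
    s = 1.0
    for i in order:
        if times[i] > t:
            break
        if deltas[i] == 1:            # exact survival time: shrink S_hat
            s *= 1.0 - 1.0 / at_risk
        at_risk -= 1                  # one fewer subject at risk either way
    return s

def bootstrap_km(times, deltas, t, n_boot=500, seed=0):
    """Resample the pairs (X_i, delta_i) with replacement, recompute S_hat(t)."""
    rng = random.Random(seed)
    n = len(times)
    reps = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        reps.append(kaplan_meier([times[i] for i in idx],
                                 [deltas[i] for i in idx], t))
    return reps
```

The spread of the replicates gives a bootstrap assessment of the variability of the Kaplan–Meier estimate at the chosen time point.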

Richardson Extrapolation and the Bootstrap

Abstract Simulation methods [particularly Efron's (1979) bootstrap] are being applied more and more frequently in statistical inference. Given data (X₁, …, Xₙ) distributed according to P, which

A note on proving that the (modified) bootstrap works

Let X₁, …, Xₙ be a sample of independent, identically distributed (i.i.d.) random variables with common distribution function F, and suppose X₁*, …, Xₙ* is a bootstrap sample of i.i.d. random variables from the empirical

Bootstrap choice of tuning parameters

Abstract Consider the problem of estimating θ = θ(P) based on data xₙ from an unknown distribution P. Given a family of estimators Tₙ,β of θ(P), the goal is to choose β ∈ I so that the resulting

19 Bootstrap methodology

Nonparametric Bootstrap Tests: Some Applications

In a series of papers Beran (1984, 1986, 1988) proposed bootstrap techniques for hypothesis testing. These tests are concerned with the following situation. Let {X₁, X₂, …, Xₙ} be an i.i.d. sample
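A simple instance of a nonparametric bootstrap test, in the spirit of the resampling tests discussed above, shifts the data so that the null hypothesis holds for the resampling distribution and then compares the observed statistic with its bootstrap replicates. This is an illustrative sketch, not Beran's procedure; all names are hypothetical.

```python
import random
import statistics

def bootstrap_test_pvalue(x, mu0, n_boot=2000, seed=0):
    """Bootstrap test of H0: E[X] = mu0 using the statistic |mean - mu0|.
    The data are shifted so that the resampling distribution satisfies H0."""
    rng = random.Random(seed)
    xbar = statistics.mean(x)
    t_obs = abs(xbar - mu0)                     # observed test statistic
    shifted = [xi - xbar + mu0 for xi in x]     # mean of shifted data is mu0
    exceed = sum(
        1 for _ in range(n_boot)
        if abs(statistics.mean([rng.choice(shifted) for _ in x]) - mu0) >= t_obs
    )
    return exceed / n_boot                      # bootstrap p-value

data = [5.1, 4.9, 5.2, 5.0, 4.8, 5.3]
p_far = bootstrap_test_pvalue(data, mu0=0.0)    # H0 far from the truth
p_near = bootstrap_test_pvalue(data, mu0=5.0)   # H0 close to the truth
```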

The Efficiency and Consistency of Approximations to the Jackknife Variance Estimators

Abstract The problem considered is reducing the computation required for general delete-d jackknife variance estimators. The delete-d jackknife estimator was proved consistent (Shao and Wu 1986), and in this
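For reference, the delete-d jackknife variance estimator itself can be written down directly: recompute the statistic on every size-(n − d) subsample and rescale the spread of those values. A brute-force sketch (exact, but with C(n, d) recomputations, which is precisely the cost the paper aims to reduce; names hypothetical):

```python
import math
from itertools import combinations

def delete_d_jackknife_var(x, stat, d):
    """Delete-d jackknife variance estimate:
        v = (n - d) / (d * C(n, d)) * sum over all size-d subsets S of
            (stat(x with S deleted) - average of those values)^2
    """
    n = len(x)
    vals = []
    for S in combinations(range(n), d):
        drop = set(S)
        vals.append(stat([x[i] for i in range(n) if i not in drop]))
    m = sum(vals) / len(vals)
    return (n - d) / (d * math.comb(n, d)) * sum((v - m) ** 2 for v in vals)
```

For the sample mean this reproduces the familiar s²/n regardless of d, which is a quick sanity check on the formula.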



Error Analysis by Replaced Samples

By error analysis of a computation t on data d, we mean an assessment of the possible values of t which might be obtained for various hypothetical data d* similar to d. For example, if d is supposed

An Introduction to Multivariate Statistical Analysis

Preface to the Third Edition. Preface to the Second Edition. Preface to the First Edition. 1. Introduction. 2. The Multivariate Normal Distribution. 3. Estimation of the Mean Vector and the Covariance

On the generalized jackknife and its relation to statistical differentials

SUMMARY The flexibility of the definition of the first-order generalized jackknife is exploited so that its relation to the method of statistical differentials can be seen. The estimators presented

Using Subsample Values as Typical Values

Abstract The subsample values of a statistic t are the values of t for subsets of the whole sample. Subsample values may be used as indicators of variability of t. For real valued statistics t,

Estimation of Error Rates in Discriminant Analysis

Several methods of estimating error rates in discriminant analysis are evaluated by sampling methods. Multivariate normal samples are generated on a computer, having various true probabilities of

The jackknife: a review

SUMMARY Research on the jackknife technique since its introduction by Quenouille and Tukey is reviewed. Both its role in bias reduction and in robust interval estimation are treated. Some

Bibliography on estimation of misclassification

Articles, books, and technical reports on the theoretical and experimental estimation of probability of misclassification are listed for the case of correctly labeled or preclassified training data.

The advanced theory of statistics


A Note on Estimating the Variance of the Sample Median

Abstract Estimation of the variance of the sample median based on small samples is discussed, and short tables are provided to facilitate calculation of the estimates.
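A bootstrap alternative to such tables is straightforward: resample the data, recompute the median each time, and take the variance of the replicates. A minimal sketch (names hypothetical, not the tabulated estimator of this paper):

```python
import random
import statistics

def bootstrap_var_median(x, n_boot=2000, seed=0):
    """Bootstrap estimate of Var(sample median): resample with replacement,
    recompute the median, and take the variance of the replicates."""
    rng = random.Random(seed)
    meds = [statistics.median([rng.choice(x) for _ in x])
            for _ in range(n_boot)]
    return statistics.pvariance(meds)
```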