# The Highly Adaptive Lasso Estimator

```bibtex
@article{Benkeser2016TheHA,
  title   = {The Highly Adaptive Lasso Estimator},
  author  = {David C. Benkeser and Mark J. van der Laan},
  journal = {2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA)},
  year    = {2016},
  pages   = {689-696}
}
```

Estimation of a regression function is a common goal of statistical learning. We propose a novel nonparametric regression estimator that, in contrast to many existing methods, does not rely on local smoothness assumptions, nor is it constructed using local smoothing techniques. Instead, our estimator respects global smoothness constraints by virtue of falling in a class of right-hand continuous functions with left-hand limits whose variation norm is bounded by a constant. Using empirical…
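The abstract's construction can be illustrated in miniature. A minimal sketch of the idea, not the authors' implementation: expand the covariates into a zero-order spline (indicator) basis with knots at the observed data points, then fit the coefficients with an L1-penalized regression, which bounds the variation norm of the resulting fit. The `indicator_basis` helper is hypothetical, and only the univariate basis terms are shown (the full HAL basis also includes tensor products over covariate subsets); scikit-learn's `LassoCV` stands in for the cross-validated lasso step.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))          # two covariates
y = np.sin(4 * X[:, 0]) + X[:, 1] + rng.normal(scale=0.1, size=200)

# Zero-order spline (indicator) basis: one column per observed knot per
# covariate, phi_j(x) = 1{x >= knot_j}.  Univariate terms only; the full
# HAL basis also includes tensor products over subsets of covariates.
def indicator_basis(X, knots):
    cols = [(X[:, d:d + 1] >= knots[:, d]).astype(float)
            for d in range(X.shape[1])]
    return np.hstack(cols)

knots = X.copy()
H = indicator_basis(X, knots)           # n x (n * d) design matrix

# The L1 penalty on the basis coefficients bounds the variation norm of
# the fitted function; the penalty level is chosen by cross-validation.
fit = LassoCV(cv=5).fit(H, y)
mse = float(np.mean((fit.predict(H) - y) ** 2))
```

Because the knots sit at the observations, the design matrix grows with the sample size, which is the computational bottleneck that follow-up work such as the Selectively Adaptive Lasso addresses.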

## 81 Citations

### The Selectively Adaptive Lasso

- Computer Science, ArXiv
- 2022

This paper builds upon the theory of HAL to construct the Selectively Adaptive Lasso (SAL), a new algorithm which retains HAL’s dimension-free, nonparametric convergence rate but which also scales computationally to massive datasets.

### Universal sieve-based strategies for efficient estimation using machine learning tools.

- Mathematics, Computer Science, Bernoulli: Official Journal of the Bernoulli Society for Mathematical Statistics and Probability
- 2021

Inspired by sieve estimators, two novel universal approaches are proposed for estimating function-valued features that can be analyzed using sieve estimation theory; they are valid under more general conditions on the smoothness of the function-valued features by utilizing flexible estimates that could be obtained using machine learning.

### A Nonparametric Super-Efficient Estimator of the Average Treatment Effect

- Mathematics, Economics
- 2019

Doubly robust estimators are a popular means of estimating causal effects. Such estimators combine an estimate of the conditional mean of the outcome given treatment and confounders…

### Robust inference on the average treatment effect using the outcome highly adaptive lasso

- Economics, Mathematics, Biometrics
- 2019

This work proposes a more flexible alternative, called the outcome highly adaptive lasso: a penalized regression technique for estimating the propensity score that seeks to minimize the impact of instrumental variables on treatment effect estimators.

### Nonparametric Bootstrap Inference for the Targeted Highly Adaptive LASSO Estimator.

- Mathematics
- 2019

The Highly-Adaptive-LASSO Targeted Minimum Loss Estimator (HAL-TMLE) is an efficient plug-in estimator of a pathwise differentiable parameter in a statistical model that at minimal (and possibly…

### Finite Sample Inference for Targeted Learning

- Mathematics
- 2017

The Highly-Adaptive-Lasso (HAL)-TMLE is an efficient estimator of a pathwise differentiable parameter in a statistical model that at a minimum (and possibly only) assumes that the sectional variation…

### Efficient estimation of pathwise differentiable target parameters with the undersmoothed highly adaptive lasso

- Mathematics, The International Journal of Biostatistics
- 2022

It is established that this Spline-HAL-MLE yields an asymptotically efficient estimator of any smooth feature of the functional parameter under an easily verifiable global undersmoothing condition.

### Inference on function-valued parameters using a restricted score test

- Computer Science, Mathematics
- 2021

This work proposes a general framework that leverages a local parameter of the data-generating mechanism, provides a nonparametric extension of the score test for inference on an infinite-dimensional risk minimizer, and demonstrates that the framework is applicable in a wide variety of problems.

### Data-adaptive doubly robust instrumental variable methods for treatment effect heterogeneity

- Mathematics, Economics
- 2018

We consider the estimation of the average treatment effect in the treated as a function of baseline covariates, where there is a valid (conditional) instrument.
We describe two doubly robust (DR)…

## References

Showing 1-10 of 33 references

### Regression Shrinkage and Selection via the Lasso

- Computer Science
- 1996

A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
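The constrained problem described above is equivalent, in its Lagrangian form, to minimizing the residual sum of squares plus an L1 penalty on the coefficients, which drives many coefficients exactly to zero. A small sketch of that sparsity effect, using scikit-learn's `Lasso` (the synthetic data and the penalty level `alpha=0.1` are illustrative choices, not from the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]             # only three truly active coefficients
y = X @ beta + rng.normal(scale=0.5, size=n)

# Lasso: minimize ||y - X b||^2 / (2n) + alpha * ||b||_1.
# The L1 penalty shrinks coefficients and sets many exactly to zero,
# unlike ordinary least squares, which keeps all p coefficients nonzero.
ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
```

Comparing `np.sum(ols.coef_ != 0)` with `np.sum(lasso.coef_ != 0)` shows the variable selection that gives the lasso its name.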

### The Cross-Validated Adaptive Epsilon-Net Estimator

- Mathematics, Computer Science
- 2006

A cross-validated ε-net estimation method that uses a collection of submodels and a collection of ε-nets over each submodel to derive a finite-sample inequality showing that the resulting estimator is as good as an oracle estimator that uses the best submodel and resolution level for the unknown true parameter.

### Cross-Validated Targeted Minimum-Loss-Based Estimation

- Mathematics
- 2011

Targeted maximum likelihood estimation in semiparametric models incorporates adaptive estimation of the relevant part of the data-generating distribution and subsequently carries out a targeted bias reduction by maximizing the log-likelihood over a "clever" parametric working model through the initial estimator.

### Greedy function approximation: A gradient boosting machine.

- Computer Science
- 2001

A general gradient descent boosting paradigm is developed for additive expansions based on any fitting criterion, and specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification.
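For the least-squares case the paradigm above reduces to repeatedly fitting a small tree to the current residuals, which are exactly the negative gradient of the squared-error loss. A minimal sketch under that assumption, using scikit-learn stumps (the stage count and learning rate are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=300)

# Least-squares gradient boosting: each stage fits a depth-1 tree (a
# "stump") to the current residuals and adds a shrunken copy of its
# predictions to the ensemble.
def boost(X, y, n_stages=100, lr=0.1):
    pred = np.full(len(y), y.mean())        # start from the constant fit
    for _ in range(n_stages):
        resid = y - pred                    # negative gradient of L2 loss
        tree = DecisionTreeRegressor(max_depth=1).fit(X, resid)
        pred = pred + lr * tree.predict(X)
    return pred

pred = boost(X, y)
mse = float(np.mean((pred - y) ** 2))
```

Other loss functions (absolute deviation, Huber, multiclass logistic) change only the residual computation, i.e. the negative gradient fitted at each stage.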

### Super Learner

- Computer Science, Statistical Applications in Genetics and Molecular Biology
- 2007

A fast algorithm for constructing a super learner for prediction, which uses V-fold cross-validation to select the weights that combine an initial set of candidate learners.
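The mechanism can be sketched in a few lines: compute V-fold cross-validated predictions for each candidate learner, then regress the outcome on those prediction columns to obtain the combination weights. This sketch uses a non-negative least-squares fit for the weighting step as a simplification (the super learner typically also constrains the weights to sum to one); the two candidate learners are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.uniform(size=(200, 3))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=200)

learners = [LinearRegression(), DecisionTreeRegressor(max_depth=3)]

# Step 1: V-fold cross-validated predictions for each candidate learner,
# so each learner is always evaluated on data it was not trained on.
Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in learners])

# Step 2: choose non-negative combination weights minimizing CV risk.
w, _ = nnls(Z, y)
```

The final predictor is the `w`-weighted combination of the candidate learners refit on the full data.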

### Oracle inequalities for multi-fold cross validation

- Mathematics, Computer Science
- 2006

The results are extended to penalized cross-validation in order to control unbounded loss functions; applications include regression with squared and absolute deviation loss, and classification under Tsybakov's condition.

### A Generally Efficient Targeted Minimum Loss Based Estimator

- Mathematics, Computer Science
- 2015

It is established that this one-step TMLE is asymptotically efficient at any data generating distribution in the model, under very weak structural conditions on the target parameter mapping and model.

### Large-Scale Machine Learning with Stochastic Gradient Descent

- Computer Science, COMPSTAT
- 2010

A more precise analysis uncovers qualitatively different tradeoffs for the case of small-scale and large-scale learning problems.

### Unified Cross-Validation Methodology For Selection Among Estimators and a General Cross-Validated Adaptive Epsilon-Net Estimator: Finite Sample Oracle Inequalities and Examples

- Mathematics, Computer Science
- 2003

Under general conditions, the optimality results now show that the corresponding cross-validation selector performs asymptotically exactly as well as the selector which, for each given data set, makes the best choice (knowing the true full data distribution).

### Random Forests

- Computer Science, Machine Learning
- 2004

Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.