# The Selectively Adaptive Lasso

    @article{Schuler2022TheSA,
      title   = {The Selectively Adaptive Lasso},
      author  = {Alejandro Schuler and Mark J. van der Laan},
      journal = {ArXiv},
      year    = {2022},
      volume  = {abs/2205.10697}
    }

Machine learning regression methods allow estimation of functions without unrealistic parametric assumptions. Although they can achieve exceptional prediction error, most lack the theoretical convergence rates necessary for semi-parametric efficient estimation (e.g. TMLE, AIPW) of parameters like average treatment effects. The Highly Adaptive Lasso (HAL) is the only regression method proven to converge quickly enough for a meaningfully large class of functions, independent of the dimensionality…
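The HAL estimator the abstract refers to is, at its core, a lasso fit over zero-order spline (indicator) basis functions with knots at the observed data points. A minimal sketch in Python, using numpy and scikit-learn; the toy data, the penalty level `alpha`, and the helper `hal_basis` are illustrative assumptions, not from the paper:

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Lasso

def hal_basis(X_train, X_eval):
    """Zero-order HAL basis: one indicator column per (coordinate subset, knot),
    with knots placed at every observed training point."""
    n, d = X_train.shape
    cols = []
    for k in range(1, d + 1):
        for s in combinations(range(d), k):
            # phi_{s,i}(x) = prod_{j in s} 1{ x_j >= X_train[i, j] }
            block = np.all(
                X_eval[:, None, list(s)] >= X_train[None, :, list(s)], axis=2
            )
            cols.append(block.astype(float))
    return np.hstack(cols)  # shape: (len(X_eval), n * (2^d - 1))

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = np.sin(4 * X[:, 0]) + (X[:, 1] > 0.5) + rng.normal(scale=0.1, size=200)

H = hal_basis(X, X)                                # 200 * 3 = 600 basis columns
model = Lasso(alpha=0.01, max_iter=5000).fit(H, y)  # alpha chosen arbitrarily here
```

The L1 constraint on the coefficients corresponds to the bound on the sectional variation norm that underlies HAL's dimension-independent convergence rate; in practice the penalty is chosen by cross-validation rather than fixed as above.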

## References

Showing 1–10 of 26 references.

### The Highly Adaptive Lasso Estimator

- 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), 2016

A novel nonparametric regression estimator is proposed that, in contrast to many existing methods, neither relies on local smoothness assumptions nor is constructed using local smoothing techniques; it respects global smoothness constraints by falling in a class of right-continuous functions with left-hand limits whose variation norm is bounded by a constant.

### Efficient estimation of pathwise differentiable target parameters with the undersmoothed highly adaptive lasso

- The International Journal of Biostatistics, 2022

It is established that this Spline-HAL-MLE yields an asymptotically efficient estimator of any smooth feature of the functional parameter under an easily verifiable global undersmoothing condition.

### Multivariate extensions of isotonic regression and total variation denoising via entire monotonicity and Hardy–Krause variation

- 2019

It is shown that the risk of the entirely monotonic LSE is almost parametric (at most $1/n$ up to logarithmic factors) when the true function is well-approximable by a rectangular piecewise constant entirely monotone function with not too many constant pieces.

### Fast rates for empirical risk minimization over càdlàg functions with bounded sectional variation norm

- 2019

It is shown that, in nonparametric regression over sieves of càdlàg functions with bounded sectional variation norm, this upper bound on the rate of convergence holds for least-squares estimators under the random-design, sub-exponential-errors setting.

### Why Machine Learning Cannot Ignore Maximum Likelihood Estimation

- ArXiv, 2021

It is asserted that one essential idea is for machine learning to integrate maximum likelihood for estimation of functional parameters, such as prediction functions and conditional densities.

### hal9001: Scalable highly adaptive lasso regression in R

- Journal of Open Source Software, 2020

The hal9001 R package provides a computationally efficient implementation of the highly adaptive lasso (HAL), a flexible nonparametric regression and machine learning algorithm endowed with several…

### Semiparametric Theory and Missing Data

- 2006

This book summarizes current knowledge regarding the theory of estimation for semiparametric models with missing data, in an organized and comprehensive manner. It starts with the study of…

### Asymptotics of cross-validated risk estimation in estimator selection and performance assessment

- 2005

### A Generally Efficient Targeted Minimum Loss Based Estimator based on the Highly Adaptive Lasso

- The International Journal of Biostatistics, 2017

It is established that a one-step TMLE using such a super-learner as the initial estimator for each of the nuisance parameters is asymptotically efficient at any data-generating distribution in the model, under weak structural conditions on the target parameter mapping and model and a strong positivity assumption.