## 30 Citations

### A combined strategy for multivariate density estimation

- Mathematics · Journal of Nonparametric Statistics
- 2021

Non-linear aggregation strategies have recently been proposed in response to the problem of how to combine, in a non-linear way, estimators of the regression function (see for instance…

### A Kernel-based Consensual Aggregation for Regression

- Computer Science
- 2021

An optimization method based on a gradient descent algorithm is proposed to efficiently and rapidly estimate the key parameter of the strategy; overall performance further improves with the introduction of smoother kernel functions.
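The strategy this entry describes can be illustrated with a small sketch: base-machine predictions are combined by a Gaussian kernel on the distance in prediction space, and the kernel bandwidth is tuned by gradient descent on a held-out squared error. All function names are illustrative, and the numerical gradient stands in for the paper's analytic one; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def kernel_aggregate(pred_train, y_train, pred_new, h):
    """Kernel-based consensual aggregation sketch: weight each training
    response by a Gaussian kernel on the distance between base-machine
    prediction vectors."""
    # pred_train: (n, M) base-machine predictions on training points
    # pred_new:   (m, M) base-machine predictions on new points
    d2 = ((pred_new[:, None, :] - pred_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * h ** 2))               # Gaussian kernel weights
    return (w @ y_train) / np.maximum(w.sum(1), 1e-12)

def tune_bandwidth(pred_tr, y_tr, pred_val, y_val,
                   h=1.0, lr=0.1, steps=50, eps=1e-4):
    """Crude gradient descent on the bandwidth h against validation MSE,
    using a central finite-difference gradient."""
    for _ in range(steps):
        f = lambda b: np.mean(
            (kernel_aggregate(pred_tr, y_tr, pred_val, b) - y_val) ** 2)
        g = (f(h + eps) - f(h - eps)) / (2 * eps)  # numerical gradient
        h = max(h - lr * g, 1e-3)                  # keep h positive
    return h
```

In this form the bandwidth plays the role of the "key parameter" the abstract refers to; smoother kernels simply swap the `np.exp` weight for another decreasing function of `d2`.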

### Aggregation using input–output trade-off

- Computer Science · Journal of Statistical Planning and Inference
- 2019

### A clusterwise supervised learning procedure based on aggregation of distances

- Computer Science
- 2019

A three-step procedure that automatically aggregates different models adaptively by capturing the clustering structure of the input data, which may be characterized by several statistical distributions.

### Kernel-Based Ensemble Learning in Python

- Computer Science · Inf.
- 2020

KernelCobra is introduced: a non-linear learning strategy for combining an arbitrary number of initial predictors in classification and regression problems where two or more preliminary predictors are available; it systematically outperforms the COBRA algorithm.

### KFC: A clusterwise supervised learning procedure based on the aggregation of distances

- Computer Science
- 2021

A three-step procedure that automatically captures the clustering structure of the input data, which may be characterized by several statistical distributions; the overall model is computed by a consensual aggregation of the models corresponding to the different partitions.
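The three steps described above can be sketched in a few lines: partition the inputs with k-means, fit one least-squares model per cluster, then blend the per-cluster predictions with soft consensus weights. The function name, the deterministic centroid initialization, and the inverse-distance weighting are all simplifying assumptions for illustration, not the KFC procedure's actual choices.

```python
import numpy as np

def kfc_sketch(X, y, X_new, K=2, iters=20):
    """Hypothetical three-step sketch: (1) k-means partition of the
    inputs, (2) one least-squares model per cluster, (3) combine the
    per-cluster predictions with inverse-distance consensus weights."""
    # deterministic init: spread centroids along the first coordinate
    idx = np.argsort(X[:, 0])[np.linspace(0, len(X) - 1, K).astype(int)]
    C = X[idx].astype(float)
    for _ in range(iters):                            # plain Lloyd iterations
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == k].mean(0) for k in range(K)])
    models = []                                       # least squares per cluster
    for k in range(K):
        A = np.c_[np.ones((lab == k).sum()), X[lab == k]]
        models.append(np.linalg.lstsq(A, y[lab == k], rcond=None)[0])
    A_new = np.c_[np.ones(len(X_new)), X_new]
    preds = np.stack([A_new @ b for b in models], axis=1)   # (m, K)
    d = np.sqrt(((X_new[:, None] - C[None]) ** 2).sum(-1)) + 1e-9
    w = 1.0 / d
    w /= w.sum(1, keepdims=True)                      # soft consensus weights
    return (w * preds).sum(1)
```

The soft weights make the overall model a consensual aggregation of the cluster-specific models rather than a hard assignment of each new point to one cluster.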

### A nearest-neighbor-based ensemble classifier and its large-sample optimality

- Mathematics, Computer Science
- 2021

The misclassification error rate of the proposed nonparametric approach, which combines several individual classifiers to construct an asymptotically more accurate classification rule, is shown to be at least as low as that of the best individual classifier.

### Consensual Aggregation on Random Projected High-dimensional Features for Regression

- Computer Science · arXiv
- 2022

A kernel-based consensual aggregation on randomly projected high-dimensional features of predictions for regression allows us to merge a large number of redundant machines, plainly constructed without model selection or cross-validation.
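The projection step this entry relies on can be sketched directly: a Gaussian random matrix maps the high-dimensional vector of base-machine predictions down to a handful of features (in the Johnson–Lindenstrauss spirit), which can then be fed into a kernel aggregation step. The function name and the plain Gaussian construction are assumptions for illustration.

```python
import numpy as np

def project_predictions(pred, dim, rng=0):
    """Gaussian random projection of an (n, M) matrix of base-machine
    predictions down to (n, dim), approximately preserving pairwise
    distances between prediction vectors."""
    g = np.random.default_rng(rng)
    M = pred.shape[1]
    R = g.standard_normal((M, dim)) / np.sqrt(dim)  # scaled so E||xR||^2 = ||x||^2
    return pred @ R
```

Because kernel aggregation only uses distances between prediction vectors, working on the projected features lets a large number of redundant machines be merged at much lower cost.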

### A simple method for combining estimates to improve the overall error rates in classification

- Computer Science, Mathematics · Comput. Stat.
- 2015

It turns out that the proposed combined classifier is optimal in the sense that its overall misclassification error rate is asymptotically less than (or equal to) that of any one of the individual classifiers.

### Standard errors and confidence intervals for variable importance in random forest regression, classification, and survival

- Mathematics · Statistics in Medicine
- 2019

This work proposes a subsampling approach that can be used to estimate the variance of VIMP and to construct confidence intervals; the delete-d jackknife variance estimator, a close cousin, is found to be especially effective at low subsampling rates due to its bias-correction properties.

## References

Showing 1–10 of 52 references

### Combining Different Procedures for Adaptive Regression

- Computer Science, Mathematics · Journal of Multivariate Analysis
- 2000

It is shown by combining various regression procedures that a single estimator can be constructed to be minimax-rate adaptive over Besov classes of unknown smoothness and interaction order, to converge at rate o(n^{-1/2}) when the regression function has a neural net representation, and at the same time to be consistent over all bounded regression functions.

### Adaptive Regression by Mixing

- Computer Science
- 2001

Under mild conditions, it is shown that the squared L2 risk of the estimator based on ARM is basically bounded above by the risk of each candidate procedure plus a small penalty term of order 1/n, giving the automatically optimal rate of convergence for ARM.

### Aggregation for Gaussian regression

- Mathematics, Economics
- 2007

This paper studies statistical aggregation procedures in the regression setting. A motivating factor is the existence of many different methods of estimation, leading to possibly competing…

### Aggregated estimators and empirical complexity for least square regression

- Computer Science, Mathematics
- 2004

### Super Learner

- Computer Science · Statistical Applications in Genetics and Molecular Biology
- 2007

A fast algorithm for constructing a super learner in prediction that uses V-fold cross-validation to select weights for combining an initial set of candidate learners.
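The weight-selection step can be sketched for the two-learner case: collect out-of-fold predictions from each candidate via V-fold cross-validation, then pick the convex weight that minimizes the cross-validated squared error. The function name, the grid search over the simplex, and the deterministic fold assignment are simplifications for illustration; the Super Learner handles arbitrarily many learners with a constrained optimization.

```python
import numpy as np

def superlearner_weights(X, y, learners, V=5, grid=np.linspace(0, 1, 101)):
    """Sketch of Super Learner weight selection for two candidate
    learners: build V-fold out-of-fold predictions, then grid-search
    the convex weight minimizing the cross-validated MSE."""
    n = len(y)
    folds = np.arange(n) % V                   # simple deterministic folds
    Z = np.zeros((n, len(learners)))           # out-of-fold predictions
    for v in range(V):
        tr, te = folds != v, folds == v
        for j, fit in enumerate(learners):
            # each learner is a callable: (X_train, y_train, X_test) -> preds
            Z[te, j] = fit(X[tr], y[tr], X[te])
    mses = [np.mean((a * Z[:, 0] + (1 - a) * Z[:, 1] - y) ** 2) for a in grid]
    a = grid[int(np.argmin(mses))]
    return np.array([a, 1 - a])
```

Because the weights are chosen on held-out predictions, the combined learner performs asymptotically as well as the best candidate even when some candidates overfit.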

### Sequential Procedures for Aggregating Arbitrary Estimators of a Conditional Mean

- Mathematics · IEEE Transactions on Information Theory
- 2008

From the cumulative loss bounds, an oracle inequality is derived for the aggregate estimator for an unbounded response with a suitable moment-generating function; it readily yields convergence rates for aggregation over the unit simplex that are within logarithmic factors of known minimax bounds.
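The sequential aggregation scheme behind such cumulative loss bounds can be sketched with the classic exponentially weighted average forecaster: at each round predict a weighted mean of the expert forecasts, then down-weight each expert by the exponential of its squared loss. The function name and the fixed learning rate `eta` are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def exp_weights_forecast(expert_preds, y, eta=0.5):
    """Exponentially weighted average forecaster: predict a convex
    combination of expert forecasts, then multiply each weight by
    exp(-eta * squared loss) and renormalize."""
    T, M = expert_preds.shape
    w = np.ones(M) / M                    # start from uniform weights
    out = np.empty(T)
    for t in range(T):
        out[t] = w @ expert_preds[t]      # aggregate prediction at round t
        loss = (expert_preds[t] - y[t]) ** 2
        w = w * np.exp(-eta * loss)       # exponential weight update
        w /= w.sum()
    return out
```

The cumulative squared loss of this forecaster exceeds that of the best expert by only a logarithmic-in-M term, which is the kind of bound the oracle inequality builds on.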

### Aggregating regression procedures to improve performance

- Economics
- 2004

A fundamental question regarding combining procedures concerns the potential gain and how much one needs to pay for it in terms of statistical risk. Juditsky and Nemirovski considered the case where…

### Model selection via testing: an alternative to (penalized) maximum likelihood estimators

- Mathematics
- 2006

### Analysis of a Random Forests Model

- Computer Science · J. Mach. Learn. Res.
- 2012

An in-depth analysis of a random forests model suggested by Breiman (2004), very close to the original algorithm, showing in particular that the procedure is consistent and adapts to sparsity, in the sense that its rate of convergence depends only on the number of strong features and not on how many noise variables are present.

### Sparse single-index model

- Mathematics · J. Mach. Learn. Res.
- 2013

This work considers the single-index model estimation problem from a sparsity perspective using a PAC-Bayesian approach and offers a sharp oracle inequality, which is more powerful than the best known oracle inequalities for other common procedures of single-index recovery.