Corpus ID: 247315672

Accounting for uncertainty of non-linear regression models by divisive data resorting

@inproceedings{Polar2021AccountingFU,
  title={Accounting for uncertainty of non-linear regression models by divisive data resorting},
  author={Andrew Polar and Michael Poluektov},
  year={2021}
}
This paper focuses on building models of stochastic systems with aleatoric uncertainty. The nature of the considered systems is such that identical inputs can result in different outputs, i.e. the output is a random variable. This paper suggests a novel algorithm of boosted ensemble training of multiple models for obtaining a probability distribution of an individual output as a function of a system input. The deterministic component in the ensemble can be an arbitrarily-chosen regression…
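The abstract only gestures at how the ensemble yields a distribution, so the following is a minimal sketch of one plausible reading of "divisive data resorting": records are repeatedly re-sorted by their residual against the current ensemble and divided among the members, so that each member settles on a different quantile band of the conditional output distribution. The resorting rule, the choice of decision trees as the deterministic component, and the synthetic stochastic system are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of ensemble-based aleatoric-uncertainty estimation:
# sort training records by residual, divide them among K models, refit,
# and repeat. At inference, the K member predictions form an empirical
# sample of the conditional output distribution.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Stochastic system: identical inputs x can yield different outputs y.
n = 4000
X = rng.uniform(0.0, 1.0, size=(n, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0.0, 0.2 + 0.3 * X[:, 0])

K = 8                                            # ensemble size = number of quantile bands
models = [DecisionTreeRegressor(max_depth=4) for _ in range(K)]
groups = np.array_split(rng.permutation(n), K)   # initial arbitrary division

for _ in range(10):                              # resort-and-refit iterations
    for m, idx in zip(models, groups):
        m.fit(X[idx], y[idx])
    # Re-sort records by residual against the mean ensemble prediction and
    # divide into K equal groups (low residuals -> low-quantile member).
    pred = np.mean([m.predict(X) for m in models], axis=0)
    order = np.argsort(y - pred)
    groups = np.array_split(order, K)

# The sorted member predictions at a new input approximate quantiles of
# the conditional distribution of the output.
x_new = np.array([[0.5]])
sample = np.array([m.predict(x_new)[0] for m in models])
print("empirical output quantiles at x=0.5:", np.round(np.sort(sample), 3))
```

Under this reading, the deterministic component (here a shallow tree) is indeed arbitrary: any regression model with a `fit`/`predict` interface could be substituted without changing the resorting loop.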
